L-com

Meeting the Speed and Connectivity Demands of Data Centers


Executive Summary

Key Takeaways:

  • Global internet traffic and AI workloads are rapidly increasing bandwidth demands, pushing data centers toward 100G, 400G, 800G, and even 1.6T interconnects.
  • Hybrid and multicloud architectures require fast, reliable interconnect between on-prem and cloud environments.
  • Edge computing reduces local latency but increases overall data movement, amplifying demand for high-speed fiber and low-latency links.
  • Fiber selection matters: single-mode for long-reach backbones and multimode for short-reach intra-facility links.

Data centers are regarded as paragons of computing power and data storage, but just as significantly, they rank among the most active nodes in global data communications networks.

Global internet traffic now totals several zettabytes per year, and the majority of that traffic, by one estimate as much as 95 percent of it, flows through data centers. The compound annual growth rate (CAGR) of internet traffic has commonly been estimated at roughly 20 percent, and recent trends suggest growth is likely to accelerate well beyond that.

These trends have necessitated ceaseless data center evolution. While news coverage tends to focus on processors and memory systems, data centers also require ultra-reliable data communications technologies with ever-increasing bandwidth and throughput to keep pace with growing traffic. That includes everything from cables and transceivers to power and connectivity solutions.

Key Factors Driving Bandwidth Demand

Increases in global traffic volume (expressed in petabytes, exabytes and, increasingly, zettabytes), data center processing power (increasingly measured in terms of energy consumption, commonly gigawatts for hyperscalers) and data center storage (with the largest facilities storing exabytes) are linked by a number of data communications trends. One of the most significant contributors to network traffic growth and to data center usage is artificial intelligence (AI). Major search engines increasingly surface AI-generated summaries on some queries (though traditional results still dominate overall); pointedly labeled as such, those summaries are only the most publicly visible evidence of the technology's surging use. AI is transforming almost every endeavor supported by electronics, including finance, manufacturing, healthcare, retail, and agriculture.

Artificial Intelligence and Data-Intensive Workloads

AI systems require vast amounts of data to train models and, once trained, those models can analyze endless, voluminous streams of additional data – data that is typically gathered elsewhere and shipped to data centers for analysis.

The soaring demand for AI is illustrated by projections of the global demand for data center capacity. Available data center capacity is currently about 60 gigawatts (GW). McKinsey projects that data center capacity required by 2030 is likely to grow to somewhere between 171 and 219 GW and, if the most optimistic growth predictions occur, the need might surge to 298 GW.
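
As a rough back-of-the-envelope check, those projections imply compound annual growth somewhere between roughly 23 and 38 percent. The minimal sketch below assumes the approximately 60 GW figure is a 2025 baseline (the baseline year is an assumption, not stated in the projections above):

```python
# Implied compound annual growth of data center capacity.
# Assumption (not stated above): ~60 GW baseline in 2025, projected targets in 2030.
baseline_gw, years = 60, 5
for label, target_gw in [("low", 171), ("high", 219), ("accelerated", 298)]:
    cagr = (target_gw / baseline_gw) ** (1 / years) - 1
    print(f"{label}: {target_gw} GW by 2030 -> ~{cagr:.0%} CAGR")
# low: 171 GW by 2030 -> ~23% CAGR
# high: 219 GW by 2030 -> ~30% CAGR
# accelerated: 298 GW by 2030 -> ~38% CAGR
```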

More data centers are being planned and built, while those currently in operation are adopting technologies that increase the efficiency of running AI workloads, including graphics processing units (GPUs), neural processing units (NPUs), tensor processing units (TPUs), non-volatile memory solid-state drives (NVMe SSDs) and high-bandwidth memory (HBM).

These advanced technologies are brought together in a data center architecture marked by massive parallel processing and high-throughput data exchange between GPUs and servers.

High-Speed Interconnects

Data communications in and out of data centers require ultra-high-speed interconnects – 100 gigabit Ethernet (100G), 400G, and 800G – and some hyperscalers are moving to 1.6 terabit Ethernet (1.6T). Even for internal communications, the largest data centers may need 100G or better.

Single-Mode vs Multimode Fiber

While we are emphasizing high-speed optical connectors and transceivers, due consideration should also be given to the optical fiber itself. Single-mode fiber is ideal for long-range, high-speed backbone connections. It uses a single light path, which minimizes signal loss and supports distances exceeding 10 kilometers, making it well suited to linking multiple data centers or connecting to internet service providers.

Multimode fiber is cost-effective for relatively short distances (up to about 150 m at 400G, or up to about 400 m at 10G). That makes multimode highly appropriate for internal data center and campus networks, connecting servers, switches, and storage systems.
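
The rule of thumb can be captured in code. The snippet below is an illustrative, hypothetical helper (the function name and thresholds are ours, taken from the approximate reach figures above), not a substitute for vendor reach tables:

```python
def recommend_fiber(distance_m: float, data_rate_gbps: float) -> str:
    """Rough fiber choice based on the approximate reach figures cited above."""
    if distance_m > 400:
        return "single-mode"   # long-reach backbone, inter-facility, or ISP links
    if data_rate_gbps >= 400 and distance_m > 150:
        return "single-mode"   # multimode reach at 400G is only ~150 m
    return "multimode"         # cost-effective for short intra-facility runs

# Examples: a 12 km link to another site vs. a 30 m in-row 400G hop
print(recommend_fiber(12_000, 100))  # -> single-mode
print(recommend_fiber(30, 400))      # -> multimode
```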

Hybrid & Multi-Cloud

Another contributing factor to growing network traffic is the adoption of hybrid and multicloud architectures. The former describes reliance on a combination of private (on-premises) and public cloud data centers; the latter is the simultaneous use of multiple public cloud providers.

Increases in data sharing and data transfers are implicit in both hybrid and multicloud environments. As data traverses both cloud and on-prem environments, fast, reliable interconnect is a necessity.

The Edge Effect

Perhaps counter-intuitively, edge computing also contributes to an increase in both network traffic and data center usage, and has its own requirements for communications network performance.

Edge computing by definition means moving compute power away from data centers and closer to where data gets collected, often with the aim of getting results in real time (or close enough to it). Relying on local computational resources means reducing or completely eliminating the need to transmit data and wait for a response.

Edge computing does reduce latency and ease backhaul traffic, but it can still increase the aggregate data exchanged between edge and core because of AI and analytics replication.

Some workloads, notably AI workloads, are still best performed in data centers. Applications at the edge – especially applications that rely on real-time analytics – therefore demand low-latency, high-bandwidth links.

Trends in AI and edge computing amplify each other. AI can process vastly more data than ever before, which feeds the impulse to collect ever more data from which AI can produce new, actionable insights.

Streaming and virtualization also add to data center load. A practical effect of video streaming and the widespread adoption of virtual desktops is that data center workloads are becoming increasingly bandwidth-intensive.

Datacom Infrastructure

The trends outlined above combine to increase the volumes of data transmitted to data centers, accompanied of course by an increase in the results returned. The process of analyzing the streams of data fed into data centers, however, also involves an extraordinary amount of internal data movement. Reliable, high-speed, high-bandwidth communications is a necessity for data centers’ internal networks as well.

In the enterprise data center market, 25G to 100G is often sufficient. The connector market, on pace to be worth $12 billion by the end of 2025, is dominated by demand from large public data centers that rely heavily on 400G and 800G.

Other data communications support systems include everything from transceivers to cables and, given the critical importance of network reliability, even power supplies (see Table 1). Reliability is critical regardless of the size or scale of the data center.

Table 1: Critical datacom support equipment

Optical Transceivers and Adapters
  • SFP, QSFP, and CFP modules supporting 10G to 800G data rates
  • Adapters and couplers used in fiber interconnects
High-Speed Ethernet Cables
  • Cat6A (10G/100 m), Cat7/7A (ISO Class F/FA, 10G/100 m; limited adoption), Cat8 (25/40G up to 30 m)
  • Shielded (F/UTP, S/FTP) options reduce noise in high-density runs
Fiber Optic Cables and Assemblies
  • Single-mode and multimode fiber optic cables for high-bandwidth backbones
  • Patch cables, MTP/MPO assemblies for fast deployment in high-performance networks
Power & Surge Protection
  • Inline and rack-mount surge protectors for sensitive networking equipment
  • Power cords and PDU accessories designed for server racks

Signal interference, network outages, cable damage, and inefficient routing can lead to costly downtime, compromised data, and safety risks. These and other challenges can be minimized by choosing the right industrial-grade power and high-speed connectivity products.

In enterprise data centers, which tend to be physically smaller, cable runs of 100 meters or less are common. In those cases, 10 gigabit Ethernet (10G) has been the norm; 10G may still suffice for some access links, but 25G access (with a 100G spine) is increasingly common.

Category 6 augmented (Cat6A) and Cat7 bulk cables are ideal for enterprise networks requiring robust, high-speed connections. Cat6A cables support data rates up to 10 Gbps at 500 MHz, while Cat7 cables extend performance even further, supporting frequencies up to 600 MHz with enhanced shielding for superior noise resistance.

With improved crosstalk performance and durable construction, Cat6A and Cat7 cables provide the consistent throughput needed for applications such as real-time data processing, high-volume VoIP, HD video conferencing, and virtualization. Note that Cat7 is an ISO/IEC Class F specification not recognized by the Telecommunications Industry Association (TIA), so Cat6A with RJ45 connectors dominates in practice.
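
For illustration, copper selection can be sketched the same way. The lookup below is hypothetical (the names and figures are drawn from the approximate ratings quoted in this paper, and real-world reach depends on installation quality and noise):

```python
# Approximate copper category capabilities, per the figures cited above.
COPPER_OPTIONS = [
    # (category, max data rate in Gbps, max run length in meters)
    ("Cat6A", 10, 100),
    ("Cat7/7A", 10, 100),   # ISO Class F/FA; limited adoption vs. Cat6A with RJ45
    ("Cat8", 40, 30),
]

def copper_candidates(required_gbps: float, run_length_m: float) -> list:
    """Return categories that satisfy both the required data rate and run length."""
    return [cat for cat, rate, reach in COPPER_OPTIONS
            if rate >= required_gbps and reach >= run_length_m]

print(copper_candidates(10, 90))   # -> ['Cat6A', 'Cat7/7A']
print(copper_candidates(25, 20))   # -> ['Cat8']
```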

Surge Protection

All data communications networks are vulnerable to electrical surges caused by lightning strikes, power grid switching, or inductive load switching. The transient voltages associated with these events can destroy Ethernet ports, switches, and other sensitive electronics. Industrial surge protectors act as the first line of defense for network infrastructure. These devices clamp high voltages before they reach valuable electronics, protecting everything from Ethernet lines and coax cables to power inputs. 

Every enterprise facility, including data centers, is likely to be equipped with video monitoring for safety and security reasons. Those cameras need both power and surge protection; PoE and PoE++ surge protection solutions are excellent choices to support video monitoring systems.

Supply Chain Challenges

Rapid growth in any market can tax supply chains. Explosive growth within the data center market has indeed had this effect on data center-centric supply chains. Rapidly evolving needs have outpaced the capacity of legacy manufacturing lines, creating a mismatch between supply and demand. Such mismatches can be remedied by adding production capacity.

Meanwhile distribution and shipping companies must make adjustments of their own. The electronics industry has come to rely on ‘just-in-time’ (JIT) delivery, but JIT inventory management strategies are particularly sensitive to supply chain disruptions. Businesses tend to respond by increasing buffer stocks of mission-critical items, which can exacerbate supply problems for those unable to stock up in time.

Recent geopolitical tensions and deliberate interference with long-standing global trade relationships have made market volatility even worse. The result has been shipping imbalances, port congestion, and rising transportation costs that disrupt global shipping and fulfillment.

Furthermore, shortages of skilled labor across the manufacturing and logistics sectors have imposed even more drag on the market.

All of this market disruption has consequences for high-speed data communications infrastructure.

The high-speed data communications infrastructure market is experiencing supply-and-demand mismatches across the supply chain. Global shortages of integrated circuits (ICs), optical transceivers, and raw materials (e.g., copper, plastics) have led to price increases and longer lead times for Ethernet cables, fiber optics, and transceivers.

One result of this discrepancy is an increase in spec-driven procurement. In practice this means buyers are increasingly demanding pre-tested, certified, high-performance products. Their ultimate goal is to avoid compatibility and latency issues.

Strategies to Ensure Bandwidth and Performance

While there are persistent challenges, data center operators and the suppliers that serve them are far from powerless. There are multiple strategies and tactics that, taken together, can alleviate supply chain disruption so that ongoing bandwidth and high-speed communications needs can be met.

Increasing Production Capacity and Nearshoring – Adding production capacity solves many problems, but doing so takes time. An associated measure is nearshoring production to improve reliability and reduce geopolitical risk exposure. This, too, takes time.

Vendor Diversification and Modular Design – An obvious strategy available to data center operators is to work with multiple vendors and regional suppliers to reduce single-point dependencies.

In rapidly evolving markets, equipment suppliers are continuously innovating – improving existing product performance and characteristics, introducing new features and capabilities, and bringing superior replacements to market. Vendors are investing in modular, scalable, energy-efficient, high-speed connectivity solutions. There are good reasons to stick with the tried-and-true, but when the tried-and-true is subject to persistent shortages, there’s extra incentive to upgrade.

Long-Term Planning and Strategic Stockpiling – Again, persistent shortages may require a shift in inventory strategies. Strategic stockpiling and long-term purchasing agreements provide some inoculation against shortages.

Building Reliable, High-Bandwidth Data Centers for the Future

AI, edge computing, and other technology trends are combining to create a demand for more data, a demand for more data analysis, and a need to move ever-increasing volumes of data, ever more often.

This naturally increases the need for data communications infrastructure. Rapidly increasing demand is in itself enough to tax a market’s ability to supply what’s needed, but the global technology market is also being roiled by geopolitical forces, which constrain supply availability and drive up pricing.

There are measures that data centers can take to weather market volatility, and all of them involve working directly with suppliers.

Performance and reliability are metrics critical to every data center’s bottom line, and data centers are only as performant and reliable as the infrastructure they are built upon.


References

  1. EdgeOptic https://edgeoptic.com/global-internet-traffic-growth-forecast-looking-forward-from-2024/
  2. Cisco Newsroom https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2018/m02/global-cloud-index-projects-cloud-traffic-to-represent-95-percent-of-total-data-center-traffic-by-2021.html
  3. McKinsey & Company https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/ai-power-expanding-data-center-capacity-to-meet-growing-demand
  4. Cignal AI https://cignal.ai/2025/01/over-20-million-400g-800g-datacom-optical-module-shipments-expected-for-2024/

Frequently Asked Questions

Why is bandwidth demand in data centers rising?
Bandwidth demand is rising due to global traffic growth, AI training workloads, edge computing, streaming, virtualization, and hybrid cloud architectures—all of which require high-speed data exchange.

When should single-mode fiber be used instead of multimode fiber?
Single-mode fiber is ideal for long-distance, high-speed backbone links. Multimode fiber is more cost-effective for short-distance internal data center connections.

How do AI workloads affect data center networking?
AI models process massive datasets and need fast communication between GPUs and servers, requiring 100G–800G links and low-latency optical transceivers.

Does edge computing reduce data center traffic?
Edge reduces local latency but increases sync, analytics, and AI inference traffic between edge devices and core data centers—driving demand for high-bandwidth, low-latency links.

What cabling do enterprise and hyperscale data centers typically use?
Enterprise environments typically use Cat6A (10G), Cat7 (10G+), and multimode fiber for internal links, while hyperscalers use 100G, 400G, and 800G fiber solutions.

Why is surge protection important for data center networks?
Surges caused by lightning, grid switching, or electrical faults can damage switches, servers, and Ethernet ports. Industrial surge protectors protect network uptime and equipment reliability.