As artificial intelligence workloads continue to push data center infrastructure to its limits, the bottleneck is increasingly shifting from compute to connectivity. That’s the message from Marvell Technology, which plans to unveil one of the industry’s most extensive portfolios of AI data center interconnect solutions at OFC 2026, taking place March 15–19 at the Los Angeles Convention Center.
The semiconductor company says it will present more than 20 demonstrations at its booth, showcasing technologies designed to tackle what it sees as the next major challenge in hyperscale infrastructure: moving massive volumes of AI data efficiently between processors, memory, and distributed clusters.
As AI models grow larger and training workloads span thousands of GPUs, connectivity technologies—from chip-level interconnects to long-distance optical links—are becoming just as important as raw compute power.
Connectivity Becomes the AI Infrastructure Bottleneck
For decades, improvements in CPU and GPU performance defined the pace of computing innovation. But in modern AI data centers, the challenge is often less about processing data and more about moving it quickly enough between compute nodes.
Large-scale AI systems rely on tightly interconnected clusters of accelerators that must exchange data continuously during model training and inference. As those clusters scale to tens of thousands of chips, the underlying networking architecture becomes critical.
Marvell argues that addressing this challenge requires a new generation of specialized semiconductor connectivity solutions capable of delivering higher bandwidth, lower latency, and greater power efficiency.
The company’s strategy focuses on building an end-to-end connectivity stack that spans everything from on-package chip communication to rack-scale optical networking.
What Marvell Is Bringing to OFC
At OFC 2026, Marvell plans to demonstrate the breadth of its connectivity technologies across multiple layers of AI infrastructure.
Among the key technologies being showcased:
3nm Die-to-Die Interconnect
Marvell will demonstrate 40G die-to-die connectivity IP, built on a 3-nanometer process, designed for high-speed communication between chips within advanced packaging environments.
Die-to-die links are particularly important for AI accelerators using high-bandwidth memory (HBM) and multi-chip architectures, where minimizing latency and power consumption is essential.
PCIe 8.0 SerDes
The company will also present PCIe 8.0 SerDes technology capable of running at 256 gigatransfers per second (GT/s).
This next-generation interface aims to help hyperscale data centers move toward higher-bandwidth connections between compute nodes, storage devices, and accelerators.
As AI infrastructure becomes increasingly disaggregated—separating compute, memory, and storage across systems—high-performance interconnect standards like PCIe 8.0 are becoming critical.
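To put the headline figure in perspective, a rough bandwidth calculation shows what 256 GT/s means at the link level. The sketch below is back-of-the-envelope arithmetic only: it ignores FLIT framing, encoding, and protocol overhead, so real-world throughput is somewhat lower, and the x16 lane count is an illustrative assumption.

```python
# Raw unidirectional PCIe bandwidth: each transfer carries one bit per lane.
# This ignores FLIT/encoding and protocol overhead (real throughput is lower).
def pcie_bandwidth_gbps(gt_per_s: float, lanes: int = 16) -> float:
    """Approximate raw unidirectional bandwidth in GB/s for a PCIe link."""
    return gt_per_s * lanes / 8  # bits per transfer -> bytes

print(pcie_bandwidth_gbps(64))   # PCIe 6.0, x16 -> 128.0 GB/s
print(pcie_bandwidth_gbps(256))  # PCIe 8.0, x16 -> 512.0 GB/s
```

By this rough measure, a PCIe 8.0 x16 link offers roughly four times the raw bandwidth of a PCIe 6.0 x16 link, which is what makes it attractive for disaggregated compute, memory, and storage fabrics.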
CXL-Based Memory Expansion
Marvell’s Structera CXL platform will also be on display, offering Compute Express Link (CXL)-based near-memory acceleration and memory expansion.
CXL has emerged as one of the most promising technologies for addressing the memory bottleneck in AI workloads, enabling processors to access shared pools of memory across the data center.
This approach can dramatically improve resource utilization and system scalability.
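The utilization argument for pooled memory can be illustrated with a toy model. The numbers below are hypothetical, and the model deliberately ignores latency and placement: it only contrasts DRAM stranded on individual servers against a single shared pool of the same total capacity, which is the basic idea behind CXL memory expansion.

```python
# Toy model of memory stranding vs. pooling. All figures are illustrative
# assumptions, not measurements of any real CXL deployment.
def stranded_memory_gb(local_gb: int, demands_gb: list[int]) -> int:
    """DRAM left idle when each workload is confined to its own server."""
    return sum(max(local_gb - d, 0) for d in demands_gb)

def pooled_shortfall_gb(local_gb: int, demands_gb: list[int]) -> int:
    """With a shared pool, only aggregate supply vs. aggregate demand matters."""
    total_supply = local_gb * len(demands_gb)
    return max(sum(demands_gb) - total_supply, 0)

demands = [100, 700, 300, 500]           # GB needed by four workloads
print(stranded_memory_gb(512, demands))  # 636 GB idle across the servers
print(pooled_shortfall_gb(512, demands)) # 0 GB short: the pool covers all four
```

Note that in the per-server case the 700 GB workload cannot even fit in a single 512 GB server, while the pooled view shows the cluster as a whole has capacity to spare.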
1.6 Terabit Optical Connectivity
Optical networking remains a cornerstone of hyperscale infrastructure, and Marvell is pushing bandwidth limits here as well.
The company will showcase its Ara T DSP, which it describes as the industry’s first 8×200G transmit-retimed optics (TRO) PAM4 digital signal processor.
The chip supports 1.6-terabit optical connections, delivering improved performance and lower power consumption compared with traditional fully retimed optics.
Next-Generation Optical DSP
Also part of the showcase is Marvell’s 3nm 1.6T PAM4 optical DSP, capable of delivering 200Gbps electrical and optical interfaces for AI scale-out networking.
These technologies are designed to handle the enormous traffic flows generated by distributed AI clusters.
Photonic Fabrics and AI Cluster Scaling
Another highlight of the showcase is the Marvell Photonic Fabric, a platform designed to enable multi-rack optical scaling across AI clusters.
Photonic fabrics use optical links rather than traditional electrical connections to move data between servers and racks. This can dramatically increase bandwidth while reducing power consumption and latency.
As AI training clusters grow to thousands or even tens of thousands of accelerators, optical networking architectures are becoming essential for maintaining performance.
Data Center Networking for AI Workloads
Beyond chip and optical interconnects, Marvell will also showcase its Teralynx switch silicon, designed for high-performance data center switching.
The platform supports 800Gb Ethernet (800GE) networking and includes advanced congestion management features tailored for AI traffic patterns.
AI workloads generate highly synchronized communication bursts across compute nodes, which can easily overwhelm traditional networking architectures.
Switch technologies designed specifically for AI traffic aim to reduce these bottlenecks.
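A quick sketch shows why these synchronized bursts, often called incast, are hard on switches. In collective operations such as all-reduce, many nodes transmit to the same destination at once, and the receiving port must buffer whatever it cannot drain immediately. The figures below are illustrative assumptions, not measurements of any particular switch.

```python
# Incast sketch: many synchronized senders burst into one switch port.
# All numbers are illustrative assumptions.
def incast_drain(senders: int, burst_mb: float, port_gbps: float,
                 buffer_mb: float) -> tuple[float, float]:
    """Return (milliseconds to drain the burst, MB exceeding the buffer)."""
    total_mb = senders * burst_mb
    drain_ms = total_mb * 8 / port_gbps   # MB -> Mb; Mb / Gbps = ms
    overflow_mb = max(total_mb - buffer_mb, 0)
    return drain_ms, overflow_mb

# 64 accelerators each burst 1 MB into a single 800GE port with a 16 MB buffer:
drain, overflow = incast_drain(64, 1.0, 800.0, 16.0)
print(f"{drain:.2f} ms to drain, {overflow:.0f} MB exceeds the buffer")
```

Even at 800 Gbps, a simultaneous 64 MB burst takes well over half a millisecond to drain, and most of it exceeds the buffer in this toy scenario; congestion-management features exist to pace senders before that overflow turns into drops and retransmissions.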
Lower-Cost Optical Interconnects for Data Centers
Marvell will also highlight COLORZ pluggable optics, its optical modules designed for data center interconnect (DCI) applications.
These modules support 800G ZR/ZR+ connectivity across C-band and L-band optical frequencies, offering high-speed inter-data center links while potentially reducing capital costs compared with traditional optical transport systems.
For hyperscalers building global AI infrastructure, reducing networking costs at scale is becoming a key priority.
Observability for AI Infrastructure
Managing large AI clusters requires more than fast hardware—it also requires visibility into network performance.
To address this need, Marvell will showcase its RELIANT telemetry platform, which provides end-to-end monitoring, analytics, and predictive insights across the company’s connectivity ecosystem.
Telemetry systems like this allow operators to identify bottlenecks, anticipate failures, and optimize performance in real time.
As AI infrastructure grows more complex, these kinds of observability tools are becoming essential.
A Growing Ecosystem Around AI Connectivity
Marvell’s presence at OFC will extend beyond its own booth.
The company says more than 80 demonstrations across the event floor will feature technologies powered by Marvell silicon through its ecosystem of partners.
That ecosystem approach reflects the complexity of modern AI infrastructure, which relies on collaboration between semiconductor vendors, system manufacturers, hyperscale cloud providers, and optical networking specialists.
Why OFC Matters for the Future of AI Infrastructure
The Optical Fiber Communication Conference and Exhibition, widely known as OFC, is one of the world’s most influential events focused on optical networking and communications technology.
Historically centered on telecom infrastructure, the conference has increasingly become a hub for innovations tied to hyperscale computing and AI data centers.
With AI driving unprecedented demand for bandwidth, companies across the semiconductor and networking industries are racing to develop technologies capable of supporting the next generation of digital infrastructure.
The Bigger Picture: AI Is Redesigning Data Centers
The scale of modern AI clusters is forcing a fundamental redesign of data center architectures.
Training the latest foundation models often requires thousands of GPUs operating in synchronized clusters, exchanging massive volumes of data during each training iteration.
That demand is pushing networking technologies toward terabit-scale optical links, advanced chip interconnects, and highly optimized switching architectures.
Companies like Marvell are betting that connectivity—not just compute—will determine how quickly AI infrastructure can scale in the coming years.
If that prediction proves correct, the quiet plumbing of the data center may soon become one of the most important battlegrounds in the AI era.