Marvell Technology is making a decisive bet on how future AI data centers will be wired. The semiconductor company announced it will acquire XConn Technologies, a specialist in PCIe and CXL switching silicon, in a deal valued at approximately $540 million. The move expands Marvell’s reach in high-performance switching while reinforcing its position in the fast-emerging UALink ecosystem—a critical piece of next-generation AI infrastructure.
The acquisition comes as AI systems rapidly outgrow the single-rack architectures that defined earlier accelerator deployments. Hyperscalers and cloud providers are now designing multi-rack, tightly coupled systems that need ultra-low latency, high-bandwidth interconnects to keep thousands of accelerators operating as a unified compute pool. That shift has turned switching silicon—once a supporting actor—into a central character.
Why XConn Matters Now
XConn brings to Marvell a portfolio of advanced PCIe and CXL switches, along with a seasoned engineering team steeped in high-performance I/O design. The company claims the industry’s highest port-count PCIe 5 and PCIe 6 switching lineup, with products already shipping and next-generation silicon in customer hands.
That timing is critical. PCIe has long been the backbone of server connectivity, but AI workloads are pushing it into new territory. Meanwhile, CXL—originally positioned as a way to make memory more flexible—is becoming essential for memory disaggregation at scale. Together, PCIe and CXL are evolving from plumbing to performance enablers.
For Marvell, XConn fills a key gap. While Marvell already has strong SerDes technology, a broad process roadmap, and deep relationships with hyperscalers, it lacked a comprehensive, in-house PCIe and CXL switching portfolio at the scale demanded by modern AI systems. XConn supplies that missing layer.
UALink: The Bigger Strategic Play
Beyond PCIe and CXL, the acquisition bolsters Marvell’s push into UALink, an open industry standard designed specifically for scale-up accelerator connectivity. UALink aims to solve a growing problem: how to efficiently connect large numbers of XPUs—GPUs, NPUs, and custom accelerators—across racks without the latency penalties of traditional fabrics.
UALink builds on decades of PCIe innovation while introducing enhancements tuned for AI-scale bandwidth, latency, and reach. The goal is to let multiple accelerators behave like one giant processor, enabling more flexible resource sharing and higher utilization—key priorities as AI infrastructure costs continue to climb.
By folding XConn’s switching expertise into its existing UALink team, Marvell is effectively betting that scale-up fabrics, not just scale-out networking, will define the next era of AI data centers. It’s a notable contrast to rivals that remain more focused on Ethernet-based approaches alone.
A Growing Portfolio—and a Growing TAM
The deal also meaningfully expands Marvell’s total addressable market. PCIe switching, once a relatively stable and modest business, is becoming a growth engine as accelerators proliferate. CXL adds another layer of opportunity, especially as enterprises and cloud providers look to pool memory resources to reduce costs and improve performance.
Marvell is positioning itself to offer an unusually complete CXL stack. By combining its existing CXL memory expansion controllers with XConn's CXL switches, the company says it can deliver one of the industry's most comprehensive CXL portfolios for AI workloads. That breadth could appeal to system architects looking to reduce vendor complexity as designs grow more intricate.
XConn is already working with more than 20 customers, a sign that demand is not theoretical. Its PCIe 5 and CXL 2.0 switches are in production today, while PCIe 6 and CXL 3.1 parts are sampling—putting Marvell squarely in the middle of the next upgrade cycle.
Financials and Timing
From a financial perspective, the acquisition is structured as roughly 60% cash and 40% stock, with the equity portion tied to Marvell's 20-day volume-weighted average price (VWAP) and expected to total about 2.5 million shares. The transaction is slated to close in early 2026, pending regulatory approvals.
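For readers unfamiliar with VWAP-based stock consideration, the mechanics are straightforward: the equity portion of the deal value is divided by the average trading price, weighted by volume, over the reference window. The sketch below illustrates this with entirely hypothetical prices and volumes; it is not Marvell's actual market data or the deal's precise terms.

```python
# Illustrative sketch of how a 40% stock portion of a ~$540M deal could
# translate into a share count via a 20-day VWAP. All market figures below
# are hypothetical placeholders, not actual Marvell trading data.

DEAL_VALUE = 540_000_000      # approximate total deal value (USD)
STOCK_FRACTION = 0.40         # ~40% of consideration paid in stock

def vwap(prices, volumes):
    """Volume-weighted average price over a trading window."""
    total_volume = sum(volumes)
    return sum(p * v for p, v in zip(prices, volumes)) / total_volume

# Hypothetical 20 trading days of closing prices and share volumes.
prices = [85 + 0.2 * i for i in range(20)]
volumes = [9_000_000 + 100_000 * i for i in range(20)]

stock_portion = DEAL_VALUE * STOCK_FRACTION       # ~$216M paid in equity
shares_issued = stock_portion / vwap(prices, volumes)
print(f"VWAP: ${vwap(prices, volumes):.2f}, shares: {shares_issued:,.0f}")
```

With these placeholder inputs the implied issuance lands in the neighborhood of 2.5 million shares, consistent with the figure Marvell disclosed; the actual count depends on where the stock trades during the measurement period.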
Revenue contribution won’t be immediate. Marvell expects XConn’s products to begin contributing in the second half of fiscal 2027, with the business becoming accretive to non-GAAP earnings around that time. By fiscal 2028, XConn-related revenue is projected to reach approximately $100 million.
That timeline aligns with broader industry cycles. PCIe 6 and CXL 3.1 adoption is expected to accelerate later this decade, particularly as AI systems move beyond today’s GPU clusters toward more modular, composable architectures.
How This Stacks Up Against the Competition
Marvell’s move highlights a broader industry race. NVIDIA has been expanding its NVLink ecosystem and pushing deeper into system-level design. Broadcom continues to invest heavily in networking and custom silicon for hyperscalers. Intel, through CXL, is trying to reassert relevance in memory-centric architectures.
By acquiring XConn and doubling down on UALink, Marvell is carving out a distinct position: not just a networking or SerDes supplier, but a connectivity platform company spanning scale-out, scale-up, memory, and accelerator fabrics. If successful, that strategy could make Marvell a go-to partner for customers designing AI infrastructure from the ground up.
The Bigger Picture
Taken together with Marvell’s pending acquisition of Celestial AI, the XConn deal signals an aggressive expansion into the heart of AI system architecture. Rather than chasing short-term accelerator demand, Marvell is investing in the connective tissue that determines how efficiently those accelerators can work together.
As AI models grow larger and more distributed, that connective tissue may prove just as important as raw compute. If Marvell executes well, XConn could become a cornerstone of its effort to shape how next-generation data centers are built—and who controls the standards that define them.
Power Tomorrow’s Intelligence — Build It with TechEdgeAI