As AI systems stretch beyond single packages and monolithic accelerators, the industry’s next hard problem isn’t compute—it’s how everything connects. Eliyan Corporation just made a strong case that it wants to be at the center of that solution.
The high-performance connectivity specialist announced it has raised $50 million in strategic funding from an unusually heavyweight lineup of investors across the AI and compute ecosystem. New backers include AMD, Arm, Coherent, and Meta, joined by returning investors Samsung Catalyst Fund and Intel Capital. The round signals broad, cross-industry confidence in Eliyan’s technology roadmap at a moment when interconnect efficiency is becoming one of the defining constraints of AI system design.
This isn’t just another funding announcement. It’s a coordinated endorsement from companies that design CPUs, GPUs, AI infrastructure, optical systems, and hyperscale platforms, all of which face the same scaling limits from different angles.
Why Interconnect Is the New Battleground
For years, AI performance gains came largely from bigger chips and faster memory. That model is breaking down.
Modern AI systems are increasingly disaggregated: multiple dies in a package, multiple packages on a board, boards across racks, and racks across data centers. Each boundary introduces latency, power loss, and cost. As models grow, those penalties compound.
Eliyan’s core thesis is that the next wave of AI progress depends on scalable, power-efficient connectivity across every level of the system, from die-to-die (D2D) links inside a package to chip-to-chip (C2C) and rack-scale interconnects.
The company’s investors appear to agree.
The participation of AMD, Arm, and Meta—companies with very different roles in the AI stack—suggests that Eliyan’s technology is being viewed not as a niche optimization, but as infrastructure-level IP that could influence future system architectures.
What the $50M Will Accelerate
According to Eliyan, the proceeds will be used to:
- Accelerate manufacturing and qualification of next-generation interconnect IP
- Scale commercialization of its NuLink™ PHY and NuGear™ chiplet families
- Expand ecosystem partnerships across silicon, packaging, and systems
- Support deployments across AI infrastructure, HPC, and edge computing
In other words, this round is about moving from technical leadership to volume adoption—a critical transition for any deep-tech semiconductor company.
NuLink: Attacking the Memory and I/O Wall
At the heart of Eliyan’s portfolio is NuLink™, its high-speed PHY technology designed to break through the memory and I/O limitations that increasingly bottleneck AI accelerators.
The NuLink portfolio spans multiple connectivity domains:
Silicon-Proven 64G Die-to-Die (D2D)
Eliyan’s NuLink™ D2D at 64G is already silicon-proven and deployed—more than two years ahead of broader industry adoption, according to the company. That early execution matters, especially in an industry where standards often lag real-world needs.
This D2D technology supports next-generation memory interconnects, including SPHBM4e, an emerging JEDEC standard capable of supporting HBM5-class bandwidths on standard packaging. By aligning with SPHBM4e early, Eliyan positions itself as a practical enabler of future HBM scaling, not just a standards participant.
In a market where memory bandwidth often caps AI performance, efficient on-package interconnect is no longer optional.
Next-Generation Chip-to-Chip (C2C) Connectivity
Beyond the package, Eliyan is targeting the harder problem: high-bandwidth, energy-efficient chip-to-chip links.
Its roadmap includes:
- NuLink-XS: 32G–64Gbps single-ended C2C connectivity
- NuLink-XD: 224Gbps differential SerDes, with a path to 448G
These technologies are designed to link multiple packages or modules across substrates, boards, or full systems—supporting large-scale, disaggregated AI architectures.
Eliyan claims its NuLink-X family delivers roughly 2× the energy efficiency of alternative solutions, a critical metric as power budgets become the dominant limiter in AI system design.
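Why energy efficiency dominates here is easy to see with a back-of-envelope calculation: link power scales as energy-per-bit times bit rate, so at terabit-class bandwidths every picojoule per bit shows up directly in the power budget. The sketch below illustrates the arithmetic only; the pJ/bit figures are hypothetical assumptions, not Eliyan's published specs.

```python
def link_power_watts(energy_pj_per_bit: float, bandwidth_tbps: float) -> float:
    """Link power = energy-per-bit x bit rate.

    1 pJ/bit at 1 Tbps equals exactly 1 W (1e-12 J/bit * 1e12 bit/s),
    so the unit conversion cancels and we can multiply directly.
    """
    return energy_pj_per_bit * bandwidth_tbps

# Hypothetical figures for illustration only (not vendor specs):
# a 12.8 Tbps link at 2 pJ/bit vs. a 2x-more-efficient 1 pJ/bit design.
baseline = link_power_watts(2.0, 12.8)   # -> 25.6 W for the link alone
improved = link_power_watts(1.0, 12.8)   # -> 12.8 W at the same bandwidth
print(f"baseline: {baseline} W, improved: {improved} W")
```

Multiplied across the thousands of links in a rack-scale AI system, a 2× difference in pJ/bit translates into a meaningful share of the facility power budget, which is why energy-per-bit has become a headline metric for interconnect vendors.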
NuGear: Scale-Up Connectivity for AI Systems
While NuLink focuses on PHY-level connectivity, NuGear™ addresses system-level scale-up networking.
The NuGear chiplet families target 1.6Tbps to 12.8Tbps link bandwidths, aimed squarely at the most demanding AI accelerator and memory expansion architectures. These are environments where:
- Latency directly impacts training efficiency
- Power consumption constrains rack density
- Reliability becomes existential at scale
NuGear is designed to complement optical engines, acknowledging a reality many vendors gloss over: future AI systems will rely on hybrid electrical-optical architectures, and the transition between the two must be tightly optimized.
Strategic Investors, Strategic Signals
The composition of this funding round may be as important as the dollar amount.
- AMD brings perspective from high-performance CPUs and GPUs increasingly built around chiplet architectures.
- Arm underpins much of the world’s compute IP and is deeply invested in scalable, energy-efficient AI platforms.
- Meta operates some of the largest AI infrastructure deployments on the planet, where interconnect efficiency directly affects operating cost.
- Samsung Catalyst Fund and Intel Capital return from earlier rounds, signaling continued confidence in Eliyan’s execution.
Mohamed Awad, EVP of Arm’s Cloud AI Business Unit, framed the investment around ecosystem collaboration—an acknowledgment that no single company can solve AI scaling alone.
Intel Capital’s Srini Ananth highlighted chiplet-based architectures as a foundational shift, not a passing trend—one that requires new thinking about connectivity at every level.
Samsung Catalyst Fund’s Dede Goldschmidt pointed to Eliyan’s momentum since its prior round in 2024, emphasizing execution across both PHY and chiplet businesses.
Taken together, these comments reflect a shared industry concern: interconnect is now as strategic as compute itself.
Competitive Landscape: Crowded, but Unforgiving
Eliyan is not operating in a vacuum. The race to define next-generation AI interconnects includes:
- Traditional SerDes and PHY vendors
- Chiplet ecosystem players
- Optical interconnect specialists
- Hyperscalers developing custom solutions
What differentiates Eliyan is its attempt to span the entire connectivity stack, from die-to-die through rack-scale links, with a consistent focus on energy efficiency and manufacturability.
That breadth is both an opportunity and a risk. Executing across multiple domains requires deep engineering discipline and tight coordination with ecosystem partners. But if successful, it positions Eliyan as a platform provider, not just an IP vendor.
Timing Matters—and This Timing Is Good
The funding comes at a moment when several trends are converging:
- PCIe, CXL, and memory standards are evolving rapidly
- Chiplet-based designs are moving from experimental to mainstream
- AI system power budgets are hitting practical limits
- Optical and electrical interconnect strategies are being rethought
By advancing silicon-proven solutions ahead of full standardization, Eliyan is betting that early movers will shape how systems are actually built, not just how they’re specified on paper.
The Bottom Line
Eliyan’s $50 million strategic funding round is a clear signal that AI’s next scaling frontier is connectivity, and that the industry is willing to back companies tackling that problem head-on.
With support from AMD, Arm, Coherent, Meta, Samsung Catalyst Fund, and Intel Capital, Eliyan is emerging as a serious player in the race to define how chiplets, memory, and accelerators talk to each other in the AI systems of tomorrow.
If compute is the engine of AI, interconnect is the circulatory system. Eliyan is betting that making that system faster, leaner, and more scalable will be essential to the next decade of AI progress—and its investors appear to agree.
Power Tomorrow’s Intelligence — Build It with TechEdgeAI