There’s a quiet but very real shift happening in the AI ecosystem, and it isn’t about GPUs, model architectures, or agent workflows: it’s about the fabric connecting everything together. Today, Celero Communications put a loud marker down in that space with a fresh $100M Series B led by CapitalG, bringing its total funding to $140M and signaling that the “plumbing” of AI infrastructure is finally getting the investor attention it deserves.
Celero sits in one of the most critical layers of the AI stack: coherent DSPs, the digital signal processors purpose-built for optical data movement between AI accelerators and across distributed data centers. As clusters expand from a few racks to sprawling planetary-scale compute fabrics, traditional interconnects become the bottleneck. The industry has already felt that pain: every leap in transformer scale demands orders-of-magnitude increases in bandwidth, not incremental upgrades. GPUs aren’t the problem anymore; everything in between them is.
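To see why those jumps are orders of magnitude rather than increments, a purely illustrative back-of-envelope helps. The Python sketch below assumes plain data-parallel training with a ring all-reduce, fp16 gradients, 1,024 GPUs, and a half-second step budget; every number is a made-up round figure for illustration, not anything from Celero.

```python
# Back-of-envelope sketch (illustrative assumptions, not Celero's figures):
# why interconnect bandwidth has to scale with model size. Assumes plain
# data-parallel training with a ring all-reduce over fp16 gradients.

def allreduce_bytes_per_gpu(params: float, bytes_per_grad: int, num_gpus: int) -> float:
    """Approximate bytes each GPU sends per training step in a ring all-reduce."""
    payload = params * bytes_per_grad
    return 2 * (num_gpus - 1) / num_gpus * payload

STEP_BUDGET_S = 0.5  # assume gradient exchange must hide inside a 0.5 s step

for params in (7e9, 70e9, 700e9):  # hypothetical model sizes: 7B, 70B, 700B parameters
    traffic = allreduce_bytes_per_gpu(params, bytes_per_grad=2, num_gpus=1024)
    gbps_needed = traffic * 8 / STEP_BUDGET_S / 1e9
    print(f"{params / 1e9:>4.0f}B params -> ~{gbps_needed:,.0f} Gb/s per GPU for gradients alone")
```

Even with these toy numbers, a 10x larger model needs roughly 10x the sustained per-GPU bandwidth just to keep gradient exchange off the critical path, which is exactly the pressure optical interconnect is being asked to absorb.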
Celero’s pitch is almost startlingly straightforward: without new optical DSP innovation, hyperscalers, cloud providers, and frontier model labs can’t scale infrastructure efficiently. The company’s coherent DSP platform promises dramatic improvements in bandwidth density, energy efficiency, and total infrastructure cost. Those three constraints now drive AI infrastructure roadmaps more aggressively than compute FLOPs themselves.
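Energy per bit compounds the same way. The sketch below uses hypothetical pJ/bit operating points and a hypothetical link count to show how a DSP efficiency gain shows up as megawatts at fleet scale; none of these figures are Celero’s specs.

```python
# Illustrative sketch (assumed numbers, not Celero's specs): how DSP energy per bit
# translates into fleet-level power once multiplied across a large AI fabric.

def link_power_w(line_rate_gbps: float, picojoules_per_bit: float) -> float:
    """Power drawn by one optical link's DSP at a given line rate and energy per bit."""
    return line_rate_gbps * 1e9 * picojoules_per_bit * 1e-12

LINKS = 200_000  # hypothetical count of 800G links in a large distributed fabric

for pj in (30.0, 15.0, 5.0):  # hypothetical energy-per-bit operating points
    per_link = link_power_w(800, pj)
    print(f"{pj:>4.0f} pJ/bit -> {per_link:>4.1f} W per 800G link, "
          f"~{LINKS * per_link / 1e6:.1f} MW across {LINKS:,} links")
```

Cutting energy per bit by a few picojoules looks marginal on a single transceiver, but across hundreds of thousands of links it is the difference of whole megawatts of facility power and cooling.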
Investors seem convinced. The participation roster reads like a map of high-conviction, deep-tech capital: Sutter Hill Ventures, Valor Equity Partners, Maverick Silicon, Atreides Management—and now Alphabet’s CapitalG, whose presence alone signals Google’s acknowledgment that interconnect is becoming a first-order strategic priority in the race to scale multimodal and agentic workloads.
Leadership also isn’t experimental or speculative. The founders, Nariman Yousefi and Oscar Agazzi, helped architect multiple generations of networking and DSP silicon at Marvell, Inphi, Broadcom, and ClariPhy. In other words, this isn’t a startup betting on a theory. It’s a veteran team that has shipped real hardware into hyperscale environments and knows exactly where the industry’s pain points are buried.
The timing of this investment aligns with a bigger market turning point. AI infrastructure isn’t just getting larger—it’s becoming *distributed*. Multi-region GPU fabrics, optical networking between model inference endpoints, data-center-to-data-center latency constraints: they all require a new model for coherent interconnect. And that’s precisely where Celero wants to become the default.
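The latency side of that model is governed by physics as much as by silicon: light in fiber covers roughly 200 km per millisecond, so distance alone sets a floor that any coherent link has to live within. A minimal sketch, with purely illustrative distances:

```python
# Minimal sketch of the physics behind data-center-to-data-center latency:
# light in fiber travels at roughly 2/3 of c, so distance alone sets a latency floor.
# Distances are illustrative, not any particular operator's topology.

SPEED_IN_FIBER_KM_PER_MS = 200  # ~2e8 m/s expressed as kilometers per millisecond

for km in (10, 100, 1000):  # campus, metro, and regional scale links
    one_way_ms = km / SPEED_IN_FIBER_KM_PER_MS
    print(f"{km:>5} km -> {one_way_ms:.2f} ms one-way, {2 * one_way_ms:.2f} ms round trip, "
          "before any switching or DSP latency")
```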
There’s also a subtle undercurrent worth noting. This investment suggests the industry may be transitioning from a GPU-only arms race toward a holistic, systems-level competition spanning compute, memory, networking, optics, orchestration, and soon silicon photonics. NVIDIA’s recent moves, including its focus on NVLink and Spectrum-X interconnects, already hinted at this. Celero makes the same case from the independent-component side of the battlefield.
The next generation of AI infrastructure won’t be defined solely by the number of GPUs, but by how fast, how efficiently, and how intelligently those GPUs talk to each other. And companies like Celero now stand at the center of that shift.
It feels like the kind of milestone that, a couple of years from now, we’ll look back on and recognize as a signal: the moment when optical DSPs stopped being a niche and became a foundation of large-scale accelerated computing.