Lambda Expands NVIDIA Collaboration, Large-Scale Deployments, and New AI Infrastructure Offerings

Lambda brings frontier-scale infrastructure to its AI cloud platform

SAN FRANCISCO--(BUSINESS WIRE)--Lambda, the Superintelligence Cloud, today announced at NVIDIA GTC 2026 that it is a launch partner for NVIDIA’s Vera CPU platform and NVIDIA STX, that it is deploying NVIDIA Quantum-X800 InfiniBand Photonics Co-Packaged Optics (CPO) networking in an AI factory with more than 10,000 NVIDIA Blackwell Ultra GPUs, and that it is launching Lambda Bare Metal Instances as a core cloud offering for frontier AI workloads.

“Lambda’s mission is to expand humanity’s energy and computational capacity,” said Stephen Balaban, Co-founder and CEO of Lambda. “Today’s announcements advance that mission, enabling the world’s top AI teams with the infrastructure they need to do their best work.”

With custom Olympus CPU cores and up to 1.2 TB/s of memory bandwidth, Vera keeps thousands of sandboxed AI environments running in parallel, ensuring NVIDIA GPUs stay fully utilized across reinforcement learning (RL) and agentic workloads. It gives frontier labs access to a compute architecture designed for the demands of next-generation agentic AI systems.

As the industry shifts toward agentic AI, long-term memory and the processing of massive context windows have become critical inference bottlenecks. Powered by NVIDIA Vera Rubin, BlueField-4, Spectrum-X Networking, and NVIDIA AI software, NVIDIA STX provides a modular reference architecture for rack-scale AI storage platforms, accelerating inference, analytics, and training through next-generation hardware integration and optimized KV-cache management.

As AI factories scale into the tens of thousands of AI accelerators, network architecture becomes as important as the accelerators themselves. Co-Packaged Optics (CPO) networking delivers higher efficiency, longer sustained application runtime, and greater resiliency than traditional pluggable transceivers.

Lambda is leading one of the largest deployments to date of NVIDIA Quantum-X800 InfiniBand Photonics Co-Packaged Optics switches, in an AI factory with more than 10,000 NVIDIA Blackwell Ultra GPUs. The deployment builds on Lambda’s November announcement of early CPO adoption.

“The race to build AI factories isn’t won on GPU counts alone. Network architecture is what determines whether those systems can perform at scale,” said Dave Salvator, Director of Accelerated Computing at NVIDIA. “Getting this right is what allows AI infrastructure to power services used by hundreds of millions of people around the world.”

Lambda’s new Bare Metal Instances, paired with custom networking and system-level optimizations, provide infrastructure and research teams with direct hardware access and eliminate virtualization overhead for distributed training workloads.

The new offering strengthens Lambda's full-stack AI infrastructure platform, expanding the tools available to frontier labs, enterprises, and hyperscalers. Infrastructure teams gain complete control over the hardware stack while benefiting from Lambda’s reliability, uptime, and operational expertise.

Today's announcements reflect Lambda’s decade-long collaboration with NVIDIA, as well as the company’s commitment to continuously develop its Superintelligence Cloud: a platform engineered for fast deployment, density, and cooling to meet modern AI demands and maximize the intelligence produced per watt.

About Lambda

Lambda, the Superintelligence Cloud, is a leader in AI cloud infrastructure serving tens of thousands of customers.

Founded in 2012 by machine learning engineers published at NeurIPS and ICCV, Lambda builds supercomputers for AI training and inference.

Our customers range from AI researchers to enterprises and hyperscalers.

Lambda's mission is to make compute as ubiquitous as electricity and give everyone the power of superintelligence. One person, one GPU.

Contacts

Media Contact:
pr@lambda.ai

Lambda
