
Cerebras Powers Perplexity Sonar with Industry’s Fastest AI Inference

Revolutionizing Search with Unmatched Speed and Efficiency

SUNNYVALE, Calif.--(BUSINESS WIRE)--Cerebras Systems, the pioneer in accelerating generative AI, today announced its pivotal role in powering Sonar, a groundbreaking model optimized for Perplexity search. Built on the robust foundation of Llama 3.3 70B, Sonar represents a significant advancement in answer quality, factuality, and readability, setting new standards for user satisfaction in search technology. The new Sonar search experience, powered by Cerebras, is available to Perplexity Pro users starting today.

"Our partnership with Cerebras has been instrumental in bringing Sonar to life," said Denis Yarats, CTO, Perplexity. "Cerebras' cutting-edge AI inference infrastructure has enabled us to achieve unprecedented speeds and efficiency, setting a new standard for search. We look forward to continuing to work with a company that shares our commitment to innovation and user satisfaction."

**Unparalleled Inference Infrastructure**

At the heart of Sonar's blazing-fast performance is Cerebras' state-of-the-art AI inference infrastructure. As the world’s fastest AI inference provider, Cerebras enables Sonar to process 1,200 tokens per second, delivering near-instant answer generation. This speed ensures that users receive accurate and relevant information in real time, transforming the way they search and discover information.
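To put the 1,200 tokens-per-second figure in perspective, a short back-of-the-envelope sketch follows. The 300-token answer length is an illustrative assumption, not a figure from this announcement, and the calculation ignores network and time-to-first-token overhead:

```python
# Back-of-the-envelope generation time at the reported Sonar throughput
# on Cerebras hardware. Answer length is an illustrative assumption.

TOKENS_PER_SECOND = 1_200  # reported Sonar throughput on Cerebras

def generation_time(answer_tokens: int, tps: int = TOKENS_PER_SECOND) -> float:
    """Seconds to generate an answer of the given length,
    ignoring network latency and time-to-first-token."""
    return answer_tokens / tps

print(f"{generation_time(300):.2f} s")  # a 300-token answer in 0.25 s
```

At this rate, even a long, multi-paragraph answer completes in well under a second of pure generation time, which is what makes the "near-instant" characterization plausible.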

**A New Era in Search Technology**

The launch of Perplexity Sonar powered by Cerebras marks a monumental shift in the search technology landscape. By combining advanced model training with unparalleled inference speed, we are redefining what is possible in terms of answer quality and user experience. Whether you are seeking quick answers to complex questions or need reliable information on the go, Sonar delivers with unmatched precision and speed.

**Commitment to Innovation**

“At Cerebras, we are dedicated to pushing the boundaries of what is possible in AI and machine learning,” said Andrew Feldman, CEO and co-founder, Cerebras. “Our collaboration with Perplexity on Sonar advances our vision of bringing fast, accurate, and reliable information to everyone through real-time search.”

**Join Us in This Exciting Journey**

We invite you to experience the future of search with Perplexity Sonar, powered by Cerebras, available now for Pro users to set as their default search model. Whether you are a tech enthusiast, a busy professional, or someone who simply values quick and accurate information, Sonar is designed to meet your needs. Stay tuned for more updates and join us in this exciting journey as we continue to show how inference speed powers new, instant AI experiences.

For more information, please visit https://www.perplexity.ai/hub/blog/meet-new-sonar.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world’s largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to form the largest AI supercomputers in the world, and they make placing models on those supercomputers simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions to develop pathbreaking proprietary models and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on-premises. For further information, visit cerebras.ai or follow us on LinkedIn or X.

Contacts

Cerebras Systems

