GigaOm Sonar for Vector Databases Positions Vespa as a Leader and Forward Mover for the Second Consecutive Year

TRONDHEIM, Norway--Vespa.ai, the creator of a leading platform for building and deploying large-scale, real-time AI applications powered by big data, today announced that it has been recognized as a Leader and Forward Mover in the 2025 GigaOm Sonar for Vector Databases for the second consecutive year.

The GigaOm report underscores Vespa’s leadership in enabling fast, scalable AI applications. It highlights Vespa’s innovative methods for processing text and structured data, which enable organizations to index and search vast amounts of data efficiently. Vespa offers advanced support for technologies such as real-time vector search and binary data processing, delivering unmatched flexibility and cost-efficiency. Vespa Cloud extends these capabilities with pre-built tools and seamless data integration, enabling businesses to unlock deeper insights and deliver smarter, faster user experiences.

Andrew Brust, Analyst, GigaOm: “Vespa’s low-latency engine can handle hundreds of thousands of requests per second and is designed for online use cases that involve AI and data. It’s a comprehensive offering in which users define and index data with fields composed of vectors, tensors, unstructured text, and structured data to query across them seamlessly.”
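For readers unfamiliar with the model the quote describes, the sketch below shows how such an application might be defined. It is a minimal, hypothetical example in Vespa's schema language (the document type, field names, and vector size are invented for illustration and are not taken from the report): one document type combining a structured field, a full-text field, and a dense vector field, with a rank profile that blends lexical and vector relevance.

    schema product {
        document product {
            # structured field, kept in memory for filtering and grouping
            field price type int {
                indexing: attribute | summary
            }
            # unstructured text field, full-text indexed (enables bm25)
            field title type string {
                indexing: index | summary
            }
            # dense vector field with an approximate nearest-neighbor index
            field embedding type tensor<float>(x[384]) {
                indexing: attribute | index
                attribute {
                    distance-metric: angular
                }
            }
        }
        rank-profile hybrid {
            inputs {
                query(q) tensor<float>(x[384])
            }
            # blend a lexical signal (bm25) with a vector signal (closeness)
            first-phase {
                expression: bm25(title) + closeness(field, embedding)
            }
        }
    }

A query can then combine a nearestNeighbor operator over the embedding field with ordinary text matching and structured filters in a single YQL statement, which is the "query across them seamlessly" behavior the quote refers to.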

Jon Bratseth, CEO and Founder, Vespa: “We are pleased to be recognized as a leader in this rapidly growing and highly relevant market for the second consecutive year. The GigaOm Sonar report provides valuable insights into the role of vector databases as part of a broader AI solution rather than as standalone technology. This perspective aligns perfectly with our vision for Vespa as a comprehensive platform for building AI applications, seamlessly integrating vector database capabilities and beyond.”

The GigaOm Sonar for Vector Databases can be downloaded here: https://content.vespa.ai/gigaom-report-2025

About Vespa

Vespa.ai is a powerful platform for developing real-time search-based AI applications. Once built, these applications are deployed through Vespa’s large-scale, distributed architecture, which efficiently manages data, inference, and logic for applications handling large datasets and high concurrent query rates. Vespa delivers all the building blocks of an AI application, including a vector database, hybrid search, retrieval-augmented generation (RAG), natural language processing (NLP), machine learning, and support for large language models (LLMs) and vision language models (VLMs). It is available both as a managed service and as open source.

Contacts

Media Contact
Tim Young
timyoung@vespa.ai