ATLANTA--(BUSINESS WIRE)--AnswerRocket, a leader in GenAI-powered analytics, today announced support for both the Google Gemini family and the Anthropic Claude family of LLMs, further solidifying the company’s commitment to LLM agnosticism. This strategic move empowers AnswerRocket’s customers to select the language model that best fits their unique business needs and to seamlessly mix and match models to support diverse use cases.
AnswerRocket’s platform integrates with a wide range of LLMs, enabling enterprises to create conversational AI assistants for data analysis. Supported models include:
- OpenAI: GPT-4o, GPT-4o mini
- Google: Gemini Flash, Gemini Pro (models with function-calling support)
- Anthropic: Claude 3.5 Sonnet, Claude 3 Opus
Why LLM Agnosticism Matters
The AnswerRocket platform is designed to be model-agnostic, allowing businesses to leverage different LLMs to address specific analytical challenges. Companies can optimize their data insights by using the most appropriate model for each task, whether it involves structured or unstructured data. In today’s rapidly evolving AI landscape, this LLM-agnostic approach provides optimization, customization, and resilience.
“AnswerRocket’s philosophy centers around composable AI solutions, enabling customers to have complete flexibility in their choice of cloud provider, LLMs, and models used within their AI assistants,” said Alon Goren, CEO of AnswerRocket. “This composability allows businesses to build AI solutions tailored to their specific requirements, ensuring they can leverage the best tools and technologies available in the market. We’re giving customers the power to balance model speed, cost, and capabilities as new models quickly arrive on the market.”
Flexible, Composable AI Solutions
AnswerRocket accelerates AI implementation by providing robust yet flexible solutions that allow:
- Model Selection: Select different LLMs for chat experiences, narrative composition, embeddings, and evaluating responses.
- Settings Adjustment: Customize LLM settings, including token limits, and maintain up-to-date model cost figures.
- Model Mixing: Combine models within workflows to tackle complex data environments and data analysis use cases (see the illustrative sketch below).
- Model Monitoring: Monitor LLM activity through open tools like LangSmith.
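To make the mix-and-match idea concrete, here is a minimal, hypothetical Python sketch of LLM-agnostic routing. It is not AnswerRocket’s product API: the task names, model choices, and helper function are illustrative assumptions, and it simply shows how different tasks could be served by different providers through the public OpenAI and Anthropic SDKs.

```python
# Illustrative sketch only -- not AnswerRocket's actual API.
# Demonstrates the general idea of LLM-agnostic routing: each task
# (chat, narrative composition, evaluation) is served by whichever
# provider/model the configuration assigns to it.
from dataclasses import dataclass

import anthropic  # pip install anthropic
import openai     # pip install openai


@dataclass
class ModelChoice:
    provider: str        # "openai" or "anthropic"
    model: str           # e.g. "gpt-4o" or "claude-3-5-sonnet-20240620"
    max_tokens: int = 1024


# Hypothetical per-task configuration: mix and match models freely.
TASK_MODELS = {
    "chat": ModelChoice("openai", "gpt-4o"),
    "narrative": ModelChoice("anthropic", "claude-3-5-sonnet-20240620"),
    "evaluation": ModelChoice("openai", "gpt-4o-mini", max_tokens=256),
}


def complete(task: str, prompt: str) -> str:
    """Route a prompt to whichever model is configured for the task."""
    choice = TASK_MODELS[task]
    if choice.provider == "openai":
        client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model=choice.model,
            max_tokens=choice.max_tokens,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    if choice.provider == "anthropic":
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
        resp = client.messages.create(
            model=choice.model,
            max_tokens=choice.max_tokens,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text
    raise ValueError(f"Unknown provider: {choice.provider}")


# Example: chat goes to GPT-4o, narrative composition to Claude.
print(complete("chat", "Which region drove the sales decline last quarter?"))
```

In this kind of design, swapping a model for a task is a one-line configuration change rather than a code rewrite, which is the practical payoff of an LLM-agnostic architecture.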
Responsible AI and the Use of Foundation Models
AnswerRocket supports proven foundation models to achieve safe and responsible AI. These models include built-in safeguards active by default to prevent biases and promote fairness. Customers can tailor these safeguards to align with their specific ethical standards, supporting the development of AI assistants that operate transparently and ethically on the AnswerRocket platform.
AnswerRocket’s responsible AI framework also features rigorous automated testing and evaluation. Each Skill or AI Assistant is validated for consistency, robustness, and alignment with ground truth data. By embedding these practices into the Software Development Life Cycle (SDLC), AnswerRocket minimizes the need for human oversight while maintaining high standards of reliability and accountability.
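As a rough illustration of validation against ground truth data, the following generic test harness sketches the idea; it is not AnswerRocket’s evaluation framework, and the example cases, tolerance, and `run_assistant` placeholder are assumptions for illustration only.

```python
# Illustrative sketch only -- a generic ground-truth check, not
# AnswerRocket's actual evaluation framework.

# Hypothetical cases: (question, expected numeric answer).
GROUND_TRUTH_CASES = [
    ("What was total revenue in Q2?", 41_200_000.0),
    ("How many units shipped in the West region?", 18_450.0),
]


def run_assistant(question: str) -> float:
    """Placeholder for invoking an AI assistant and parsing a numeric answer."""
    raise NotImplementedError("Wire this to the assistant under test.")


def evaluate(tolerance: float = 0.01) -> float:
    """Return the fraction of cases whose answers fall within the tolerance."""
    passed = 0
    for question, expected in GROUND_TRUTH_CASES:
        answer = run_assistant(question)
        if abs(answer - expected) <= tolerance * abs(expected):
            passed += 1
    return passed / len(GROUND_TRUTH_CASES)
```

Embedding a check like this in the SDLC means a model or prompt change can be gated on an accuracy threshold before it reaches users.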
About AnswerRocket
Founded in 2013, AnswerRocket is a generative AI analytics platform for data exploration, analysis, and insights discovery. It allows enterprises to monitor key metrics, identify performance drivers, and detect critical issues within seconds. Users can chat with Max—an AI assistant for data analysis—to get narrative answers, insights, and visualizations on their proprietary data. Additionally, AnswerRocket empowers data science teams to operationalize their models throughout the enterprise. Companies like Anheuser-Busch InBev, Cereal Partners Worldwide, Suntory Global Spirits, and National Beverage Corporation depend on AnswerRocket to increase their speed to insights. For more information, visit www.answerrocket.com.