Gcore Enhances Everywhere Inference With Flexible Deployment Options, Including Cloud, On-Premise, and Hybrid

Everywhere Inference leverages Gcore’s extensive global network of over 180 points of presence, enabling real-time processing, instant deployment, and seamless performance across the globe (Graphic: Business Wire)

LUXEMBOURG--(BUSINESS WIRE)--Gcore, the global edge AI, cloud, network, and security solutions provider, today announced a major update to Everywhere Inference, formerly known as Inference at the Edge. This update offers greater flexibility in AI inference deployments, delivering ultra-low-latency experiences for AI applications. Everywhere Inference now supports multiple deployment options, including on-premise, Gcore's cloud, public clouds, or a hybrid mix of these environments.

Gcore developed this update to its inference solution to address changing customer needs. With AI inference workloads growing rapidly, Gcore aims to empower businesses with flexible deployment options tailored to their individual requirements. Everywhere Inference leverages Gcore’s extensive global network of over 180 points of presence, enabling real-time processing, instant deployment, and seamless performance across the globe. Businesses can now deploy AI inference workloads across diverse environments while ensuring ultra-low latency by processing workloads closer to end users. The solution also enhances cost management and simplifies regulatory compliance across regions, offering a comprehensive and adaptable approach to modern AI challenges.

Seva Vayner, Product Director of Edge Cloud and Edge AI at Gcore, commented: “The update to Everywhere Inference marks a significant milestone in our commitment to enhancing the AI inference experience and addressing evolving customer needs. The flexibility and scalability of Everywhere Inference make it an ideal solution for businesses of all sizes, from startups to large enterprises.”

The new update enhances deployment flexibility by introducing smart routing, which automatically directs workloads to the nearest available compute resource. Additionally, Everywhere Inference now offers multi-tenancy for AI workloads, leveraging Gcore’s unique multi-tenancy capabilities to run multiple inference tasks simultaneously on existing infrastructure. This approach optimizes resource utilization for greater efficiency.

These new features address common challenges faced by businesses deploying AI inference. Balancing multiple cloud providers and on-premises systems for operations and compliance can be complex. The introduction of smart routing enables users to direct workloads to their preferred region, helping them stay compliant with local data regulations and industry standards. Data security is another key concern, and with Gcore’s new flexible deployment options, businesses can securely isolate sensitive information on-premise, enhancing data protection.

Learn more at https://gcore.com/everywhere-inference.

About Gcore

Gcore is a global edge AI, cloud, network, and security solutions provider. Headquartered in Luxembourg, with a team of 600 operating from ten offices worldwide, Gcore provides solutions to global leaders in numerous industries. Gcore manages its global IT infrastructure across six continents, delivering one of the best network performances in Europe, Africa, and LATAM, with an average response time of 30 ms worldwide. Gcore’s network consists of 180 points of presence worldwide in reliable Tier IV and Tier III data centers, with a total network capacity exceeding 200 Tbps. Learn more at gcore.com and follow them on LinkedIn, Twitter, and Facebook.

Contacts

Gcore press contact
pr@gcore.com

PR agency contact
gcore@aspectusgroup.com

Release Summary

Gcore enhances Everywhere Inference, leveraging its global network, with flexible deployment options, including public clouds, on-premise, and hybrid.