LAS VEGAS--(BUSINESS WIRE)--Protect AI, the leading artificial intelligence (AI) and machine learning (ML) security company, today announced the launch of huntr, a groundbreaking AI/ML bug bounty platform focused exclusively on protecting AI/ML open-source software (OSS), foundational models, and ML systems. The company is a silver sponsor at Black Hat USA, Booth 2610.
The launch of the huntr AI/ML bug bounty platform follows Protect AI's acquisition of huntr.dev. Originally founded in 2020 by 418Sec founder Adam Nygate, huntr.dev quickly rose to become the world's fifth-largest CVE Numbering Authority (CNA) for Common Vulnerabilities and Exposures (CVEs) in 2022. With a vast network of more than ten thousand security researchers specializing in open-source software (OSS), huntr has been at the forefront of OSS security research and development. This success gives Protect AI the opportunity to focus the platform on a critical and emerging need for AI/ML threat research.
In today's AI-powered world, nearly 80% of code in Big Data, AI, BI, and ML codebases relies on open-source components, according to Synopsys, with more than 40% of these codebases harboring high-risk vulnerabilities. In one example, Protect AI researchers found a critical Local File Inclusion/Remote File Inclusion vulnerability in MLflow, a widely used system for managing machine learning life cycles, which could enable attackers to gain full access to a cloud account, steal proprietary data, and expose critical IP in the form of ML models.
Furthermore, the security research field suffers from a critical shortage of the AI/ML skills and expertise needed to find these AI security threats. This has created an urgent need for comprehensive AI/ML security research focused on uncovering potential security flaws and safeguarding sensitive data and AI application integrity for enterprises.
“The vast artificial intelligence and machine learning supply chain is a leading area of risk for enterprises deploying AI capabilities. Yet, the intersection of security and AI remains underinvested. With huntr, we will foster an active community of security researchers to meet the demand for discovering vulnerabilities within these models and systems,” said Ian Swanson, CEO of Protect AI.
“With this acquisition by Protect AI, huntr's mission now exclusively centers on discovering and addressing OSS AI/ML vulnerabilities, promoting trust, data security, and responsible AI/ML deployment. We're thrilled to expand our reward system for researchers and hackers within our community and beyond,” said Adam Nygate, founder and CEO of huntr.dev.
The New huntr Platform
huntr offers security researchers a comprehensive AI/ML bug hunting environment with intuitive navigation, targeted bug bounties with streamlined reporting, monthly contests, collaboration tools, vulnerability reviews, and the highest-paying AI/ML bounties available to the hacking community. The first contest focuses on Hugging Face Transformers and offers an impressive $50,000 reward.
huntr also bridges the critical knowledge gap in AI/ML security research and operates as an integral part of Protect AI’s Machine Learning Security Operations (MLSecOps) community. By actively participating in huntr's AI/ML open-source-focused bug bounty platform, security researchers can build new expertise in AI/ML security, create new professional opportunities, and receive well-deserved financial rewards.
"AI and ML rely on open source software, but security research in these systems is often overlooked. huntr's launch for AI/ML security research is an exciting moment to unite and empower hackers in safeguarding the future of AI and ML from emerging threats," said Phil Wylie, a renowned Pentester.
Chloé Messdaghi, Head of Threat Research at Protect AI, emphasized the platform's ethos, stating, “We believe in transparency and fair compensation. Our mission is to cut through the noise and provide huntrs with a platform that recognizes their contributions, rewards their expertise, and fosters a community of collaboration and knowledge sharing.”
Protect AI is a Skynet sponsor at DEF CON’s AI Village, where Ms. Messdaghi will chair a panel titled “Unveiling the Secrets: Breaking into AI/ML Security Bug Bounty Hunting” on Friday, August 11, at 4:00 p.m. The company is also a silver sponsor at Black Hat USA. These events will give Protect AI’s threat research team the opportunity to connect in person with the security research community. To find out more and become an AI/ML huntr, join the community at huntr.mlsecops.com. For information on participating in Protect AI’s sessions at Black Hat and DEF CON, visit us on LinkedIn and Twitter.
About Protect AI
Protect AI enables safer AI applications by giving organizations the ability to see, know, and manage their ML environments. The company's AI Radar platform provides visibility into the ML attack surface by creating an ML Bill of Materials (MLBOM), remediates security vulnerabilities, and detects threats to prevent data and secrets leakage. Founded by AI leaders from Amazon and Oracle, Protect AI is funded by Acrew Capital, boldstart ventures, Evolution Equity Partners, Knollwood Capital, Pelion Ventures, and Salesforce Ventures. The company is headquartered in Seattle, with offices in Dallas and Raleigh. For more information, visit us on the web and follow us on LinkedIn and X/Twitter.