Protect AI, an AI and machine learning (ML) security company, has closed its Series A funding round, raising $35 million. The round was led by Evolution Equity Partners, with participation from Salesforce Ventures and existing investors Acrew Capital, boldstart ventures, Knollwood Capital, and Pelion Ventures.
Protect AI was founded by Ian Swanson, who previously led Amazon Web Services’ worldwide AI and ML business. The company aims to fortify ML systems and AI applications against security vulnerabilities, data breaches, and emerging threats. With the newly secured $35 million, Protect AI plans to scale its sales and marketing efforts, enhance go-to-market activities, invest in research and development, and strengthen customer success initiatives. The round brings the company’s total funding to $48.5 million, underscoring its growing prominence in the AI and ML security sector. As part of the deal, Richard Seewald, founder and managing partner at Evolution Equity Partners, will join Protect AI’s board of directors.
Proactive AI/ML Threat Visibility
The escalating complexity of the AI/ML security landscape makes it difficult for organizations to maintain comprehensive inventories of the assets that make up their ML systems. The rapid expansion of supply chain assets, such as foundation models and third-party training datasets, compounds the problem. These gaps expose companies to risks related to regulatory compliance, data manipulation, and model poisoning.
Protect AI has developed a security platform called AI Radar to address these challenges. The platform gives AI developers, ML engineers, and AppSec professionals real-time visibility, detection, and management capabilities for their ML environments.
AI Radar creates an immutable record called a “machine learning bill of materials” (MLBOM) that tracks every component used in an ML model or AI application. The platform then runs continuous security checks to identify and remediate vulnerabilities quickly.
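For illustration only (Protect AI has not published AI Radar’s schema), a minimal MLBOM might record each dataset, base model, and package behind a model build, plus a fingerprint so later changes are detectable. All names and fields in this sketch are hypothetical:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class MLBOMComponent:
    """One tracked asset in an ML pipeline (dataset, base model, library, etc.)."""
    name: str
    kind: str          # e.g. "dataset", "foundation-model", "python-package"
    version: str
    source: str        # where the asset came from (registry URL, bucket path, ...)

@dataclass
class MLBOM:
    """A hypothetical machine learning bill of materials for one model build."""
    model_name: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    components: list[MLBOMComponent] = field(default_factory=list)

    def fingerprint(self) -> str:
        """Hash the serialized MLBOM so any later change is detectable."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Example: record the ingredients of a fine-tuned sentiment model (illustrative values).
bom = MLBOM(model_name="sentiment-classifier-v3")
bom.components.append(
    MLBOMComponent("base-llm-7b", "foundation-model", "2.0", "https://example.com/models/base-llm-7b")
)
bom.components.append(
    MLBOMComponent("reviews-2023", "dataset", "1.4", "s3://example-bucket/reviews-2023.parquet")
)
print(bom.fingerprint())  # store alongside the model artifact for later audits
```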
Traditional security tools often lack the visibility needed to monitor dynamic ML systems and data workflows. AI Radar addresses this gap with continuously integrated security checks that safeguard ML environments against active data leakage, model vulnerabilities, and other AI security risks. The platform uses integrated model-scanning tools for LLMs and other ML inference workloads to detect security policy violations, model vulnerabilities, and malicious code injection attacks, and it can plug into third-party AppSec and CI/CD orchestration tools as well as model robustness frameworks.
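Protect AI’s scanners are proprietary, but the general idea behind scanning a serialized model for malicious code can be sketched. Pickle-based model files, for example, can contain opcodes that import and call arbitrary functions the moment the file is loaded; a scanner can flag those opcodes without ever deserializing the artifact. The snippet below is a rough illustration of that technique, not AI Radar’s implementation, and the file path is hypothetical:

```python
import pickletools

# Pickle opcodes that can trigger imports or arbitrary calls when the file is loaded.
# Flagging them is a common heuristic in open-source model scanners.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_pickle(path: str) -> list[str]:
    """Return findings without ever unpickling (and thus executing) the file."""
    with open(path, "rb") as f:
        data = f.read()
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPCODES:
            findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings

results = scan_pickle("model.pkl")  # hypothetical path to a serialized model
if results:
    print("Potentially unsafe constructs found:")
    for finding in results:
        print(" -", finding)
```

Real-world scanners typically go further, comparing the specific imported globals against allow and deny lists rather than flagging every call-capable opcode.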
AI Radar’s visualization layer provides real-time insight into an ML system’s attack surface while automatically generating and updating a secure MLBOM that captures any policy violations and changes made. This approach delivers comprehensive visibility and auditability across the AI/ML supply chain, backed by immutable, time-stamped records.
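One common way to make such records tamper-evident is a hash chain, in which each entry stores the hash of the previous one, so rewriting history invalidates every later link. The sketch below illustrates that general mechanism; it is an assumption for explanatory purposes, not a description of how AI Radar stores its records:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_revision(chain: list[dict], mlbom_fingerprint: str, event: str) -> dict:
    """Append a time-stamped, hash-linked entry; editing history breaks later hashes."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,                      # e.g. "policy-violation", "dataset-updated"
        "mlbom_fingerprint": mlbom_fingerprint,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every link; any edited or reordered entry is detected."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["entry_hash"] != expected:
            return False
        prev = entry["entry_hash"]
    return True

audit_log: list[dict] = []
append_revision(audit_log, "ab12...", "dataset-updated")   # fingerprints are illustrative
append_revision(audit_log, "cd34...", "policy-violation")
print(verify_chain(audit_log))  # True until any entry is altered
```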
Future of Protect AI
Protect AI plans to enhance AI Radar’s capabilities and expand its research into identifying and reporting critical vulnerabilities in the ML supply chain. The company also plans to invest further in its open-source projects, NB Defense and Rebuff AI.
As AI and ML technologies continue to gain traction across industries, securing AI systems becomes paramount. Protect AI is positioned to take the lead, offering defenses against these threats, securing the entire ML development process, and establishing itself as a trailblazing AI and ML security firm.