Meta has recently launched the Purple Llama initiative, marking a significant step forward for generative AI. The initiative underscores Meta's commitment to enhancing the safety and reliability of AI systems through a comprehensive security framework.
The Purple Llama initiative combines offensive (red team) and defensive (blue team) strategies. Inspired by the cybersecurity concept of purple teaming, it aims to evaluate, identify, reduce, or eliminate potential risks in AI systems.
IBM recently partnered with Meta to launch the AI Alliance, which includes leading organizations, startups, academic and research institutions, and government bodies. Their goal is to foster open innovation and support open science in AI by enhancing developer capabilities, improving safety, and creating an open environment.
With so many AI-based tools launching, this is a timely step from Meta, aimed at fostering collaboration on AI safety and building trust in these emerging technologies.
Meta has released several tools and frameworks for the Purple Llama initiative. These include CyberSec Eval, a comprehensive set of cybersecurity safety evaluation benchmarks for large language models (LLMs), and Llama Guard, an input/output safety classifier.
Meta also published its Responsible Use Guide, which offers best practices for implementing the framework.
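To illustrate the role an input/output safety classifier like Llama Guard plays, here is a minimal sketch of the guardrail pattern in Python. The `classify` function below is a stand-in keyword filter, not Meta's actual LLM-based classifier; all names and rules are hypothetical and for illustration only.

```python
# Hypothetical sketch of an input/output guardrail pipeline.
# In the real Purple Llama tooling, `classify` would be an LLM-based
# safety classifier (Llama Guard); here it is a placeholder keyword filter.

UNSAFE_KEYWORDS = {"steal credentials", "build a weapon"}  # illustrative only

def classify(text: str) -> str:
    """Return 'unsafe' if the text matches a blocked pattern, else 'safe'."""
    lowered = text.lower()
    return "unsafe" if any(k in lowered for k in UNSAFE_KEYWORDS) else "safe"

def guarded_generate(prompt: str, model) -> str:
    """Run the model only if both the prompt and its output pass the classifier."""
    if classify(prompt) == "unsafe":
        return "Request declined by input filter."
    response = model(prompt)
    if classify(response) == "unsafe":
        return "Response withheld by output filter."
    return response

# Usage with a stand-in "model":
echo_model = lambda p: f"Here is an answer to: {p}"
print(guarded_generate("What is purple teaming?", echo_model))
```

The key design point is that the classifier runs twice, once on the user's prompt and once on the model's response, so unsafe content is caught in either direction.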
Meta has also brought together many competitors in the tech industry to collaborate on the project, including AMD, AWS, Google Cloud, Hugging Face, IBM, Intel, Lightning AI, Microsoft, MLCommons, NVIDIA, and Scale AI. This collaboration will improve the tools and allow them to be shared with the open-source community, signifying a new era in safe, responsible generative AI development.
Meta plans to host a workshop at NeurIPS 2023 offering a deep dive into these tools, helping leaders and the open-source community get started. This is a much-needed step toward advancing security in AI.