Tech Titans Collaborate to Oversee Safe ‘Frontier AI’ Development
Four of the most prominent players in the artificial intelligence realm have joined forces to establish the Frontier Model Forum, an industry body dedicated to overseeing the secure development of the most cutting-edge AI models. OpenAI, Anthropic, Microsoft, and Google (which owns DeepMind) are the driving forces behind this collaborative effort.
The primary objective of the Frontier Model Forum is to concentrate on the “safe and responsible” advancement of frontier AI models — AI systems that exceed the capabilities of the most advanced models currently available. By forming this consortium, the companies aim to ensure that the development of these advanced AI models adheres to strict safety and ethical standards.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Brad Smith, the president of Microsoft. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
Frontier Model Forum
Anthropic, Google, Microsoft, and OpenAI have come together to introduce the Frontier Model Forum, a new industry body that aims to ensure the safe and responsible development of frontier AI models. The member companies intend to elevate the entire AI ecosystem by pooling their technical and operational expertise. This will be achieved by promoting technical evaluations and benchmarks and by establishing a public library of solutions that supports industry best practices and standardization efforts.
The consortium extends an open invitation to organizations actively developing frontier models. These models are characterized as large-scale machine-learning models that surpass the capabilities of the most advanced existing models and can perform a wide variety of tasks. By opening its membership broadly, the forum aims to foster collaboration and a collective commitment to the responsible and beneficial evolution of AI technology.
Core Objectives for the Forum
- Advance AI safety research to promote the responsible development of frontier models and reduce their risks.
- Identify best practices in developing and deploying frontier models, promoting public understanding of AI’s capabilities and limitations.
- Collaborate with policymakers, academics, civil society, and companies to address trust and safety risks.
- Support AI applications addressing societal challenges like climate change, cancer detection, and cyber threats.
“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. We’re all going to need to work together to make sure AI benefits everyone,” said Kent Walker, President, Global Affairs, Google & Alphabet.
The Framework of the Frontier Model Forum
- The Frontier Model Forum will commence its operations by establishing an Advisory Board comprising individuals from diverse backgrounds and perspectives. This board will be crucial in guiding the forum’s strategy and priorities.
- The founding companies will also set up essential institutional structures, including a charter, governance framework, and funding arrangements. A working group and executive board will lead the forum’s efforts.
- In the upcoming weeks, the Frontier Model Forum plans to seek input from civil society and governments to shape the forum’s design and explore meaningful avenues for collaboration.
- Moreover, the forum aims to complement and contribute to existing government and multilateral initiatives, such as the G7 Hiroshima process, the OECD’s work on AI risks, standards, and social impact, and the US-EU Trade and Technology Council.
- In pursuing its goals, the Frontier Model Forum acknowledges and seeks to build upon the valuable contributions of existing industry, civil society, and research initiatives across various workstreams. Collaboration with established projects like the Partnership on AI and MLCommons will be explored to further support and strengthen the broader AI community.
“Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies, especially those working on the most powerful models, align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety,” said Anna Makanju, Vice President of Global Affairs, OpenAI.
President Joe Biden lauded Amazon, Google, Meta, Microsoft, and others for voluntarily committing to AI safeguards negotiated by the White House. The companies aim to ensure the safety of their AI products before release, including third-party oversight. Biden views these commitments as vital to managing AI’s potential and risks.
“We must be clear-eyed and vigilant about the threats emerging technologies can pose,” Biden said, adding that the companies have a “fundamental obligation” to ensure their products are safe.
However, President Biden hinted at the possibility of regulatory oversight in the future. “Realizing the promise of AI by managing the risk is going to require some new laws, regulations, and oversight,” Biden said. “In the weeks ahead, I’m going to continue to take executive action to help America lead the way toward responsible innovation. And we’re going to work with both parties to develop appropriate legislation and regulation.”