OpenAI, Google, others launch Frontier Model Forum, a new AI industry group to promote safe and responsible AI
Last Friday, seven leading artificial intelligence (AI) companies, including OpenAI, Google parent Alphabet, and Meta, pledged to take measures to enhance AI safety. The move was part of an effort to ensure greater accountability and reliability in the use of AI-generated materials. The companies, which also include Amazon.com, Anthropic, Inflection, and Microsoft, pledged to rigorously test their AI systems before releasing them to the public.
One week later, four of the seven companies announced the launch of the Frontier Model Forum, an industry body focused on ensuring the safe and responsible development of frontier AI models.
“Today, Anthropic, Google, Microsoft and OpenAI are announcing the formation of the Frontier Model Forum, a new industry body focused on ensuring safe and responsible development of frontier AI models. The Frontier Model Forum will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, such as through advancing technical evaluations and benchmarks, and developing a public library of solutions to support industry best practices and standards.”
The consortium stressed that in the absence of new regulations from policymakers, the industry will have to take responsibility for self-regulation. Anthropic, Google, Microsoft, and OpenAI said the new Frontier Model Forum has four primary objectives, which are outlined in Google’s blog post:
1. Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
2. Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
3. Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.
4. Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
Over the course of the next year, the Forum aims to actively contribute to the secure and responsible advancement of frontier AI models through its focus on three key areas:
- Identifying best practices: By fostering knowledge exchange and collaboration among various stakeholders, including industry, governments, civil society, and academia, the Forum will emphasize safety standards and practices to effectively address a wide array of potential risks associated with AI.
- Advancing AI safety research: The Forum is committed to supporting the AI safety ecosystem by pinpointing critical research inquiries in AI safety. This effort will encompass areas like adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors, and anomaly detection. An initial emphasis will be placed on creating and sharing a public library of technical evaluations and benchmarks for frontier AI models.
- Facilitating information sharing among companies and governments: With a primary focus on trust and security, the Forum will establish secure channels for companies, governments, and other relevant stakeholders to share valuable insights regarding AI safety and associated risks. The approach will follow the best practices of responsible disclosure adopted in fields like cybersecurity.
By channeling efforts into these areas, the Forum seeks to proactively address challenges and collectively promote a safer and more responsible landscape for the development of AI technology.
Commenting on the new group, Kent Walker, President, Global Affairs, Google & Alphabet said: “We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. We’re all going to need to work together to make sure AI benefits everyone.”
Brad Smith, Vice Chair & President, Microsoft said: “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
Anna Makanju, Vice President of Global Affairs, OpenAI said: “Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies–especially those working on the most powerful models–align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”
Dario Amodei, CEO, Anthropic said: “Anthropic believes that AI has the potential to fundamentally change how the world works. We are excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”