IBM, Meta launch AI Alliance with over 50 founding members to advance ‘open, safe, responsible’ AI
In a major move to promote the responsible development of artificial intelligence (AI), tech giants IBM and Meta today announced the launch of a new AI Alliance, formed with other tech leaders to advance ‘open, safe, responsible’ AI.
The alliance, which comprises more than 50 founding members spanning various industries, academia, and research institutions, aims to foster collaboration and collective efforts to ensure that AI benefits society as a whole, the companies said.
Notable members of the alliance include AMD, Anyscale, CERN, Cerebras, Cleveland Clinic, Cornell University, Dartmouth, Dell Technologies, EPFL, ETH, Hugging Face, Imperial College London, Intel, INSAIT, Linux Foundation, MLCommons, MOC Alliance operated by Boston University and Harvard University, NASA, NSF, Oracle, Partnership on AI, Red Hat, Roadzen, ServiceNow, Sony Group, Stability AI, University of California Berkeley, University of Illinois, University of Notre Dame, The University of Tokyo, and Yale University.
The alliance, composed of leading organizations across industries, startups, academia, research, and government, says it “is focused on fostering an open community and enabling developers and researchers to accelerate responsible innovation in AI while ensuring scientific rigor, trust, safety, security, diversity, and economic competitiveness. By bringing together leading developers, scientists, academic institutions, companies, and other innovators, we will pool resources and knowledge to address safety concerns while providing a platform for sharing and developing solutions that fit the needs of researchers, developers, and adopters around the world.”
To that end, the AI Alliance said it plans to start or enhance projects organized around a set of clear objectives:
Firstly, the Alliance aims to develop and implement benchmarks and evaluation standards, along with tools and resources that facilitate the responsible development and utilization of AI systems on a global scale. This includes the creation of a curated catalog encompassing safety, security, and trust tools. The intention is not only to establish these resources but also to advocate and support their use within the developer community for both model and application development.
Secondly, the Alliance seeks to responsibly advance the ecosystem of open foundation models, incorporating diverse modalities. This includes the development of highly capable multilingual, multi-modal, and science models. The goal is to leverage these models to address significant societal challenges, such as those related to climate and education.
Additionally, the Alliance is committed to fostering a vibrant AI hardware accelerator ecosystem by boosting contributions to, and accelerating the adoption of, essential enabling software technology.
Moreover, the Alliance has a global focus on AI skills building and exploratory research. By engaging the academic community, it will support researchers and students as they learn about and contribute to essential AI model and tool research projects.
To ensure informed public discourse and policymaking, the Alliance aims to develop educational content and resources. These resources will shed light on the benefits, risks, solutions, and the need for precise regulation in the realm of AI.
Finally, the Alliance is set to launch initiatives that promote the open development of AI in safe and beneficial ways. Accompanying these initiatives are events designed to explore AI use cases and showcase how Alliance members are employing open technology in AI responsibly and for the greater good.