MLCommons and AI Verify unite to establish global AI safety standards
In a significant stride toward enhancing global AI safety, MLCommons and AI Verify have inked a memorandum of intent, pledging to develop universal safety testing benchmarks for generative AI models. This collaboration aims to craft a safety ecosystem that encompasses AI testing companies, national safety institutes, auditors, and researchers worldwide.
Details of the Agreement
Under the agreement, the focus will be on establishing a set of benchmarks that will serve as a globally recognized baseline for safety testing. This initiative is expected to benefit AI developers, integrators, purchasers, and policymakers by providing a standardized approach to evaluating the safety of generative AI technologies.
MLCommons President Weighs In on the Initiative
Peter Mattson, President of MLCommons and co-chair of the AI Safety working group, expressed enthusiasm about the global AI community’s commitment to this cause. “The MLCommons AI Verify collaboration is a step-forward towards creating a global and inclusive standard for AI safety testing, with benchmarks designed to address safety risks across diverse contexts, languages, cultures, and value systems,” Mattson stated.
Progress and Future Plans
The AI Safety working group has recently unveiled a v0.5 AI Safety benchmark proof of concept (POC), setting the stage for the development of interoperable AI testing tools by AI Verify. These tools will play a crucial role in the upcoming v1.0 release, expected this fall, which will include a toolkit for interactive testing to support benchmarking and red-teaming activities.
Call for Global Collaboration
Dr. Ong Chen Hui, Chair of the Governing Committee at AI Verify Foundation, highlighted the foundation’s eagerness to foster a universally accepted standard for AI safety benchmarks. “AI Verify Foundation is excited to partner with MLCommons to help our partners build trust in their models and applications across the diversity of cultural contexts and languages in which they were developed,” said Dr. Ong. He also extended an invitation to more partners to join this effort to promote responsible AI usage globally.
Join the Effort
The AI Safety working group is encouraging global participation to help shape the v1.0 AI Safety benchmark suite and beyond. Interested parties are invited to join the MLCommons AI Safety working group to contribute to this pivotal initiative.
About MLCommons and AI Verify Foundation
MLCommons is a leading open engineering consortium that launched with the MLPerf® benchmarks in 2018. With over 125 members, including global technology providers, academics, and researchers, MLCommons aims to enhance AI through collaborative engineering, benchmarks, and public datasets. AI Verify Foundation, a subsidiary of the Infocommunications Media Development Authority of Singapore (IMDA), focuses on developing AI testing tools and promoting standards for responsible AI through the global open-source community.
Discover more from Business-News-Today.com
Subscribe to get the latest posts sent to your email.