Gaithersburg, Maryland – Today, the U.S. AI Safety Institute at the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) announced that it has signed memoranda of understanding with leading AI companies Anthropic and OpenAI, formally initiating collaboration on AI safety research, testing, and evaluation.
Under the agreements, the U.S. AI Safety Institute will receive access to major new models from each company both before and after their public release, enabling it to participate in depth in evaluating the capabilities and safety risks of these models and in researching methods to mitigate those risks. The agreements establish a framework for collaboration and mark a significant step toward responsibly managing the future of AI in the United States.
"Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to collaborating with Anthropic and OpenAI to advance the science of AI safety," said Elizabeth Kelly, Director of the U.S. AI Safety Institute. The agreements are not only the start of that collaboration but also significant milestones toward the responsible development and use of AI.
Additionally, the U.S. AI Safety Institute plans to work closely with its U.K. partners to provide feedback to Anthropic and OpenAI on potential safety improvements to their models. Building on NIST's century of achievements in measurement science, technology, standards, and related tools, the collaboration aims to further enhance the safety and reliability of AI systems.
These collaborations and evaluation efforts will help implement the Biden-Harris administration's Executive Order on AI and reinforce the voluntary commitments that major AI developers have made to the government, promoting the safe, secure, and trustworthy development and use of AI.