Feds to get early access to OpenAI, Anthropic AI to test for doomsday scenarios

OpenAI and Anthropic have each signed unprecedented deals granting the US government early access to conduct safety testing on the companies’ flashiest new AI models before they’re released to the public.

According to a press release from the National Institute of Standards and Technology (NIST), the deal creates a “formal collaboration on AI safety research, testing, and evaluation with both Anthropic and OpenAI” and the US Artificial Intelligence Safety Institute.

Through the deal, the US AI Safety Institute will “receive access to major new models from each company prior to and following their public release.” This means public safety won’t depend exclusively on how the companies themselves “evaluate capabilities and safety risks, as well as methods to mitigate those risks,” NIST said, but will also rest on collaborative research with the US government.
