Aug 29 (Reuters) - AI startups OpenAI and Anthropic have
signed deals with the United States government for research,
testing and evaluation of their artificial intelligence models,
the U.S. Artificial Intelligence Safety Institute said on
Thursday.
The first-of-their-kind agreements come at a time when the
companies are facing regulatory scrutiny over safe and ethical
use of AI technologies.
California legislators are set to vote on a bill as soon as
this week to broadly regulate how AI is developed and deployed
in the state.
Under the deals, the U.S. AI Safety Institute will have
access to major new models from both OpenAI and Anthropic prior
to and following their public release.
The agreements will also enable collaborative research to
evaluate capabilities of the AI models and risks associated with
them.
"We believe the institute has a critical role to play in
defining U.S. leadership in responsibly developing artificial
intelligence and hope that our work together offers a framework
that the rest of the world can build on," said Jason Kwon, chief
strategy officer at ChatGPT maker OpenAI.
Anthropic, which is backed by Amazon (AMZN) and Alphabet,
did not immediately respond to a Reuters request for comment.
"These agreements are just the start, but they are an
important milestone as we work to help responsibly steward the
future of AI," said Elizabeth Kelly, director of the U.S. AI
Safety Institute.
The institute, part of the U.S. Commerce Department's
National Institute of Standards and Technology (NIST), will also
collaborate with the U.K. AI Safety Institute and provide
feedback to the companies on potential safety improvements.
The U.S. AI Safety Institute was launched last year as part
of an executive order by President Joe Biden's administration to
evaluate known and emerging risks of artificial intelligence
models.