
Artificial intelligence startups OpenAI and Anthropic have signed agreements with the U.S. government to conduct research, testing and evaluation of their AI models, the U.S. AI Safety Institute said Thursday.
The first-of-their-kind agreements come as the companies face regulatory scrutiny over the safe and ethical use of AI technology.
California lawmakers will vote as soon as this week on a bill to broadly regulate how artificial intelligence is developed and deployed in the state.
Under the agreements, the AI Safety Institute will have access to major new models from OpenAI and Anthropic both before and after they are publicly released.
The agreements will also promote collaborative research to assess the capabilities of AI models and their associated risks.
“We believe the institute plays a critical role in establishing America’s leadership in the responsible development of AI, and hope that our joint efforts can provide a framework that the rest of the world can learn from,” said Jason Kwon, chief strategy officer at OpenAI, the maker of ChatGPT.
Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to Reuters’ request for comment.
“These agreements are just a start, but they are important milestones in our commitment to responsible stewardship of the future of AI,” said Elizabeth Kelly, director of the AI Safety Institute.
The institute, which is part of the U.S. Commerce Department’s National Institute of Standards and Technology (NIST), will also collaborate with the U.K.’s AI Safety Institute and provide feedback to the companies on potential safety improvements to their models.
The AI Safety Institute was established last year by an executive order from President Joe Biden’s administration to assess known and emerging risks of AI models.