
OpenAI and Anthropic sign agreement with US AI Safety Institute



OpenAI and Anthropic have signed agreements with the US government to provide their cutting-edge AI models for testing and safety research. A NIST announcement on Thursday revealed that the US AI Safety Institute will be given access to these technologies “before and after they are publicly released.”

By signing their own memoranda of understanding (non-legally binding agreements), the two AI giants allow AISI to assess the capabilities of their models and to identify and mitigate any safety risks.

The AISI was formally established by NIST in February 2024 to carry out the priority actions set out in the October 2023 Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” These actions include developing safety standards for AI systems. The institute is supported by the AI Safety Institute Consortium, whose members include Meta, OpenAI, NVIDIA, Google, Amazon, and Microsoft.

“Safety is critical to driving breakthrough technological innovation,” AISI Director Elizabeth Kelly said in the release. “With these agreements in place, we look forward to beginning our technical collaboration with Anthropic and OpenAI to advance the science of AI safety.”

“These agreements are just a start, but they are important milestones in our commitment to responsibly manage the future of AI.”

SEE: Generative AI defined: How it works, its benefits, and its risks

“Safe, trustworthy AI is critical to the positive impact of this technology,” Jack Clark, co-founder and head of policy at Anthropic, told TechRepublic via email. “Our work with the National AI Safety Institute leverages their extensive expertise to rigorously test our models before broad deployment.”

“This enhances our ability to identify and mitigate risks and advance the responsible development of AI. We are proud to contribute to this important work and set a new benchmark for safe and trustworthy AI.”

“We strongly support the mission of the National AI Safety Institute and look forward to working together to advance safety best practices and standards for AI models,” Jason Kwon, chief strategy officer at OpenAI, told TechRepublic via email.

“We believe the Institute has a critical role to play in establishing America’s leadership in the responsible development of AI and hope that our joint efforts can provide a framework that the rest of the world can learn from.”

AISI to collaborate with UK AI Safety Institute

AISI also plans to work with the UK AI Safety Institute to provide safety-related feedback to OpenAI and Anthropic. The two countries formally agreed to cooperate on developing safety tests for AI models.

The agreement fulfils commitments made at the first global AI Safety Summit last November, where governments around the world agreed to take a role in safety testing the next generation of artificial intelligence models.

Following Thursday’s announcement, Jack Clark, co-founder and head of policy at Anthropic, posted on X: “Third-party testing is a very important part of the AI ecosystem, and it’s amazing to see governments setting up safety agencies to facilitate this.

“This collaboration with AISI in the US will build on the work we did earlier this year when we worked with AISI in the UK to conduct pre-deployment testing of Sonnet 3.5.”

Claude 3.5 Sonnet is Anthropic’s latest AI model, released in June.

Since ChatGPT was released, AI companies and regulators have clashed over how strictly AI should be regulated, with regulators pushing for safeguards against risks such as misinformation and companies arguing that overly strict rules could stifle innovation. Voluntary frameworks such as these memoranda allow governments to oversee AI technologies without imposing strict regulatory requirements.

The US approach at the national level is more industry-friendly, focusing on voluntary guidelines and collaboration with technology companies through measures such as the AI Bill of Rights and the Executive Order on AI. In contrast, the EU has taken a more stringent regulatory path with its Artificial Intelligence Act, which establishes legal requirements for transparency and risk management.

In a somewhat different take on AI regulation than the rest of the country, the California State Assembly on Wednesday passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, also known as SB 1047 or California’s AI Act. The next day, the bill was approved by the state Senate, and it now only needs the approval of Governor Gavin Newsom to be enacted into law.

Silicon Valley giants OpenAI, Meta, and Google have all written to California lawmakers to express concerns about SB 1047, emphasizing the need for a more cautious approach that does not hinder the development of AI technology.

SEE: OpenAI, Microsoft, and Adobe support California’s AI watermarking bill

OpenAI CEO Sam Altman announced on Thursday, in a post on X, that his company had reached an agreement with the US AISI, saying it was “important to do this at the national level” in a subtle dig at California’s SB 1047. Violations of the state legislation would carry penalties, unlike the voluntary memoranda of understanding.

Meanwhile, the UK AI Safety Institute faces financial challenges

Since the transition of leadership from the Conservative Party to Labour in early July, the UK government has made a number of notable changes in its approach to artificial intelligence.

It has reportedly abandoned plans to open an office in San Francisco this summer, which, according to Reuters’ sources, was intended to cement ties between the UK and the Bay Area’s AI giants. Technology Secretary Peter Kyle also reportedly dismissed Nitarshan Rajkumar, senior policy adviser and co-founder of UK AISI.

SEE: UK government cancels supercomputer funding, invests £32 million in artificial intelligence projects

Reuters’ sources added that Kyle plans to cut direct government investment in the industry. Indeed, earlier this month, the government put on hold £1.3 billion of funding that had been earmarked for artificial intelligence and technological innovation.

In July, Chancellor Rachel Reeves said public spending was on track to run £22 billion over budget and immediately announced £5.5 billion of funding cuts, including cuts to the Investment Opportunities Fund, which supports projects in the digital and technology sectors.

In the days before the Chancellor’s speech, Labour appointed technology entrepreneur Matt Clifford to develop an “AI Opportunities Action Plan” that will identify how AI can best be used at a national level to improve efficiency and reduce costs. His recommendations are due to be published in September.

According to Reuters, Clifford met with representatives of 10 well-known venture capital firms last week to discuss the plan, including how the government can use artificial intelligence to improve public services, support university spin-offs, and make it easier for start-ups to recruit internationally.

But things were far from calm behind the scenes, with one attendee telling Reuters there was stress over having only one month before the review was due.


