
California’s Appropriations Committee on Thursday passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, also known as SB-1047, the latest move in a Silicon Valley regulatory saga.
The bill must still be approved by the state Assembly and Senate before it can become law.
What is SB-1047?
The bill, commonly known as the California AI Act, is being closely watched nationwide and could set a precedent for how states develop guidelines for generative AI. SB-1047 lays out several rules for AI developers:
- Create security protocols for covered AI models.
- Make sure such models can be shut down cleanly.
- Prevent the dissemination of models that are capable of causing “serious harm” as defined in the Act.
- Hire auditors to ensure compliance with the Act.
In short, the Act provides a framework intended to prevent generative AI models from enabling mass destruction through nuclear or biological weapons, or from causing losses exceeding $500 million through cybersecurity incidents.
The bill defines a “covered model” as one that uses computing power greater than 10^26 integer or floating point operations during training and costs more than $100 million.
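Because the “covered model” definition comes down to two numeric thresholds, it can be expressed as a simple check. The following is an illustrative sketch only; the constant and function names are invented for this example and do not come from the bill’s text:

```python
# Illustrative sketch of SB-1047's "covered model" definition.
# Names here are hypothetical; the thresholds come from the bill as
# described above: >10^26 training operations AND >$100 million cost.

COMPUTE_THRESHOLD_OPS = 1e26      # integer or floating-point operations in training
COST_THRESHOLD_USD = 100_000_000  # training cost in dollars

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Return True if a model exceeds both thresholds in the definition."""
    return (training_ops > COMPUTE_THRESHOLD_OPS
            and training_cost_usd > COST_THRESHOLD_USD)

# A frontier-scale training run meets both thresholds...
print(is_covered_model(3e26, 250_000_000))  # True
# ...while a smaller model falls outside the definition.
print(is_covered_model(5e24, 8_000_000))    # False
```

Note that both conditions must hold: a model trained with more than 10^26 operations but at a cost under $100 million would not be covered.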
The latest version of the bill incorporates feedback from Anthropic
The version of the bill passed Thursday includes some changes suggested by AI maker Anthropic and accepted by the bill’s lead author, state Sen. Scott Wiener, D-San Francisco.
Anthropic successfully asked the state to remove language from the bill that would have allowed the state attorney general to take legal action against companies that violate it. The latest version also drops the requirement that companies disclose safety test results under penalty of perjury; instead, developers will submit statements, which don’t carry the same legal force.
Other changes include:
- The standard AI companies must meet changes from providing “reasonable assurance” of safety to exercising “reasonable care.”
- An exemption: an AI researcher who spends less than $10 million fine-tuning an open-source covered model is not considered the developer of that model.
SEE: Anthropic and OpenAI have dug into how generative AI can create content, including biased content.
The bill no longer requires the creation of a Frontier Model Division, a new agency charged with regulating the AI industry. Instead, a Board of Frontier Models focused on forward-looking safety guidance and audits would be housed within existing government agencies.
While Anthropic contributed to the bill, other large organizations such as Google and Meta have opposed it. Andreessen Horowitz, the venture capital firm known as a16z that backs many artificial intelligence startups, strongly opposes SB-1047.
Why is SB-1047 controversial?
Some industry and congressional representatives said the bill would limit innovation and make it particularly difficult to work with open-source AI models. One of the bill’s critics is Hugging Face co-founder and CEO Clément Delangue, as reported by Fast Company.
An April survey by the Artificial Intelligence Policy Institute, a think tank that supports AI regulation, found that a majority of Californians supported the bill at the time, with 70% agreeing that “powerful AI models could be used for dangerous purposes in the future.”
Researchers Geoffrey Hinton and Yoshua Bengio, known as “godfathers of artificial intelligence” for their groundbreaking work in deep learning, also publicly supported the bill in a column published in Fortune magazine on August 15.
Eight of California’s 52 members of Congress signed a letter on Thursday saying the bill would “create unnecessary risks to California’s economy with little benefit to public safety.” They argue it’s too early to create standardized assessments for artificial intelligence because government agencies such as NIST are still working to develop those standards.
They also said the definition of serious harm could be misleading. The bill goes off track by focusing on large-scale disasters such as nuclear weapons while “largely ignoring the obvious risks posed by AI, such as misinformation, discrimination, nonconsensual deepfakes, environmental impacts, and loss of workforce,” the representatives claimed.
SB-1047 includes specific protections for whistleblowers at AI companies under California’s Whistleblower Protection Act.
Alla Valente, a senior analyst at Forrester, said lawmakers are right to focus on cyberattacks, such as the Change Healthcare incident in May, because they have been shown to cause serious harm. “Through the use of generative AI, these attacks can be carried out more effectively and at a larger scale, making regulating AI something that all states must consider as part of protecting and serving their residents,” she said.
The bill presents the challenge of balancing regulation and innovation
“We can advance innovation and safety simultaneously; the two are not mutually exclusive,” Wiener wrote in a public statement on August 15. “While these modifications do not fully reflect the changes requested by Anthropic, a global leader in innovation and safety, we have accepted many of the very reasonable modifications proposed, and I believe we have addressed the core concerns expressed by Anthropic and many others in the industry.”
He noted that Congress is “at an impasse” on AI regulation, so “California must take action to address the foreseeable risks posed by rapidly evolving AI while also promoting innovation.”
Next, the bill needs to pass the Assembly and Senate. If approved, it will go to Gov. Gavin Newsom, likely later in August.
“Organizations are already having to grapple with the risks of generative AI as they comply with existing and emerging AI laws,” said Valente. “At the same time, the growing AI litigation landscape is forcing organizations to prioritize AI governance to ensure they address potential legal liabilities. SB 1047 will create guardrails and standards for AI products, which will increase confidence in genAI-enabled products and potentially accelerate adoption.”