California’s new AI safety bill: Why tech giants worry about liability and innovation

Broadcast United News Desk


If I built a car that was significantly more dangerous than other cars, released it without any safety testing, and people died, I could be held responsible and have to pay damages or even face criminal penalties.

If I built a search engine that, unlike Google, returned as its first result for the query “how can I commit mass murder” detailed instructions on how best to commit serial killings, and someone used my search engine and followed those instructions, I would most likely not be held liable, largely thanks to Section 230 of the Communications Decency Act of 1996.

So the question is: Are AI assistants more like cars, where we can require manufacturers to perform safety tests or be held liable if someone dies? Or are they more like search engines?

That is the question animating the tech world’s current debate over California’s SB 1047, newly passed legislation that requires safety testing for companies that spend more than $100 million training AI “frontier models,” like the in-progress GPT-5. Otherwise, they would be liable if their AI systems cause a “mass casualty incident” or more than $500 million in damages in a single incident or a set of closely related incidents.

The general concept that AI developers should be held accountable for the harm caused by the technology they create is very popular with the American public, and it has won the endorsement of Geoffrey Hinton and Yoshua Bengio, two of the world’s most cited AI researchers. Even Elon Musk voiced support on Monday night, saying that while “this is a difficult decision and will make some people uncomfortable,” the state should pass the bill to regulate AI “just as we regulate any product/technology that poses a potential risk to the public.”

The revised bill, which is less stringent than its original version, passed the state Assembly on Wednesday by a vote of 41 to 9. The revisions eliminated criminal penalties for perjury, added new protections for startups’ ability to fine-tune open-source AI models, and narrowed (but did not eliminate) pre-harm enforcement. To become state law, it next requires the signature of Gov. Gavin Newsom.

“SB 1047 — our AI safety bill — just passed the Legislature,” state Senator Scott Wiener wrote on X. “I’m proud of the diverse coalition behind this bill — a coalition that believes deeply in innovation and safety. AI has the potential to make the world a better place.”

Will holding the AI industry accountable destroy it?

However, much of the tech community fiercely criticized the bill.

“Regulating foundational technologies will stifle innovation,” Yann LeCun, chief AI scientist at Meta, wrote in an X post condemning 1047. He shared other posts declaring that “this could destroy California’s illustrious history of technological innovation” and wondered aloud, “Does the SB-1047 bill passed by the California State Assembly mean the end of California’s tech industry?” The CEO of HuggingFace, a leader in the open source AI community, said the bill “deals a massive blow to innovation in both California and the United States.”

These apocalyptic comments make me wonder…are we reading the same bill?

To be clear, insofar as 1047 places unnecessary burdens on tech companies, I do think that would be a very bad outcome, although those burdens will only fall on companies doing $100 million training runs, which is possible only for the largest companies. It is entirely possible, as we have seen in other industries, for regulatory compliance to take up too much of people’s time and energy, discourage people from doing anything different or complex, and focus energy on proving compliance rather than where it is needed most.

I don’t think the safety requirements in 1047 are overly onerous, but that’s because I agree with the half of machine learning researchers who think powerful AI systems could pose catastrophic risks. If I agreed with the half of machine learning researchers who dismiss such risks, I would consider 1047 a pointless burden, and I would firmly oppose it.


To be clear, while some of the claims about 1047 are absurd, there are also legitimate concerns. If you build an extremely powerful AI, fine-tune it so that it cannot help commit mass murder, but then open source the model so that people can undo the fine-tuning and use it for mass murder, under 1047’s liability language you can still be held liable for the harm caused.

This would certainly discourage companies from publicly releasing models once they are powerful enough to cause a mass casualty incident, or even once their creators think they might be powerful enough to cause one.

The open source community is understandably concerned that large companies will decide that the legally safest option is to never release anything. While I think any model powerful enough to cause a mass casualty event should probably not be released, it would certainly be a loss for the world (and for the cause of making AI systems safe) if models without that capability got caught up in an excess of legal caution.

Claims that 1047 will end the tech industry in California are guaranteed to age poorly, and they don’t even make sense on their face. Many of the posts condemning the bill seem to assume that under current US law, you are not liable if you develop a dangerous AI that causes a mass casualty incident. But you probably already are.

“If you fail to take reasonable precautions to prevent mass harm to others, such as by failing to install reasonable safeguards in your dangerous product, you do take on a lot of liability!” Ketan Ramakrishnan, a professor of law at Yale University, wrote in response to one such post by AI researcher Andrew Ng.

1047 more clearly defines what constitutes reasonable precautions, but it does not invent some new concept of liability law. Even if it doesn’t pass, companies will certainly be sued if an AI assistant causes a mass casualty incident or hundreds of millions of dollars in damages.

Do you really believe your AI models are safe?

Another confusing thing about LeCun and Ng’s advocacy is that both say AI systems are, in fact, completely safe and there’s no reason to worry about mass casualty scenarios at all.

“I say I’m not worried about AI turning evil for the same reason I’m not worried about overpopulation on Mars,” Ng has famously said. He has explained that one of his main objections to 1047 is that the bill is designed to address science fiction risks.

I certainly don’t want California to spend its time addressing science fiction risks, especially when the state has very real problems. But if the critics are right that AI safety concerns are nonsense, then mass casualty incidents won’t happen, and 10 years from now we’ll all feel silly for having worried that AI could cause them at all. That might be deeply embarrassing for the bill’s authors, but it won’t lead to the death of all innovation in California.

So what has led to such a strong backlash? I think the bill has become a litmus test for the question of whether AI is dangerous and worthy of regulation.

SB 1047 doesn’t actually ask for that much, but it’s fundamentally based on the idea that AI systems can be catastrophically dangerous.

It is striking how widely AI researchers disagree about whether this basic premise is correct. Many serious, respected people who have made important contributions to the field say a catastrophe is unlikely. Many other serious, respected people who have made important contributions to the field say the chances of one are quite high.

Bengio, Hinton, and LeCun, known as the three godfathers of AI, have become symbols of the industry’s deep divisions over whether to take catastrophic AI risks seriously. SB 1047 takes them seriously; that is either its greatest strength or its biggest mistake. Not surprisingly, LeCun, a staunch skeptic, takes the “mistake” view, while Bengio and Hinton welcome the bill.

I have covered plenty of scientific controversies, and I have never encountered one with so little consensus on its core question: whether to expect truly powerful AI systems to be possible soon, and, if so, whether they would be dangerous.

Surveys repeatedly find that the field is nearly split in half, with senior industry leaders seemingly doubling down on existing positions rather than changing their minds with each new advance in AI.

But whether you think powerful AI systems are dangerous or not, this matters a lot. To make the right policies, we need to get better at measuring what AI can do and better understand which harm scenarios are most worthy of policy responses. I have great respect for researchers who are trying to answer these questions — but I am very frustrated by those who try to treat them as closed questions.

Update: August 28, 7:45 p.m. ET: This article was originally published on June 19 and has been updated to reflect the passage of SB 1047 by the California Legislature.

