Broadcast United

Next Steps for SB 1047: California Governor Newsom Has a Chance to Make AI History

Broadcast United News Desk


Supporters say it is a modest law that provides “clear, predictable, common-sense safety standards” for artificial intelligence. Opponents say it is a dangerous and arrogant move that will “stifle innovation.”

Either way, SB 1047 — California state Sen. Scott Wiener’s bill to regulate advanced artificial intelligence models offered by companies doing business in the state — has now passed the California Assembly, 48 to 16. The bill passed the state Senate in May by a 32-1 vote. Once the Senate agrees to the Assembly’s amendments, the measure is expected to head to Gov. Gavin Newsom’s desk.

The bill, which would hold AI companies liable for catastrophic harms their “frontier” models could cause, has the support of numerous AI safety groups as well as prominent figures in the field, such as Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, who warn that the technology could pose a huge, even existential, threat to humanity. It is also supported by Elon Musk, who runs an artificial intelligence company, xAI, in addition to his other businesses.

Almost all of the tech industry opposes SB 1047, including OpenAI, Facebook, the powerful investors Y Combinator and Andreessen Horowitz, and some academic researchers who fear it threatens open source AI models. Another AI giant, Anthropic, lobbied to water down the bill; after many of its proposed amendments were adopted in August, the company said the bill’s “benefits may outweigh the costs.”

Despite strong industry opposition, the bill appears to be popular with Californians — though all the polling on it has been funded by interested parties. A recent poll by the AI Policy Institute, which supports the bill, found that 70% of residents are in favor, with even higher support among those working in California’s tech sector. A poll commissioned by the California Chamber of Commerce found that most Californians are opposed, but its wording was, to put it mildly, slanted: it described the bill as requiring developers to “pay tens of millions of dollars in fines if they don’t comply with orders from state bureaucrats.” The AI Policy Institute poll presented both pro and con arguments; the California Chamber of Commerce presented only the “no” case.

The bill passed the Assembly and state Senate with broad bipartisan support, and that public backing (when measured in an unbiased way) might suggest Governor Newsom is likely to sign it. But it’s not that simple. Andreessen Horowitz, the $43 billion venture capital giant, has hired Jason Kinney, a close friend of Newsom’s and a Democratic operative, to lobby against the bill. Some powerful Democrats, including eight members of the U.S. House of Representatives from California and former Speaker Nancy Pelosi, have called for a veto, echoing tech industry talking points.

So Newsom may well veto the bill, preventing California — the center of the artificial intelligence industry — from becoming the first state to enact strict AI liability rules. What’s at stake is not just AI safety in California, but AI safety in the United States and, indeed, the world.

Given the intense lobbying it has attracted, one might assume SB 1047 is an aggressive, overbearing bill — but, especially after several rounds of amendments in the state Legislature, the actual law doesn’t do very much.

It would provide whistleblower protections for tech workers, creating a process for people with confidential information about risky practices at AI labs to file complaints with the state attorney general without fear of retaliation. The bill also requires AI companies that spend more than $100 million to train an AI model to have a safety plan. (That threshold is set extremely high precisely to protect California’s startups, which had objected that the compliance burden would be too heavy for small companies.)

So what about the bill could have triggered months of panic, intense lobbying by California’s business community, and unprecedented intervention by California’s federal representatives? Part of the answer is that the bill used to be stronger. The initial version set the compliance threshold in terms of computing power used rather than dollars spent, meaning that as computing got cheaper, more and more companies would have fallen under the bill over time. It would also have established a state agency, the Frontier Model Division, to review safety plans; industry objected to this as an apparent power grab.

Another part of the answer is that many people have been told, wrongly, that the bill does more than it does. It was falsely claimed that AI developers could be guilty of a felony regardless of whether they were involved in a harmful incident, when in fact the bill’s only criminal provision covered knowingly committing perjury. (Those provisions were later removed anyway.) Rep. Zoe Lofgren of the House Science, Space, and Technology Committee wrote a letter in opposition falsely claiming that the bill requires compliance with guidance that doesn’t yet exist.

In fact, that guidance does exist (you can read it in full here) — and in any case, the bill doesn’t require companies to comply with it. It says only that developers “shall consider industry best practices and applicable guidance” from the U.S. Artificial Intelligence Safety Institute, the National Institute of Standards and Technology, government-operated agencies, and other reputable organizations.

Unfortunately, much of the discussion of SB 1047 has focused on these straight-up false claims, which in many cases have been made by people who should know better.

The premise of SB 1047 is that in the near future, AI systems may become powerful enough to be dangerous, and so some oversight is needed. That core premise is highly contested among AI researchers, and the divide is best exemplified by the three men often called the “godfathers of machine learning”: Turing Award winners Yoshua Bengio, Geoffrey Hinton, and Yann LeCun. Bengio — a 2023 Future Perfect award winner — and Hinton have spent the last few years convinced that the technology they created could kill us all, and have advocated for regulation and oversight. Hinton stepped down from Google in 2023 to speak publicly about his fears.

Meta chief AI scientist LeCun takes the opposite view, declaring that such concerns are nonsense and that any regulation would strangle innovation. Bengio and Hinton support the bill; LeCun opposes it, particularly the idea that AI companies should face liability if their AI is used in a mass casualty incident.

In that sense, SB 1047 sits at the center of a symbolic tug-of-war: Is the government serious about AI safety? The actual text of the bill may be limited, but it signals that the government is listening to the half of the experts who believe AI could be extremely dangerous — and the implications of that are huge.

It’s that sentiment that has driven the most intense lobbying against the bill from venture capitalists Marc Andreessen and Ben Horowitz, whose firm a16z has been working to block it, and that prompted the highly unusual step of asking federal lawmakers to oppose a state bill. More mundane politics may also have played a role: According to Politico, Pelosi opposed the bill because she is trying to court tech venture funding for her daughter, who is expected to run for a House seat against Scott Wiener.

Why SB 1047 is important

It may seem strange that legislation in just one US state has so many people anxious. But remember: California is no ordinary state. It is home to several of the world’s leading artificial intelligence companies.

What happens there is particularly important because, at the federal level, lawmakers have been dragging out the process of regulating artificial intelligence. As Washington dithers and an election looms, states are stepping in with new laws. If Newsom signs it, California’s bill would be a big piece of that puzzle, setting the direction for the broader US.

The rest of the world is watching, too. “Countries around the world are looking at these drafts for ideas that might influence their AI legal decisions,” Victoria Espinel, chief executive of the Business Software Alliance, a lobbying group representing major software companies, told the New York Times in June.

Even China — often invoked as the boogeyman in American conversations about AI development (“we can’t afford to lose the arms race with China”) — has shown signs of concern for safety, not just a desire to race ahead. Bills like SB 1047 can show the rest of the world that Americans care about safety, too.

Frankly, it’s refreshing to see lawmakers wise up to tech’s favorite tactic: claiming it can regulate itself. That argument may have been dominant in the social media age, but it’s increasingly untenable. We need to regulate Big Tech. And that means using not just carrots but sticks as well.

Newsom has a chance to do something historic. If he doesn’t, he’ll face some blowback. The AI Policy Institute’s polling shows that 60% of voters are prepared to blame him for future AI-related incidents if he vetoes SB 1047 — and they say they would punish him at the ballot box if he runs for higher office: 40% of California voters said they would be less likely to vote for Newsom in a future presidential primary if he vetoes the bill.
