
California State Senator Scott Wiener (D-San Francisco) is best known for his ambitious bills on housing and public safety, a legislative record that has made him one of the tech industry’s most popular lawmakers.
But his proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, also known as SB 1047, has drawn the ire of that same industry. The bill would require companies that spend more than $100 million to train “frontier models” to conduct safety testing and to be able to shut down their models in the event of a safety incident. Venture capital giants Andreessen Horowitz and Y Combinator have publicly condemned the bill.
I spoke with Wiener this week about SB 1047 and its critics; our conversation is below (condensed for brevity).
Kelsey Piper: I want to put to you the objections I’ve heard about SB 1047 and give you a chance to respond. One category of concern is that the bill would prohibit using a model, or making it available for public use, if it poses an unreasonable risk of causing serious harm.
What is an unreasonable risk? Who decides what counts as reasonable? Much of Silicon Valley is skeptical of regulators, so people don’t trust that this discretion will be used well rather than abused.
Senator Scott Wiener: To me, SB 1047 is a light-touch bill in many ways. It’s a serious bill, and it’s a big bill. I think it’s an impactful bill, but it’s not hardcore. It doesn’t require a license. Some people, including some CEOs, have said there should be a licensing requirement. I disagree.
Some people think there should be strict liability. That’s the rule for most product liability. I disagree. [AI companies] don’t have to get permission from an agency to release [models]. They have to do safety testing, which they all say they’re currently doing or planning to do. If safety testing uncovers significant risks, and we define those risks as catastrophic, then you have to take mitigation steps. Not eliminate the risk, but try to reduce it.
There is already a legal standard today: if a developer releases a model and that model ends up being used in a way that harms someone or something, the developer can be sued, and that’s likely a negligence standard about whether they acted reasonably. That existing liability is much broader than the liability in the bill. Under the bill, only the attorney general can sue, whereas under tort law anyone can sue. Model developers already face far broader potential liability than that.
Yes, I’ve seen some opposition to the bill that seems to rest on misunderstandings of tort law, with people saying things like, “That’s like holding the engine manufacturer responsible for a car accident.”
And that’s true. If someone crashes their car and the engine design was a contributing factor to the crash, the engine manufacturer can be sued. Negligence would have to be proven.
I’ve discussed this with startup founders, venture capitalists, and people at large tech companies, and I’ve never heard anyone dispute the reality that the liability that exists today is far broader than the liability in the bill.
We do hear conflicting arguments from opponents. Some say, “This is all science fiction; anyone who’s concerned about safety is part of a cult; it’s not real; the capabilities are too limited.” Of course that’s not true. These are powerful models with huge potential to make the world a better place. I’m very excited about AI. I’m not a pessimist at all. And then the same people say, “There’s no way we can be held responsible if these disasters happen.”
Another challenge to this bill comes from open source developers, who have benefited greatly from Meta’s release of Llama (a generously licensed model, sometimes referred to as open source). They worry that the bill will make Meta understandably reluctant to release models in the future out of concern for liability. Of course, if a model is truly dangerous, no one wants it released. But the worry is that these concerns could make companies far too conservative.
When it comes to open source, including but not limited to Llama, I take criticism from the open source community very, very seriously. We have engaged with people in the open source community and made amendments in direct response to them.
The shutdown provision (a requirement in the bill that model developers have the ability to enact a full shutdown of a covered model, so that it can be “unplugged” if things go wrong) was one of the areas of greatest concern.
We made an amendment to make it clear that once a model is out of your hands, you have no responsibility to shut it down; someone who open sources a model has no duty to shut it down.
And then another thing we did was make an amendment concerning people who fine-tune or otherwise modify models. If you make more than minimal changes to a model, if you make significant changes, then at some point it effectively becomes a new model and the original developer is no longer liable. There are a few other smaller amendments, but those are the big ones we made in direct response to the open source community.
Another question I’ve heard is: Why focus on this issue at all, when California faces so many pressing problems?
Whatever issue you work on, you hear people say, “Don’t you have more important things to do?” Yes, I’m still working on housing. I’m working on mental health and addiction treatment. I’m constantly working on public safety. I have an auto theft bill and a bill on the street sale of stolen goods. And I’m also working on a bill to make sure we both foster innovation in AI and do so responsibly.
As a policymaker, I have always been very pro-technology. I support our tech ecosystem, even though it often comes under attack. I supported California’s net neutrality law, which promotes an open and free internet.
But I also see that in technology, we sometimes fail to get ahead of very obvious problems. We saw this with data privacy. We finally got a data privacy law in California, and the opponents were on record saying the same things: that it would destroy innovation and that no one would want to work here.
My goal is to leave lots of room for innovation while also fostering the responsible deployment, training, and release of these models. Some argue the bill will stifle innovation and drive companies out of California; we hear that about almost every bill. But it’s important to understand that the bill applies not only to people who develop models in California but to everyone who does business in California. So you can move to Miami, but unless you stop doing business in California, which you won’t, you still have to comply.
I want to talk about one of the interesting elements of the debate over this bill: how popular it is almost everywhere except Silicon Valley. It passed the state Senate 32-1, with bipartisan support. One poll found that 77 percent of Californians approve of it, and more than half strongly approve.
But the people who hate it are concentrated in San Francisco. How did this end up being your bill?
In some ways, I was the best author for this bill, because in San Francisco I am surrounded by and immersed in AI. The genesis of the bill was that I started talking to a bunch of frontline AI technologists and startup founders. That was in early 2023, and I began hosting a series of salons and dinners with AI people. Some of these ideas started to take shape there. So in one way I was the best author, because I have access to incredibly brilliant people in tech. In another way, I was the worst author, because some people in San Francisco are unhappy with me.
One of the challenges I face as a journalist is conveying to people who aren’t in San Francisco, and who aren’t part of these conversations, that AI is a really big, high-stakes thing.
It’s very exciting, because when you start to imagine the possibilities: could we cure cancer? Could we find treatments that are highly effective against a wide range of viruses? Could we achieve breakthroughs in clean energy that no one has ever contemplated? There are so many exciting possibilities.
But every powerful technology comes with risk. [This bill] is not about eliminating risk; life is full of risk. It’s about making sure that at least we keep our eyes open, that we understand the risks, and that if there are ways to reduce them, we try.
That’s all we’re asking for in this bill, and I think the vast majority of people will support that.
A version of this story originally appeared in the Future Perfect newsletter. Sign up here!