If I built a car that was far more dangerous than other cars, released it without any safety testing, and it ended up killing people, I would likely be held liable and have to pay damages, if not face criminal penalties.
If I created a search engine that, unlike Google, returned as its first result for the query “how to commit mass murder” detailed instructions on how best to carry one out, and someone used that search engine and followed those instructions, I would likely not be held liable, thanks to Section 230 of the Communications Decency Act of 1996.
So the question is: are AI assistants like cars, where manufacturers can be expected to perform safety testing and take responsibility if someone dies, or are they more like search engines?
That’s one question animating the tech industry’s fierce debate over California’s SB 1047, which would require companies that spend more than $100 million training a cutting-edge AI model (like the in-development GPT-5) to conduct safety testing, or face liability if their AI systems cause a “mass casualty incident” or more than $500 million in damages in a single incident or a set of closely related incidents.
The general idea that AI developers should be held responsible for any harm caused by the technology they develop is overwhelmingly supported by the American public, and an earlier version of the bill, which was much stricter, passed the California Senate by a vote of 32 to 1. The bill has the backing of two of the world’s most highly cited AI researchers, Geoffrey Hinton and Yoshua Bengio.
Would holding the AI industry accountable destroy it?
But the bill has faced fierce criticism from many in the tech industry.
“Regulating foundational technology will put an end to innovation,” Yann LeCun, Meta’s chief AI scientist, wrote in an X post denouncing SB 1047. He shared other posts declaring that the bill “potentially destroys California’s incredible history of tech innovation” and wondering aloud whether “SB-1047, coming to the California Assembly, will mean the end of California’s tech industry.” The CEO of HuggingFace, a leader in the AI open source community, called the bill “a major blow to both California and American innovation.”
These apocalyptic comments make me wonder… did we even read the same bill?
To be clear, to the extent that SB 1047 places unnecessary burdens on tech companies (and that burden falls only on companies doing $100 million training runs, which only the largest firms can afford), I think that would be a genuinely bad outcome. It’s entirely possible for regulatory compliance to consume a disproportionate share of people’s time and energy, discourage them from doing anything novel or complicated, and channel effort into demonstrating compliance rather than into the work that matters most. We’ve seen this in other industries.
I don’t think SB 1047’s safety requirements are unnecessarily burdensome, but that’s because I agree with the half of machine learning researchers who believe that powerful AI systems are likely to pose catastrophic dangers. If I agreed with the half who dismiss such risks, I would see SB 1047 as an unnecessary burden and would staunchly oppose it.
To be clear, the outlandish claims about SB 1047 make no sense, but there are also reasonable concerns: if you built a very powerful AI, tweaked it so that it was no longer useful for mass murder, but then released the model as open source so that people could undo the tweaks and use it to commit mass murder, you would still be liable for the resulting damage under SB 1047’s liability provisions.
This would certainly discourage companies from releasing models powerful enough to cause a mass casualty event, or even models their creators merely believe might be that powerful.
The open source community is understandably worried that big companies will decide the legally safest option is to release nothing at all. While I think a model powerful enough to actually cause mass casualties probably shouldn’t be released, it would certainly be a loss for the world (and for the cause of making AI systems safe) if models with no such capabilities were held back out of excess legal caution.
The claims that SB 1047 will spell the end of California’s tech industry are sure to age poorly, and they make little sense on their face. Many of the posts criticizing the bill seem to assume that under current U.S. law you aren’t liable if you develop a dangerous AI that causes a mass casualty event. But you probably already are.
“If you fail to take appropriate precautions against other people causing mass harm, for example by failing to install reasonable safety measures on your dangerous products, you already face enormous liability exposure,” Yale Law professor Ketan Ramakrishnan wrote in response to a post by AI researcher Andrew Ng.
SB 1047 spells out more clearly what counts as reasonable precautions, but it doesn’t invent new concepts in liability law. Even if the bill doesn’t pass, companies should expect to be sued if their AI assistants cause numerous deaths and injuries or hundreds of millions of dollars in damages.
Do you trust that your AI models are truly safe?
What’s also puzzling about LeCun and Ng’s claims is that they both say that AI systems are actually completely safe and that there’s no reason to worry about mass casualty scenarios in the first place.
“I’m not worried about AI turning evil for the same reason I’m not worried about overpopulation on Mars,” Ng famously said. LeCun says one of his main objections to SB 1047 is that it aims to address science fiction risks.
Of course California shouldn’t waste time trying to solve science fiction risks when the state has real problems. But if the critics are right that AI safety concerns are nonsense, then the mass casualty scenarios will never happen, and a decade from now we’ll feel a little foolish for ever worrying that AI might cause them. That might be quite embarrassing for the bill’s drafters, but it wouldn’t kill all of California’s innovation.
So what is causing such fierce opposition? I think it’s because this bill is really a litmus test for the question of whether AI can be dangerous and should be regulated accordingly.
SB 1047 doesn’t actually require all that much, but it’s fundamentally based on the idea that AI systems could pose catastrophic dangers.
AI researchers are almost comically divided about whether that basic premise is correct: many serious, well-respected people with major contributions to the field say that catastrophe is unlikely, while many serious, well-respected people with major contributions to the field say that it is quite likely.
Bengio, Hinton, and LeCun have been called the three godfathers of AI, and they now represent a deep industry divide on whether to take AI’s catastrophic risks seriously. SB 1047 does take them seriously. That’s either the bill’s greatest strength or its biggest mistake. It’s not surprising that skeptic LeCun takes the “mistake” view, while Bengio and Hinton welcome the bill.
I’ve covered many scientific debates, but I’ve never come across one with so little agreement on the core question of whether truly powerful AI systems will soon be possible, and, if so, whether they would be dangerous.
Surveys repeatedly find the field nearly evenly split, and with each new advance in AI, senior industry leaders seem only to dig in on their existing positions rather than change their minds.
But whether powerful AI systems are dangerous is an enormously important question. To get the policy response right, we need to measure more precisely what AI can do and better understand which harm scenarios most merit a policy response. Whatever happens with SB 1047, I have a lot of respect for the researchers trying to answer those questions, and a great deal of frustration with the researchers who treat them as already settled.
A version of this story originally appeared in our Future Perfect newsletter. Sign up here.
