This is not California Sen. Scott Wiener’s first attempt to address the dangers of AI.
In 2024, Silicon Valley mounted a fierce campaign against SB 1047, Wiener’s controversial AI safety bill, with technology leaders warning that it would curb the American AI boom. Gov. Gavin Newsom ultimately vetoed the bill, citing similar concerns, and a popular AI hacker house promptly threw an “SB 1047 Veto Party.” One attendee told me, “Thank God, AI is still legal.”
Now Wiener is back with a new AI safety bill, SB 53, which sits on Governor Newsom’s desk awaiting his signature or veto in the coming weeks. This time, the bill is much more popular. At the very least, Silicon Valley doesn’t seem to be fighting it.
Anthropic officially endorsed SB 53 earlier this month. Meta spokesperson Jim Cullinan told TechCrunch that the company supports AI regulation that balances guardrails with innovation, and that “SB 53 is a step in that direction,” though there are areas for improvement.
Dean Ball, a former White House AI policy advisor, told TechCrunch that SB 53 is a “win for reasonable voices,” and said he believes Governor Newsom is likely to sign it.
SB 53 would impose some of the country’s first safety reporting requirements on AI giants such as OpenAI, Anthropic, xAI, and Google. Many AI labs voluntarily publish safety reports explaining how their AI models could be used to create bioweapons and other dangers, but they do so at their own discretion and not always consistently.
The bill requires major AI labs, specifically those generating more than $500 million in revenue, to publish safety reports for their most capable AI models. Like SB 1047, it focuses squarely on the worst categories of AI risk: contributing to human deaths, cyberattacks, and the creation of chemical weapons. Governor Newsom is considering several other bills that address other types of AI risk, such as engagement-optimization techniques in AI companions.
SB 53 also establishes CalCompute, a state-operated cloud computing cluster meant to provide AI research resources beyond the major tech companies, and creates protected channels for employees at AI labs to report safety concerns to government officials.
One reason SB 53 is more popular than SB 1047 is that it is less severe. SB 1047 would also have held AI companies liable for harms caused by their AI models, whereas SB 53 focuses on requiring self-reporting and transparency. And SB 53 applies narrowly to the world’s largest tech companies rather than to startups.
But many in the tech industry still believe states should leave AI regulation to the federal government. In a recent letter to Governor Newsom, OpenAI argued that AI labs should only have to adhere to federal standards, a curious argument to make to a state governor. The venture firm Andreessen Horowitz published a recent blog post vaguely suggesting that several California bills could violate the Constitution’s dormant Commerce Clause.
Senator Wiener addresses these concerns head on. He says he has little faith that the federal government will pass meaningful AI safety regulation, which means states need to step up. In fact, Wiener believes the Trump administration has been captured by the tech industry, and that recent federal efforts to block state AI laws are a form of Trump “rewarding his funders.”
The Trump administration has made a pronounced shift away from the Biden administration’s focus on AI safety toward a focus on growth. Shortly after taking office, Vice President JD Vance appeared at an AI conference in Paris and said, “I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I’m here to talk about AI opportunity.”
Silicon Valley cheered this shift, exemplified by Trump’s AI Action Plan, which removed barriers to building the infrastructure needed to train and serve AI models. Today, Big Tech CEOs are regularly seen dining at the White House or announcing $10 billion data centers alongside President Trump.
Senator Wiener believes it is important for California to lead the nation on AI safety, but without suffocating innovation.
I recently interviewed Senator Wiener to discuss his years at the negotiating table with Silicon Valley and why he has focused on AI safety bills. Our conversation has been lightly edited for clarity and brevity.
Senator Wiener, I last interviewed you when SB 1047 was sitting on Governor Newsom’s desk. Talk me through the journey you’ve taken to regulate AI safety over the past few years.
It’s been a roller coaster, an incredible learning experience, and truly rewarding. Not only in California but in the national and international discourse, this issue [of AI safety] has been elevated.
We have this incredibly powerful new technology that is changing the world. How do we make sure it benefits humanity in a way that reduces risk? How do we promote innovation while being very attentive to public health and public safety? It’s an important and, in some ways, existential conversation about the future. SB 1047, and now SB 53, have helped foster that conversation about safe innovation.
What have you learned from the past 20 years of the tech industry about the importance of laws that can hold Silicon Valley accountable?
I represent San Francisco, just north of Silicon Valley itself, so we’re in the middle of all of it. But we’ve also seen how some of the wealthiest companies in world history, the big tech companies, have been able to stop federal regulation.
Every time I see a tech CEO having dinner at the White House with an aspiring fascist dictator, I have to take a deep breath. These are all truly brilliant people who have created enormous wealth. Many of the people I represent work for them. It’s really painful to see the deals being struck in Saudi Arabia and the United Arab Emirates, and how some of that money finds its way to Trump’s meme coins. That concerns me deeply.
I’m not anti-tech. I want to see technological innovation happen; that’s very important. But this is an industry we should not trust to regulate itself or to honor voluntary commitments. And that’s not a knock on anyone. This is capitalism, and it can produce enormous prosperity, but it can also cause harm if there is no sensible regulation to protect the public interest. When it comes to AI safety, we are trying to thread that needle.
SB 53 focuses on the worst harms AI could conceivably cause: death, massive cyberattacks, and the creation of bioweapons. Why focus there?
The risks of AI are varied. There’s algorithmic discrimination, job loss, deepfakes, and scams. There have been various bills in California and elsewhere to address those risks. SB 53 was never intended to cover the field and address every risk AI creates. It focuses on one specific category of risk: catastrophic risk.
The issue came organically from people in San Francisco’s AI community: startup founders, frontline AI technologists, the people building these models. They came to me and said, “This is an issue that needs to be addressed in a thoughtful way.”
Do you see AI systems as inherently unsafe, capable of causing death and massive cyberattacks?
I don’t think they are inherently safe. I know there are many people at these labs who care deeply about trying to mitigate risk. And again, it’s not about eliminating risk. Life is about risk. Unless you live in a basement and never leave, you’re going to take risks in your life. Even in your basement, the ceiling could cave in.
Are there risks that some AI models could be used to do great harm to society? Yes, and we know there are people who would want to do that. We should try to make it hard for bad actors to cause these severe harms, and so should the people developing these models.
Anthropic came out in support of SB 53. How have conversations with other industry players gone?
We’ve talked to everyone: large companies, small startups, investors, academics. Anthropic has been really constructive. Last year they never formally supported [SB 1047], but they said positive things about aspects of the bill. I don’t think [Anthropic] loves every aspect of SB 53, but I think they concluded the bill was worth supporting.
The large AI labs I’ve spoken with are not at war with this bill the way they were with SB 1047. It’s not surprising: SB 1047 was a liability bill, while SB 53 is more of a transparency bill. And because the bill really focuses on the big companies, the startups aren’t engaged this year.
Do you feel pressure from the huge AI super PACs that have formed over the past few months?
This is another symptom of Citizens United. The wealthiest companies in the world can pour endless resources into these PACs to try to intimidate elected officials. Under the rules we have, they have every right to do that. It has never really affected the way I approach policy. For as long as I’ve been in elected office, there have been groups trying to destroy me. Various groups have spent millions trying to blow me up. I’m just trying to do right by my constituents and make my community, San Francisco, and the world a better place.
What is your message to Governor Newsom as he deliberates whether to sign or veto this bill?
My message is: we heard you. You vetoed SB 1047 and issued a very comprehensive and thoughtful veto message. You wisely convened a working group that produced a very strong report, and we really looked to that report in crafting this bill. The governor set out a path, and we have tried to follow that path to reach agreement, and I hope we’ve arrived there.