
TL;DR
- California State Senator Scott Wiener reintroduces AI transparency mandates via SB 53, building on lessons from the vetoed SB 1047
- Bill would require top AI developers like OpenAI and Google to publish safety protocols and incident reports
- SB 53 includes whistleblower protections and a proposal for a public AI cloud infrastructure called CalCompute
- Unlike SB 1047, the bill avoids direct liability for AI harms, easing startup concerns
- The bill faces review in the Assembly Committee on Privacy and Consumer Protection
SB 53 Aims to Enforce Transparency, Not Liability
California’s renewed attempt to regulate artificial intelligence safety has taken a more strategic shape in 2025 with the introduction of new amendments to Senate Bill 53 (SB 53). The bill, led by State Senator Scott Wiener, seeks to require major AI companies to publish safety and security protocols and issue public reports following any significant incidents.
Wiener’s earlier attempt — SB 1047 — was vetoed last year after heavy lobbying by the tech sector. This time, Wiener’s team has aligned the proposal more closely with the recommendations of a state AI policy group led by AI luminaries such as Fei-Fei Li, the Stanford professor and World Labs founder.
“This is still a work in progress,” Wiener noted. “We want this to be the most scientific and fair law it can be.”
CalCompute and Whistleblower Protections in Focus
Among SB 53’s most notable additions is the proposal for CalCompute — a public cloud computing cluster that would give startups and researchers access to state-supported AI infrastructure, potentially lowering barriers to entry for early-stage AI work.
The bill also introduces whistleblower protections for employees at AI companies who believe their tools pose a “critical risk” to society, defined as contributing to the death or injury of more than 100 people or to more than $1 billion in damage.
Importantly, SB 53 avoids the liability provisions that concerned many startups under SB 1047. It also exempts developers who fine-tune existing open-source models or work with models created by large firms.
Momentum Builds Despite Industry Pushback
While some companies like Anthropic have signaled tepid support for AI safety disclosures, tech giants including OpenAI, Google, and Meta have historically resisted state-level regulation. Their reluctance was underscored by Google’s delay in publishing a safety report for Gemini 2.5 Pro and OpenAI’s omission of a similar document for GPT-4.1.
“Ensuring AI is developed safely should not be controversial,” said Geoff Ralston, former Y Combinator president. “With no serious federal action, states must step up. SB 53 is a thoughtful example of state leadership.”
The Data
| Metric / Provision | Detail | Source |
| --- | --- | --- |
| AI Safety Transparency Requirement | Yes, for top developers (e.g., OpenAI, Google, Anthropic) | SB 53 |
| Whistleblower Protections Included | Yes, for risks of >100 deaths/injuries or >$1B in damage | TechCrunch |
| Liability for AI Harms | Not included in SB 53 | Sen. Wiener’s press release |
| CalCompute Cloud Proposal | Yes, for startups and researchers | CA AI Policy Group report |
| Bill’s Next Step | Review by Assembly Committee on Privacy and Consumer Protection | California Legislature tracker |
| Related Federal Action | 10-year AI moratorium proposal defeated in Senate (99–1) | U.S. Senate vote record |
Regulatory Landscape: California vs. Federal and State Peers
While federal lawmakers briefly considered freezing state AI laws through a 10-year moratorium, the idea was overwhelmingly rejected in a 99–1 Senate vote. This preserved California’s ability to pursue legislation like SB 53—and opened the door for other states to follow suit.
New York is already considering a companion measure, the RAISE Act, which would impose similar safety reporting requirements.
SB 53’s proponents argue it balances consumer safety, innovation freedom, and practical oversight by targeting only the largest model creators, not small developers or open-source enthusiasts.