
TL;DR
- New York state lawmakers pass the RAISE Act, a bill aimed at preventing AI-fueled disasters and requiring safety transparency from large AI labs.
- The bill targets major AI developers like OpenAI, Google, and Anthropic, compelling them to report risks and publish model safety data.
- AI safety advocates including Geoffrey Hinton and Yoshua Bengio support the bill, while Silicon Valley expresses strong opposition.
- The RAISE Act differs from California’s vetoed SB 1047 by avoiding constraints on startups and not requiring a kill switch.
- If signed, it would be the first enforceable transparency law for frontier AI models in the U.S., with penalties up to $30 million.
New York Moves to Regulate Frontier AI Models
In a major move for AI policy in the United States, New York state lawmakers have passed the RAISE Act, a bill designed to prevent artificial intelligence from triggering large-scale disasters. The law targets cutting-edge “frontier AI models,” such as those developed by OpenAI, Google DeepMind, and Anthropic, and requires their developers to meet stringent safety and transparency standards.
The RAISE Act, short for the Responsible AI Safety and Education Act, is now heading to Governor Kathy Hochul’s desk, where it awaits her signature. If enacted, it would mark the first legally binding framework for frontier AI model oversight in the country.
The legislation follows years of concern from the global scientific community that AI systems are outpacing regulation. Leading voices like Nobel Prize winner Geoffrey Hinton and machine learning pioneer Yoshua Bengio have openly endorsed the bill, emphasizing the urgent need to establish guardrails before more powerful AI systems emerge.
Frontier AI Regulation: Who Does the Law Target?
The RAISE Act applies only to large AI companies whose models meet two conditions:
- They are trained using over $100 million in compute resources, and
- They are made available to New York residents.
This means the bill is laser-focused on major AI labs — like OpenAI, Anthropic, Meta, DeepSeek, and Alibaba — not academic institutions or AI startups.
Companies that fall under the bill’s scope will need to:
- Submit safety and security reports outlining their risk mitigation strategies.
- Report any incidents involving misuse, unexpected behavior, or model theft.
- Cooperate with New York’s Attorney General, who can impose civil penalties up to $30 million if firms fail to comply.
These obligations do not extend to companies that only perform post-training (fine-tuning) on existing models, nor does the law require any “kill switch” functionality, the controversial provision that helped doom California’s SB 1047.
RAISE Act vs. SB 1047: What’s Different?
California’s SB 1047 attempted to set similar boundaries for AI models, but it was vetoed amid industry backlash, with critics claiming it would smother innovation. Learning from that experience, New York State Senator Andrew Gounardes, co-sponsor of the RAISE Act, emphasized that this bill won’t hinder academic research or smaller companies.
“The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” Gounardes told TechCrunch. “The people that know [AI] the best say these risks are incredibly likely. That’s alarming.”
Co-sponsor and Assemblymember Alex Bores echoed this sentiment, asserting that the RAISE Act would not deter economic innovation, especially in a state with the third-largest GDP in the U.S.
Silicon Valley Reacts With Strong Opposition
Despite its tailored scope, the RAISE Act has received sharp criticism from Silicon Valley, particularly from high-profile venture firms like Andreessen Horowitz and Y Combinator.
Anjney Midha, a general partner at Andreessen Horowitz, wrote on X (formerly Twitter):
“The NY RAISE Act is yet another stupid, stupid state-level AI bill that will only hurt the US at a time when our adversaries are racing ahead.”
Still, legal experts like Nathan Calvin, General Counsel at the AI safety nonprofit Encode, argue that the bill strikes a fair balance. Unlike SB 1047, the RAISE Act does not require full access to training data, nor does it force developers to build shutdown mechanisms into their models.
Anthropic co-founder Jack Clark has not issued an official position but acknowledged that the bill’s reach may still affect smaller players, depending on how the compute threshold is interpreted.
International Implications: Will AI Labs Exit New York?
A frequent concern with AI regulations, both in the EU and in U.S. states like California, is that tech firms may choose to pull their products from regulated regions altogether. The same worry surfaced with the RAISE Act, as critics warned that top AI labs might stop offering their models in New York.
However, Assemblymember Bores downplayed the likelihood of such a scenario:
“I don’t want to underestimate the political pettiness that might happen, but I am very confident there is no economic reason for AI companies to exit a state like New York.”
Given the state’s economic weight and population, most companies are unlikely to forgo access to the New York market, even if compliance costs rise.
The Data
RAISE Act: Key Compliance Triggers
| Regulation Criteria | Description | Source |
| --- | --- | --- |
| Compute Budget Threshold | > $100 million in training compute | TechCrunch |
| Applies to Models Deployed in | New York State | TechCrunch |
| Maximum Civil Penalty | $30 million | RAISE Act Draft |
| Applies to Companies Like | OpenAI, Google, Anthropic, DeepSeek | CNN |
| Excludes | Startups, Academic Labs | TechCrunch |
The Road Ahead: A New Era of AI Accountability?
If signed by Governor Hochul, the RAISE Act could become a blueprint for AI legislation in other states, or even at the federal level. While the U.S. has historically lagged behind the EU and its AI Act, the bill signals that momentum is shifting in favor of tangible oversight mechanisms.
That precedent matters all the more given that the federal government under the Trump administration has shown little urgency in pursuing centralized AI regulation.
Ultimately, the bill’s fate will determine whether New York can lead the way in responsible AI governance — or become the next cautionary tale of regulatory overreach in the innovation economy.