What Can More State Bills Do?

California Senate Bill 53 has at last become law, and with its enactment, the stakes of state AI regulation have changed.

Several other states besides California are considering their own frontier AI transparency bills. New York's RAISE Act has passed both chambers of the state legislature, while Michigan's HB 4668 and Massachusetts's S 2630 are waiting in committee. These three state bills all share two important features with SB 53. First, they have similar scope. All four bills focus on catastrophic risks from frontier models developed by large companies. And second, they all emphasize transparency over liability or prescriptive safety rules. These bills are centrally about showing the public how AI companies manage catastrophic risks from their products, not about making the companies follow dictated protocols or holding them liable for AI harms.

One might think SB 53 makes all the other frontier transparency bills redundant. At least for now, frontier AI companies would rather publish SB 53-compliant safety policies and model cards than be slapped with million-dollar fines, so the California law is enough on its own to incentivize basic transparency. And this transparency cuts across state borders. Once a frontier developer publishes a safety policy, it can be read in every state, not just in California. So isn't frontier transparency a settled issue? What more can New York, Michigan, and Massachusetts accomplish by passing SB 53 look-alikes?

I say they can still accomplish a lot. SB 53 is a great first step toward making frontier AI companies adequately transparent, but it's vulnerable to repeal or amendment, it's not as strong as it should be, and its requirements eventually need to become federal law. For these three reasons, we still need the other state bills to pass.

Defending against repeal or amendment

SB 53 is law for now, but we can't take it for granted. Most of the AI industry still doesn't like the law[1], and they haven't given up on repealing it or watering it down through legislative amendments. Either could be done with a simple majority in both chambers plus the governor's assent. But the more states pass frontier transparency laws, the heavier a lift it will be for the AI industry to kill them all. Eventually, AI companies will give up on fighting transparency at the state level and turn to the federal level instead—more on that below.

We know the tech industry has enough political influence to get popular legislation weakened because they did it to the California Consumer Privacy Act. When CCPA passed in 2018, it was America's first state-level online privacy law, just as SB 53 is America's first state-level frontier AI regulation. Tech industry groups spent millions of dollars lobbying to water CCPA down, and at first, they got their way. The state legislature passed five amendments in 2019 adding exemptions to the Act's privacy requirements.[2] The campaign against CCPA stopped only once California voters approved Proposition 24, which prospectively voids all legislative amendments that might weaken the Act.

The AI industry cares at least as much about SB 53 as the online advertising industry cared about CCPA, and their PACs have two orders of magnitude more to spend on killing frontier AI regulation than tech spent to walk back online privacy in 2019. In the not-so-unlikely event that they get SB 53 repealed or substantially weakened, we'll want to have transparency laws on the books in other states.

Improving on SB 53 at the state level

I think SB 53 moves the needle non-negligibly on catastrophic risk from AI,[3] but it's not as strong as it should be. In fact, it's not even as strong as it was before the draft bill was amended in September. Other states can thus outdo California by writing into their own frontier transparency laws the measures that were cut from SB 53. In particular, I'd be glad to see these three measures revived:

  1. Regular external audits to confirm frontier AI companies are following their safety policies. The July draft of SB 53 would have required every large frontier developer to commission such an external audit annually and to summarize the audit's outcome to the Attorney General. Without these audits, the AG has much less visibility into frontier AI companies, which is unfortunate because only the AG can sue a company for breaking its safety policy. NY RAISE and Michigan HB 4668 would both bring back the auditing requirement, partially addressing this issue.[4]

  2. Mandatory reporting of serious incidents that don't cause death or injury. The September amendments made it so that an AI company only has to report a loss of control incident or frontier model weight exfiltration if it results in death or serious injury.[5] This is a crazy restriction. OpenAI could catch one of their models running a huge rogue internal deployment, and they wouldn't have to report it. Google could discover that the weights of their best model had been stolen by Iran, and they wouldn't have to report it unless they had further evidence that someone was injured. NY RAISE would close both of these reporting loopholes, and HB 4668 would close one of them.

  3. Wider whistleblower protections. The July draft of SB 53 would have given stronger whistleblower protection to anyone performing services for a frontier AI company, including all employees, contractors, advisors, and board members. But the version of SB 53 that became law improves whistleblower protection only for employees "responsible for assessing, managing, or addressing risk of critical safety incidents." Frontier AI companies' safety and risk management teams tend to be a small fraction of their overall headcounts, so these new protections don't apply to most people who will be in a position to notice a company doing something reckless.[6] NY RAISE and HB 4668 would both grant SB 53's new whistleblower protection to all employees of a frontier company.

One more way other state bills can improve on SB 53 is by closing the law's revenue loophole. The California law's full transparency requirements apply to you only if you have both trained or begun training a model with more than 10^26 FLOPs of compute and earned more than $500 million in gross revenue in the previous calendar year. The latter of these two thresholds is supposed to protect developers too small to afford compliance with the full requirements. But it's doubtful that any developers who meet the training compute threshold really need protecting. If you can afford to drop high tens of millions of dollars on a single training run, surely you can also afford to write a twenty-page safety policy. And worse, the revenue threshold may incorrectly exclude frontier developers whose business strategies don't involve earning much revenue. Safe Superintelligence has raised over a billion dollars to train frontier AI models, but as far as is publicly known, it has not earned any revenue. Similarly, Thinking Machines spent the better part of a year training frontier models before they shipped their first product and earned their first revenue. It is unwise for companies like these to be exempt from basic transparency.
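
To make the loophole concrete, here is a minimal sketch of the two-part applicability test described above. The function name, variable names, and example figures are mine, for illustration only; they are not taken from the statute's text.

```python
# Minimal sketch of SB 53's two-part applicability test for the full
# transparency requirements. Names and example figures are illustrative.

TRAINING_COMPUTE_THRESHOLD_FLOP = 1e26   # more than 10^26 FLOPs of training compute
REVENUE_THRESHOLD_USD = 500_000_000      # more than $500M gross revenue in the prior calendar year

def full_requirements_apply(training_compute_flop: float, prior_year_revenue_usd: float) -> bool:
    """Both thresholds must be exceeded; failing either one exempts the developer."""
    return (training_compute_flop > TRAINING_COMPUTE_THRESHOLD_FLOP
            and prior_year_revenue_usd > REVENUE_THRESHOLD_USD)

# A lab that trains past the compute threshold but books little or no revenue
# escapes the full requirements entirely:
print(full_requirements_apply(3e26, 0))               # False: exempt despite a frontier-scale training run
print(full_requirements_apply(3e26, 2_000_000_000))   # True: both thresholds exceeded
```

Because the test is a conjunction, a zero-revenue frontier lab fails it no matter how large its training runs are, which is exactly the gap the Safe Superintelligence and Thinking Machines examples illustrate.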

Playing for a federal transparency law

Passing more state bills makes it more likely that frontier transparency supporters get what we really want—namely, a strong federal law requiring frontier AI companies to share safety-relevant information with national authorities. One advantage of a federal transparency law is that the federal government has more powers than a state to respond to a crisis once transparency has brought it to light. Imagine America's leading AI company sends California OES their quarterly assessment of catastrophic risk from internal use, and it shows the company has automated most of their AI R&D. The company also reveals they have weak evidence their AI R&D agent is scheming, and they don't know how to control it reliably. What is California supposed to do in this nightmare situation? It's most likely beyond the state government's power to stop the company's internal AI development, and SB 53's confidentiality provision[7] blocks it from sounding the alarm to federal authorities. But had the federal government passed an SB 53 clone of its own, it would have received a copy of the same risk assessment, and it could have used its much broader and more flexible powers to mount an effective response.

Another advantage of a federal law is that a federal frontier AI regulator may be able to pool expertise, making it savvier and more effective at interpreting companies' disclosures than a state government would be. A state agency like CA OES won't be able to make much sense of frontier developers' risk assessments unless it has staff with expertise in AI, and hiring such people is not easy. Witness the European AI Office's persistent hiring struggles. If the EU can't entice enough qualified candidates with six-figure salaries and the chance to wield real enforcement power, good luck filling the New York Division of Homeland Security's AI team with competent staff. And even if every state succeeds in building its own AI regulator, they will likely be less effective in aggregate than they would be if their staff were all in DC, working as an integrated team.

There are some promising murmurs of support for a federal frontier AI transparency bill, but none has yet been introduced, and it's unclear when there will be enough political will to pass one. If a future federal AI bill preempts existing state laws (as some are calling for one to do), we should also be ready for AI interests to seize on the ensuing legislative fight as their opportunity to get rid of all transparency requirements.

The good news is that passing more state transparency bills improves our position on both fronts, making Congress more likely to pass a strong transparency bill and less likely to preempt the state bills without a replacement. Each additional state to pass a law like SB 53 sends a signal to national politicians that the public wants frontier AI companies to be transparent. And as companies come into compliance with the state transparency laws, it will become clear to all observers how light and reasonable a burden these laws place on business. Past experience backs this prediction up. Policies that are adopted by more states are historically more likely to become federal policies, and the effect is stronger when a policy is adopted in both red and blue states.[8] This suggests it would be especially valuable to pass Michigan HB 4668, the only frontier transparency bill so far to be introduced in a purple state and sponsored by a Republican.

Making it happen

That's the case for more state bills. If you're convinced, one cheap but valuable step you can take is to write to your elected officials in support of frontier AI transparency. Californians can thank their state reps for passing SB 53. New Yorkers can ask Governor Hochul to sign the RAISE Act. And Massachusetts and Michigan voters can encourage their reps to vote for the transparency bills.


[1] The AI industry does not have a unified stance on SB 53. To Anthropic's credit, they endorsed the bill several weeks before Newsom signed it. But OpenAI, Meta, Google, and Andreessen Horowitz all lobbied against SB 53, and OpenAI's Chris Lehane asked Newsom in an open letter to waive the bill's requirements for all signatories of the GPAI Code of Practice.
[2] The relevant bills were AB 25, AB 874, AB 1146, AB 1355, and AB 1564.
[3] Read—I can tell realistic stories where a mass casualty event that would have happened if not for SB 53 doesn't happen in our world.
[4] RAISE's auditing requirement would go into effect immediately, and HB 4668's would go into effect on the first day of 2026. This fixes a further issue with the July draft's auditing requirement: that it wouldn't have gone into effect until 2030.
[5] See the definition of "critical safety incident" in § 22757.11.d.
[6] One might also worry that if only the safety team is legally empowered to blow the whistle on a catastrophic risk—as opposed to blowing it on a breach of the law—an AI company will try to hide evidence of catastrophic risks from their safety team.
[7] See § 22757.13.b, which promises that assessments of catastrophic risk from internal use are confidential and will only be shared with OES personnel. § 22757.13.e allows the AG and OES to share critical incident reports and AI whistleblower reports with state and federal authorities as they deem appropriate, but it does not apply to internal use risk assessments.
[8] For the first claim, see Connor & Clauset "Predicting the outcomes of policy diffusion from U.S. states to federal law," and for the second, see DellaVigna & Kim "Policy Diffusion and Polarization across U.S. States."
