SB 53 FAQs

California is considering a landmark bill called SB 53 that would require the world's largest AI companies to be more transparent about how they manage catastrophic risks from their most powerful models. This FAQ answers the questions that policymakers, AI developers, and concerned citizens most commonly ask about SB 53. It explains who would be affected, what companies would be required to do, and how the bill balances safety with promoting innovation.

For those seeking deeper technical and political context, I've also created SB53.info, an annotated version of the bill text.

How will SB 53 affect startups?

SB 53 will not impose new regulations on early-stage startups. The bill's transparency and safety requirements only apply to "large developers," which are defined in § 22757.11.g as persons who have trained or begun training a foundation model with more than $10^{26}$ floating point operations (FLOPs) and have over $100 million in gross annual revenue. To date, only two developers are known to have met both of these conditions: xAI and OpenAI. Both are extremely well-resourced private companies. xAI was valued at $18 billion in early 2024 and is reported to be worth as much as $200 billion now. OpenAI was valued at $300 billion after a fundraising round in April 2025.

Only OpenAI and xAI are publicly known to have trained models with more than $10^{26}$ FLOPs, and neither company has stated how much the training runs in question cost. But based on public information about AI GPU prices, energy prices, and datacenter construction costs, experts estimate that a $10^{26}$ FLOP training run would cost in the high tens of millions of dollars.[1] This is prohibitively expensive for early-stage startups.
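To see where estimates like this come from, here is a rough back-of-the-envelope sketch in Python. The accelerator throughput, utilization rate, and amortized price per GPU-hour below are illustrative assumptions chosen for the example, not figures taken from the bill or from the literature it cites; plugging in different reasonable values moves the total around, but it stays far out of reach for an early-stage startup.

```python
# Back-of-the-envelope cost estimate for a 1e26 FLOP training run.
# Every hardware and price figure here is an illustrative assumption.

TOTAL_FLOP = 1e26            # SB 53's training compute threshold
PEAK_FLOP_PER_SEC = 1e15     # assumed peak throughput of one H100-class GPU
UTILIZATION = 0.40           # assumed fraction of peak sustained over the run
COST_PER_GPU_HOUR = 1.00     # assumed amortized hardware + energy cost (USD)

gpu_seconds = TOTAL_FLOP / (PEAK_FLOP_PER_SEC * UTILIZATION)
gpu_hours = gpu_seconds / 3600
total_cost = gpu_hours * COST_PER_GPU_HOUR

print(f"GPU-hours required: {gpu_hours:,.0f}")              # about 69 million GPU-hours
print(f"Estimated cost: ${total_cost / 1e6:,.0f} million")  # about $69 million
```

At rented-cloud rates of $2 or more per GPU-hour instead of the amortized $1 assumed above, the same arithmetic lands well above $100 million, which is why published estimates vary with assumptions about who owns the hardware.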

SB 53 does not fix the definition of "large developer" once and for all. It gives the California Attorney General the power to update the definition as needed "to ensure that it accurately reflects technological developments, scientific literature, and widely accepted national and international standards and applies to well-resourced large developers at the frontier." The AG is explicitly forbidden from updating the definition in such a way that "less well-resourced developers, or developers significantly behind the frontier" would count as large developers unless the California legislature authorizes such an update by passing new legislation.[2]

How will SB 53 affect open source?

SB 53's transparency and incident reporting rules apply to all large AI developers, regardless of whether they open-source their models.[3] A large open source developer must publish and follow a safety policy, release a model card for every frontier model it deploys, and report critical safety incidents to the AG, just as a closed source developer must.

However, SB 53 is designed to avoid placing unreasonable burdens on open source developers.

First, it only applies to the very largest and wealthiest AI developers—those training models with over $10^{26}$ FLOPs and earning over $100 million annually. This high threshold means that small open source developers who would struggle to afford compliance costs aren't affected at all.

Second, the bill's incident response requirements are flexible enough that open source developers can still meet them despite having less control over their models after deployment. Developers aren't penalized for failing to report safety incidents that they couldn't have known about, such as misuse of an open source model. And safety policies only have to include an emergency shutdown procedure "to the extent that the foundation model is controlled by the large developer."[4]

In practice, this means large open source developers would need to follow internal safety protocols and be transparent about their risk assessments, but they would not be held responsible for controlling models after public release.

How does SB 53 relate to the California Report on Frontier AI Policy?

The California Report is an expert report on frontier AI safety and governance. It was commissioned in September 2024 by Governor Gavin Newsom and published in June 2025, shortly before SB 53 was amended. The Report surveys approaches California could take to AI governance, putting them in technical and historical context and noting their advantages and drawbacks. It does not advocate for specific policies, but it does recommend principles for future policy to follow and goals for it to aim at.

SB 53 is directly inspired by the principles outlined in the California Report. The bill's sponsor has publicly said as much, and the evidence is all over the bill.

The California Report's strongest recommendation is for AI developers to make themselves more transparent to the public. "Transparency is a fundamental prerequisite of social responsibility and accountability," it says, because "without sufficient understanding of industry practices, the public cannot characterize the societal impact of digital technologies, let alone propose concrete actions to improve business practices." The Report does not recommend specific transparency mechanisms such as safety policies and model cards, but it does name key areas where transparency is most desirable. It says model developers should disclose how they assess the risks associated with their models, what steps they take to mitigate those risks, what they do to secure model weights, and how they test their models before deployment. SB 53 would require large AI developers to publicly reveal all of this information in their safety policies.[5]

The California Report recommends that government establish an adverse event reporting system for incidents involving AI. It says "to better understand the practical impact and, in particular, the incidents or harms associated with AI, we need to develop better post-deployment monitoring."[6] SB 53 puts this recommendation into practice by having the California AG create a system for reporting and documenting critical safety incidents involving AI.

The Report says in its list of key principles that "clear whistleblower protections…can enable increased transparency above and beyond information disclosed by foundation model developers." It later goes on to survey existing whistleblower laws and finds that while all formal employees within most private sector organizations are generally already protected, "a central question in the AI context is whether protections apply to additional parties, such as contractors. Broader coverage may provide stronger accountability benefits."[7] SB 53 would provide this broader coverage by granting whistleblower protection to contractors, freelancers, and unpaid advisors of large AI developers.

Does SB 53 create a new regulatory agency?

No, SB 53 does not create a new agency. Instead, it gives the California Attorney General power to enforce the new transparency rules. The AG monitors AI companies' compliance through three information channels: annual reports from external auditors, critical safety incident reports filed by the companies themselves, and tips from whistleblowers within companies.[8] When these information sources lead the AG to believe a large developer is violating SB 53's transparency requirements, the AG can bring a civil suit against the developer. Violations carry fines ranging from $10,000 for a minor infraction to $10 million for repeated knowing violations that cause catastrophic risk.

Will SB 53 contribute to a regulatory patchwork?

Some critics of state-level AI regulation warn that if each state sets its own AI regulation independently, the US could end up with an incoherent state-by-state patchwork of conflicting rules. Such a patchwork could harm small AI developers by driving up compliance costs beyond what they can afford. Eventually small developers would go out of business, locking in today's leading AI companies and slowing innovation.

This is a legitimate worry, but SB 53 is unlikely to create any problematic patchwork effects for two reasons. First, SB 53 does not impose any new regulations on small developers. It only targets large companies worth billions of dollars who can easily afford the cost of compliance.

Second, if we look at the major AI regulations currently up for debate in other states, we see that none of SB 53's headline transparency requirements are unique to California. The RAISE Act in New York and the AI Safety and Security Transparency Act in Michigan would both require every large AI developer[9] to publish and follow a safety policy. The New York bill would also require large developers to report safety incidents to the state Attorney General. And the Michigan bill would also require every large developer to release model cards for all of their frontier systems and to commission an annual independent audit of their compliance with the safety policy. The upshot is that if an AI company is already in compliance with NY RAISE and the Michigan Transparency Act, their marginal cost of complying with SB 53 should be minimal, as all of the major transparency requirements in SB 53 are also in one or both of those other bills.

Does SB 53 introduce new liability for AI harms?

No, SB 53 does not expand AI companies' liability for harms caused by their models. The bill only makes companies civilly liable for transparency violations, not for anything their models do. A large AI developer can be sued and fined for procedural failures—such as neglecting to publish a safety policy, breaking their own safety policy, or publishing a false or misleading model card. But as long as an AI company is being transparent, they cannot be sued under SB 53 if something goes wrong and one of their systems causes a catastrophe. They are no more liable than they would be under existing law.

Will SB 53 permit private lawsuits against AI developers?

No, SB 53 does not give private actors standing to sue AI developers over transparency violations. The bill states very clearly that only the California Attorney General can bring a civil action against a large developer for breaking the transparency rules.[10]

Separately, the chapter on whistleblower protections allows a whistleblower who believes they've suffered retaliation from a large AI developer to bring a civil action against that developer. If the whistleblower wins their case, the court can grant them injunctive relief from the retaliation they've suffered plus attorney's fees. All of this is standard for whistleblower protection laws. Section 1102.5 of the California Labor Code allows a whistleblower who believes they've suffered retaliation to sue their employer for relief from retaliation plus attorney's fees plus a cash bounty of up to $10,000. The AI Whistleblower Protection Act currently up for debate in the US Senate would also allow a whistleblower who alleges retaliation by their employer to bring a private action against the AI company in question unless the Department of Labor resolves the allegation on their behalf within 180 days.

Does SB 53 require every model to have a kill switch?

No, SB 53 does not directly require any model to have a kill switch. The bill requires every large AI developer's safety policy to state whether they have the "ability to promptly shut down copies of [their] foundation models" in the event of a critical safety incident. But there is no requirement for a large developer to maintain this ability, and the language of the bill acknowledges that appropriate incident response will look different for open source developers than it looks for closed source developers.[11]

Some AI companies might choose to build kill switches into their systems voluntarily. For instance, Anthropic's Responsible Scaling Policy states (§ 7.1) that they are developing procedures to restrict access to their models in the event of a safety emergency. But SB 53 will not force other companies to do the same.

How does SB 53 compare to the EU AI Act?

The AI Act is an EU regulation that sets standards for AI companies operating in Europe. Among other things, the Act lays down safety, security, and transparency rules for providers of general purpose AI models, such as the large developers who would be subject to SB 53. Although the AI Act is not a US law, it still applies to American AI companies that deploy their models within the EU, and all of the leading US companies have agreed to comply with it.

Article 55 of the AI Act overlaps substantially with SB 53. The associated Code of Practice—an official guide that tells companies what they can do to follow the Act—requires every frontier AI company to write and implement a safety and security framework saying how they will assess and manage severe risks from their models. The content companies have to put in these frameworks is even more comprehensive than the content required in SB 53's safety policies. Both laws also require every frontier model a company deploys to have a model card (called a "model report" in the Code of Practice).[12]

But SB 53 goes beyond the EU AI Act in three important ways:

  1. The AI Act does not mandate public transparency from large developers. The Code of Practice lets companies keep their full safety frameworks private, sending them only to the EU AI Office. Complete model reports likewise go just to the AI Office, not to consumers using the model. Under the Code of Practice, companies only publish summarized versions of these documents "if and insofar as is necessary." In contrast, SB 53 requires a large developer to post their safety policy and model cards prominently on their website for all to read.

  2. The AI Act does nothing to keep the State of California informed about critical safety incidents. It requires an AI company to notify the EU AI Office and their national government of any serious incidents caused by their models, but it says nothing about notifying regional governments. Under SB 53, a large developer will also be required to inform the California Attorney General when they become aware of a critical incident involving their models.

  3. The AI Act does not require AI companies to get audited for compliance with their own safety policies. SB 53 will require them to do so.

How does SB 53 balance transparency with security and competitive concerns?

As the California Report notes, making AI companies more transparent could have drawbacks for security and for protecting trade secrets. Companies might disclose information that points hostile actors toward vulnerabilities in their models or holes in their internal security. They might also leak trade secrets or confidential IP to their competitors through public transparency disclosures.

SB 53 accounts for these concerns by allowing AI developers to redact their safety policies and model cards as needed "to protect the large developer’s trade secrets, the large developer’s cybersecurity, public safety, or the national security of the United States." These redactions are entirely up to the developer's discretion, so long as they explain in general terms what they redacted and why and retain an unredacted copy of the document for five years (§ 22757.12.f).


[1] See § 3 in Heim and Koessler, "Training Compute Thresholds." Their precise estimate is that a $10^{26}$ FLOP training run would have cost $70 million in mid-2024.
[2] The relevant paragraphs of the bill are § 22757.15.a and § 22757.15.c.
[3] In fact, the rules apply to a large developer even if they do not deploy their models at all. In principle, a company could initiate a $10^{26}$ FLOP training run and make $100 million of annual revenue without deploying a single model, and they would count as a large developer.
[4] For the precise scope of the incident reporting requirement, see § 22757.13.b, and for the caveat about developers who don't control their models after deployment, see § 22757.12.a.8.
[5] See § 3 of the Report and § 22757.12.a of SB 53.
[6] See § 4 in the Report.
[7] The first quotation comes from pg 4 of the report, and the second from pg 29.
[8] § 22757.14.e requires external auditors to submit a summary of their findings to the California AG within thirty days of auditing a large AI developer. § 22757.13 says that the AG will establish a mechanism for collecting critical safety incident reports, and that any large developer who experiences a critical safety incident will be obliged to report it promptly through the official mechanism.
[9] All three bills use qualitatively similar tests to determine who counts as a large developer. In New York, you're a large developer if you've spent over $5 million on training a single model and over $100 million in aggregate on training all your models. In Michigan, you're a large developer if you've spent over $100 million on training a single model in the last twelve months. And in California, you're a large developer if you've started training a model with over $10^{26}$ FLOPs and you made over $100 million of gross revenue in the previous calendar year.
[10] See § 22757.16.c: "A civil penalty described in this section shall be recovered in a civil action brought only by the Attorney General."
[11] See § 22757.12.a.8.
[12] For more detail on safety and security protocols, see commitment 1 in the chapter on safety and security, and for more on model reports, see commitment 7 in the same chapter.

Thanks to Michael Chen for feedback and to Claude Opus 4 for copyediting.

CC BY-NC-SA 4.0 Miles Kodama. Last modified: August 05, 2025. Built with Franklin.jl.