A06453 Summary:
BILL NO:   A06453B
SAME AS:   S06953-B
SPONSOR:   Bores
COSPNSR:   Lasher, Seawright, Paulin, Tapia, Raga, Shimsky, Reyes, Epstein, Burke, Hevesi, Carroll P, Zaccaro, Hyndman, Lupardo, Kassay, Lee, Davila, Schiavoni, Lunsford, Brown K, Tannousis, Torres, Hooks, Gibbs, Romero, Colton, Conrad, Meeks, Glick, Cruz, Cunningham, Forrest, Chandler-Waterman, Stirpe, Wright, Simon, Dais, Jensen, Rozic, Gonzalez-Rojas, Solages, Gallagher, Otis, Kelles, Weprin, Wieder
MLTSPNSR:

Add Art 44-B §§1420 - 1425, Gen Bus L

Relates to the training and use of artificial intelligence frontier models; defines terms; establishes remedies for violations.
A06453 Memo:
NEW YORK STATE ASSEMBLY
MEMORANDUM IN SUPPORT OF LEGISLATION
submitted in accordance with Assembly Rule III, Sec 1(f)

BILL NUMBER: A6453B          REVISED 6/12/25

SPONSOR: Bores
TITLE OF BILL: An act to amend the general business law, in relation to the training and use of artificial intelligence frontier models

PURPOSE OR GENERAL IDEA OF BILL: To allow for safety reports for powerful frontier artificial intelligence models to limit critical harm.

SUMMARY OF PROVISIONS:

Section 1421 details transparency requirements regarding frontier model training and deployment, including requiring safety plans, disclosing safety incidents, and reporting safety incidents to the attorney general and division of homeland security and emergency services.

Section 1422 defines penalties.

Section 1423 clarifies the duties and obligations.

Section 1424 requires that this article shall only apply to frontier models developed, deployed, or operating in New York state.

Section 1425 establishes severability.

JUSTIFICATION: Artificial intelligence is evolving faster than any technology in human history. It is driving groundbreaking scientific advances leading to life-changing medicines, unlocking new creative potential, and automating mundane tasks. At the same time, experts and practitioners in the field readily acknowledge the potential for serious risks. AI companies, leading scientists, and international bodies are preparing for a world in which AI can be used to conduct devastating cyberattacks, aid in the production of bioweapons, and even circumvent controls imposed by developers.

In March 2023, more than 1,000 tech leaders signed a letter calling for a "pause for at least 6 months" on training frontier models until international safety standards could be established. The signatories, including figures like Rachel Bronson, Steve Wozniak, Andrew Yang, and Elon Musk, span the political spectrum and underscore the urgent need for caution (1).

Since then, AI models have gotten exponentially more powerful. We are only a few years away from when AI models will code themselves; already, 25% of the new code from Google's parent company Alphabet is written by AI (2). In December 2024, Apollo Research tested large AI models by making them believe they would be shut down; models from every lab tested (OpenAI, Anthropic, Google, and Meta) tried to make copies of themselves on new servers and then lied to humans about their actions (3).

Current models are already showing the potential to aid nefarious actors in inflicting harms. China's government has employed Meta's AI model for both broad military uses (4) and citizen surveillance (5). The International AI Safety Report, written by over 100 experts from 30 countries (including the US) and led by the "godfather of AI" Yoshua Bengio, identified several emerging risks, including that an existing model produced plans for biological weapons "rated superior to plans generated by experts with a PhD 72% of the time and provides details that expert evaluators could not find online (6)."

Developers of this technology continue to sound the alarm. In reviewing the safety of its latest model, OpenAI stated, "Several of our biology evaluations indicate our models are on the cusp of being able to meaningfully help novices create known biological threats, which would cross our high-risk threshold. We expect current trends of rapidly increasing capability to continue, and for models to cross this threshold in the near future (7)." Another leading lab, Anthropic, warned that "the window for proactive risk prevention is closing fast" and that governments must put in place regulation of frontier models by April 2026 at the latest (8).
Given New York's legislative calendar, that requires urgent action in our 2025 session. Anthropic also said that while they would prefer action at the federal level, they admitted that "the federal legislative process will not be fast enough to address risks on the timescale about which we're concerned" and "urgency may demand it is instead developed by individual states (9)."

Our laws have not kept up. We do not let people do things as mundane as open a daycare center without a safety plan. This bill simply says that companies spending hundreds of millions of dollars to train the most advanced AI models need to take the following common-sense steps:

1. Have a safety plan to prevent severe risks (as most of them already do);

2. Disclose major security incidents, so that no one has to make the same mistake twice;

3. Not release a model that they know causes an unreasonable risk of a catastrophic harm.

The risks noted above are more than sufficient to justify the measures taken in this bill, but we should be mindful that experts have repeatedly warned of even more severe threats. In 2023, more than a thousand experts, including the CEOs of Google DeepMind, Anthropic, and OpenAI and many world-leading academics, signed a letter stating that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war (10)."

In the face of these dangers, we must keep a clear focus on the immense promise of AI. Regulation needs to be targeted and surgical to reduce risks while promoting AI's benefits. The RAISE Act is designed to do exactly this. Limited to only a small set of very severe risks, the RAISE Act will not require many changes from what the vast majority of AI companies are currently doing; instead, it will simply ensure that no company has an economic incentive to cut corners or abandon its safety plan. Notably, this bill does not attempt to tackle every issue with AI: important questions about bias, authenticity, workforce impacts, and other concerns can and should be handled with additional legislation. The RAISE Act focuses on severe risks that could cause over $1 billion in damage or hundreds of deaths or injuries. For these kinds of risks, this bill is the bare minimum that New Yorkers expect.

PRIOR LEGISLATIVE HISTORY: This is a new bill.

FISCAL IMPLICATIONS FOR STATE AND LOCAL GOVERNMENTS: None.

EFFECTIVE DATE: This act shall take effect ninety days after it becomes law.

(1) https://futureoflife.org/open-letter/pause-giant-ai-experiments/
(2) https://fortune.com/2024/10/30/googles-code-ai-sundar-pichai/
(3) https://www.apolloresearch.ai/research/scheming-reasoning-evaluations
(4) https://www.reuters.com/technology/artificial-intelligence/chinese-researchers-develop-ai-model-military-use-back-metas-llama-2024-11-01/
(5) https://www.nytimes.com/2025/02/21/technology/openai-chinese-surveillance.html
(6) https://arxiv.org/pdf/2501.17805
(7) https://cdn.openai.com/deep-research-system-card.pdf
(8) https://www.anthropic.com/news/the-case-for-targeted-regulation
(9) Ibid.
(10) https://www.safe.ai/work/statement-on-ai-risk