STATE OF NEW YORK
________________________________________________________________________
6453--B
2025-2026 Regular Sessions
IN ASSEMBLY
March 5, 2025
___________
Introduced by M. of A. BORES, LASHER, SEAWRIGHT, PAULIN, TAPIA, RAGA,
SHIMSKY, REYES, EPSTEIN, BURKE, HEVESI, P. CARROLL, ZACCARO, HYNDMAN,
LUPARDO, KASSAY, LEE, DAVILA, SCHIAVONI, LUNSFORD, K. BROWN, TANNOUS-
IS, TORRES, HOOKS, GIBBS, ROMERO, COLTON, CONRAD, MEEKS, GLICK, CRUZ,
CUNNINGHAM, FORREST, CHANDLER-WATERMAN, STIRPE, WRIGHT, SIMON, DAIS,
JENSEN, ROZIC, GONZALEZ-ROJAS -- read once and referred to the Commit-
tee on Science and Technology -- committee discharged, bill amended,
ordered reprinted as amended and recommitted to said committee --
reported and referred to the Committee on Codes -- committee
discharged, bill amended, ordered reprinted as amended and recommitted
to said committee
AN ACT to amend the general business law, in relation to the training
and use of artificial intelligence frontier models
The People of the State of New York, represented in Senate and Assembly, do enact as follows:
1 Section 1. Short title. This act shall be known and may be cited as
2 the "Responsible AI safety and education act" or "RAISE act".
3 § 2. The general business law is amended by adding a new article 44-B
4 to read as follows:
5 ARTICLE 44-B
6 RESPONSIBLE AI SAFETY AND EDUCATION (RAISE) ACT
7 Section 1420. Definitions.
8 1421. Transparency requirements regarding frontier model train-
9 ing and use.
10 1422. Violations.
11 1423. Duties and obligations.
12 1424. Scope.
13 1425. Severability.
14 § 1420. Definitions. As used in this article, the following terms
15 shall have the following meanings:
EXPLANATION--Matter in italics (underscored) is new; matter in brackets
[] is old law to be omitted.
1 1. "Appropriate redactions" means redactions to a safety and security
2 protocol that a developer may make when necessary to:
3 (a) protect public safety to the extent the developer can reasonably
4 predict such risks;
5 (b) protect trade secrets;
6 (c) prevent the release of confidential information as required by
7 state or federal law;
8 (d) protect employee or customer privacy; or
9 (e) prevent the release of information otherwise controlled by state
10 or federal law.
11 2. "Artificial intelligence" means a machine-based system that can,
12 for a given set of human-defined objectives, make predictions, recommen-
13 dations, or decisions influencing real or virtual environments, and that
14 uses machine- and human-based inputs to perceive real and virtual envi-
15 ronments, abstract such perceptions into models through analysis in an
16 automated manner, and use model inference to formulate options for
17 information or action.
18 3. "Artificial intelligence model" means an information system or
19 component of an information system that implements artificial intelli-
20 gence technology and uses computational, statistical, or machine-learn-
21 ing techniques to produce outputs from a given set of inputs.
22 4. "Compute cost" means the cost incurred to pay for compute used in
23 the final training run of a model when calculated using the average
24 published market prices of cloud compute in the United States at the
25 start of training such model as reasonably assessed by the person doing
26 the training.
27 5. "Deploy" means to use a frontier model or to make a frontier model
28 foreseeably available to one or more third parties for use, modifica-
29 tion, copying, or a combination thereof with other software, except for
30 training or developing the frontier model, evaluating the frontier model
31 or other frontier models, or complying with federal or state laws.
32 6. "Frontier model" means either of the following:
33 (a) an artificial intelligence model trained using greater than 10^26
34 computational operations (e.g., integer or floating-point operations),
35 the compute cost of which exceeds one hundred million dollars; or
36 (b) an artificial intelligence model produced by applying knowledge
37 distillation to a frontier model as defined in paragraph (a) of this
38 subdivision, provided that the compute cost for such model produced by
39 applying knowledge distillation exceeds five million dollars.
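Illustrative note (not part of the bill text): a minimal Python sketch of how a developer might check a model against the numeric thresholds in this definition. The figures for training operations and cloud compute prices below are hypothetical assumptions, not values taken from the bill.

    # Illustrative sketch only; all figures are hypothetical assumptions.
    FRONTIER_OPS_THRESHOLD = 1e26          # computational operations, paragraph (a)
    FRONTIER_COST_THRESHOLD = 100_000_000  # dollars, paragraph (a)
    DISTILLED_COST_THRESHOLD = 5_000_000   # dollars, paragraph (b)

    def is_frontier_model(total_training_ops: float,
                          compute_cost_usd: float,
                          distilled_from_frontier: bool = False) -> bool:
        """Rough check of the definition's numeric criteria.

        total_training_ops: integer or floating-point operations in the final
            training run (hypothetical developer estimate).
        compute_cost_usd: compute cost of that run at average published U.S.
            cloud prices at the start of training, as defined in subdivision 4.
        distilled_from_frontier: whether the model was produced by knowledge
            distillation from a model meeting paragraph (a).
        """
        meets_paragraph_a = (total_training_ops > FRONTIER_OPS_THRESHOLD
                             and compute_cost_usd > FRONTIER_COST_THRESHOLD)
        meets_paragraph_b = (distilled_from_frontier
                             and compute_cost_usd > DISTILLED_COST_THRESHOLD)
        return meets_paragraph_a or meets_paragraph_b

    # Hypothetical worked example: 2e26 operations at an assumed blended price
    # of $1.50 per hour for hardware sustaining 1e15 operations per second.
    ops = 2e26
    ops_per_dollar = 1e15 * 3600 / 1.50    # about 2.4e18 operations per dollar
    cost = ops / ops_per_dollar            # about $83 million in this hypothetical
    print(is_frontier_model(ops, cost))    # False: operations exceed 10^26, but
                                           # the cost stays below $100 million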
40 7. "Critical harm" means the death or serious injury of one hundred or
41 more people or at least one billion dollars of damages to rights in
42 money or property caused or materially enabled by a large developer's
43 use, storage, or release of a frontier model, through either of the
44 following:
45 (a) The creation or use of a chemical, biological, radiological, or
46 nuclear weapon; or
47 (b) An artificial intelligence model engaging in conduct that does
48 both of the following:
49 (i) Acts with no meaningful human intervention; and
50 (ii) Would, if committed by a human, constitute a crime specified in
51 the penal law that requires intent, recklessness, or gross negligence,
52 or the solicitation or aiding and abetting of such a crime.
53 A harm inflicted by an intervening human actor shall not be deemed to
54 result from a developer's activities unless such activities were a
55 substantial factor in bringing about the harm, the intervening human
56 actor's conduct was reasonably foreseeable as a probable consequence of
1 the developer's activities, and could have been reasonably prevented or
2 mitigated through alternative design, security measures, or safety
3 protocols.
4 8. "Knowledge distillation" means any supervised learning technique
5 that uses a larger artificial intelligence model or the output of a
6 larger artificial intelligence model to train a smaller artificial
7 intelligence model with similar or equivalent capabilities as the larger
8 artificial intelligence model.
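Illustrative note (not part of the bill text): a minimal sketch of knowledge distillation in the sense of this definition, written in Python with PyTorch. The "teacher" and "student" networks are small hypothetical stand-ins; the smaller student is trained on the softened outputs of the larger teacher rather than on labeled data.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Hypothetical stand-ins: a larger "teacher" and a smaller "student" model.
    teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
    student = nn.Sequential(nn.Linear(16, 4))

    opt = torch.optim.SGD(student.parameters(), lr=0.1)
    T = 2.0  # softmax temperature used to soften the teacher's outputs

    x = torch.randn(256, 16)            # unlabeled inputs
    with torch.no_grad():
        teacher_logits = teacher(x)     # the larger model's outputs

    for step in range(200):
        student_logits = student(x)
        # Train the student to match the teacher's softened output distribution.
        loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        opt.zero_grad()
        loss.backward()
        opt.step()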
9 9. "Large developer" means a person that has trained at least one
10 frontier model and has spent over one hundred million dollars in compute
11 costs in aggregate in training frontier models. Accredited colleges and
12 universities shall not be considered large developers under this article
13 to the extent that such colleges and universities are engaging in
14 academic research. If a person subsequently transfers full intellectual
15 property rights of the frontier model to another person (including the
16 right to resell the model) and retains none of those rights for them-
17 self, then the receiving person shall be considered the large developer
18 and shall be subject to the responsibilities and requirements of this
19 article after such transfer.
20 10. "Model weight" means a numerical parameter in an artificial intel-
21 ligence model that is adjusted through training and that helps determine
22 how inputs are transformed into outputs.
23 11. "Person" means an individual, proprietorship, firm, partnership,
24 joint venture, syndicate, business trust, company, corporation, limited
25 liability company, association, committee, or any other nongovernmental
26 organization or group of persons acting in concert.
27 12. "Safety and security protocol" means documented technical and
28 organizational protocols that:
29 (a) Describe reasonable protections and procedures that, if success-
30 fully implemented, would appropriately reduce the risk of critical harm;
31 (b) Describe reasonable administrative, technical, and physical
32 cybersecurity protections for frontier models within the large develop-
33 er's control that, if successfully implemented, appropriately reduce the
34 risk of unauthorized access to, or misuse of, the frontier models lead-
35 ing to critical harm, including by sophisticated actors;
36 (c) Describe in detail the testing procedure to evaluate if the fron-
37 tier model poses an unreasonable risk of critical harm and whether the
38 frontier model could be misused, be modified, be executed with increased
39 computational resources, evade the control of its large developer or
40 user, be combined with other software or be used to create another fron-
41 tier model in a manner that would increase the risk of critical harm;
42 (d) Enable the large developer or third party to comply with the
43 requirements of this article; and
44 (e) Designate senior personnel to be responsible for ensuring compli-
45 ance.
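Illustrative note (not part of the bill text): one hypothetical way a large developer might organize a safety and security protocol document so that each element in paragraphs (a) through (e) above has an explicit, reviewable section. The field names and entries below are assumptions for illustration, not statutory requirements.

    # Hypothetical outline of a safety and security protocol document.
    safety_and_security_protocol = {
        "risk_reduction_protections": [        # paragraph (a)
            "Pre-deployment capability evaluations for CBRN-related misuse",
            "Refusal training and output filtering for high-risk requests",
        ],
        "cybersecurity_protections": [         # paragraph (b)
            "Model weights stored in access-controlled, audited infrastructure",
            "Two-party authorization required for any weight export",
        ],
        "testing_procedure": {                 # paragraph (c)
            "misuse_evaluations": "Versioned, archived red-team protocol",
            "modification_and_scaling_tests": "Fine-tuning and added-compute trials",
            "loss_of_control_tests": "Autonomy and oversight-evasion evaluations",
        },
        "compliance_measures": [               # paragraph (d)
            "Publication and transmission of the redacted protocol",
            "Seventy-two-hour safety incident disclosure process",
        ],
        "responsible_senior_personnel": [      # paragraph (e)
            "Head of AI Safety",
            "Chief Information Security Officer",
        ],
    }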
46 13. "Safety incident" means a known incidence of critical harm or an
47 incident of the following kinds that occurs in such a way that it
48 provides demonstrable evidence of an increased risk of critical harm:
49 (a) A frontier model autonomously engaging in behavior other than at
50 the request of a user;
51 (b) Theft, misappropriation, malicious use, inadvertent release, unau-
52 thorized access, or escape of the model weights of a frontier model;
53 (c) The critical failure of any technical or administrative controls,
54 including controls limiting the ability to modify a frontier model; or
55 (d) Unauthorized use of a frontier model.
1 14. "Trade secret" means any form and type of financial, business,
2 scientific, technical, economic, or engineering information, including a
3 pattern, plan, compilation, program device, formula, design, prototype,
4 method, technique, process, procedure, program, or code, whether tangi-
5 ble or intangible, and whether or how stored, compiled, or memorialized
6 physically, electronically, graphically, photographically or in writing,
7 that:
8 (a) Derives independent economic value, actual or potential, from not
9 being generally known to, and not being readily ascertainable by proper
10 means by, other persons who can obtain economic value from its disclo-
11 sure or use; and
12 (b) Is the subject of efforts that are reasonable under the circum-
13 stances to maintain its secrecy.
14 § 1421. Transparency requirements regarding frontier model training
15 and use. 1. Before deploying a frontier model, the large developer of
16 such frontier model shall do all of the following:
17 (a) Implement a written safety and security protocol;
18 (b) Retain an unredacted copy of the safety and security protocol,
19 including records and dates of any updates or revisions. Such unredacted
20 copy of the safety and security protocol, including records and dates of
21 any updates or revisions, shall be retained for as long as a frontier
22 model is deployed plus five years;
23 (c) (i) Conspicuously publish a copy of the safety and security proto-
24 col with appropriate redactions and transmit a copy of such redacted
25 safety and security protocol to the attorney general and division of
26 homeland security and emergency services;
27 (ii) Grant the attorney general and division of homeland security and
28 emergency services access to the safety and security protocol, with
29 redactions only to the extent required by federal law, upon
30 request;
31 (d) Record, as and when reasonably possible, and retain for as long as
32 the frontier model is deployed plus five years, information on the
33 specific tests and test results used in any assessment of the frontier
34 model required by this section or the developer's safety and security
35 protocol that provides sufficient detail for third parties to replicate
36 the testing procedure; and
37 (e) Implement appropriate safeguards to prevent unreasonable risk of
38 critical harm.
39 2. A large developer shall not deploy a frontier model if doing so
40 would create an unreasonable risk of critical harm.
41 3. A large developer shall conduct an annual review of any safety and
42 security protocol required by this section to account for any changes
43 to the capabilities of their frontier models and industry best practices
44 and, if necessary, make modifications to such safety and security proto-
45 col. If any material modifications are made, the large developer shall
46 publish the safety and security protocol in the same manner as required
47 pursuant to paragraph (c) of subdivision one of this section.
48 4. A large developer shall disclose each safety incident affecting the
49 frontier model to the attorney general and division of homeland security
50 and emergency services within seventy-two hours of the large developer
51 learning of the safety incident or within seventy-two hours of the large
52 developer learning facts sufficient to establish a reasonable belief
53 that a safety incident has occurred. Such disclosure shall include: (a)
54 the date of the safety incident; (b) the reasons the incident qualifies
55 as a safety incident as defined in subdivision thirteen of section four-
1 teen hundred twenty of this article; and (c) a short and plain statement
2 describing the safety incident.
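Illustrative note (not part of the bill text): a minimal Python sketch of a disclosure record carrying the three elements required by this subdivision, together with a check of the seventy-two-hour window. The class name, fields, and example values are hypothetical assumptions.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    DISCLOSURE_WINDOW = timedelta(hours=72)  # deadline set by this subdivision

    @dataclass
    class SafetyIncidentDisclosure:
        incident_date: datetime          # element (a): date of the safety incident
        qualifying_reasons: str          # element (b): why it qualifies under
                                         # subdivision thirteen of section 1420
        plain_statement: str             # element (c): short and plain description
        developer_learned_at: datetime   # when the developer learned of it

        def disclosure_deadline(self) -> datetime:
            return self.developer_learned_at + DISCLOSURE_WINDOW

        def is_timely(self, disclosed_at: datetime) -> bool:
            return disclosed_at <= self.disclosure_deadline()

    # Hypothetical example:
    d = SafetyIncidentDisclosure(
        incident_date=datetime(2027, 3, 1),
        qualifying_reasons="Unauthorized access to model weights (subdivision 13(b))",
        plain_statement="Credentials for the weight store were exfiltrated and revoked.",
        developer_learned_at=datetime(2027, 3, 2, 9, 0),
    )
    print(d.disclosure_deadline())  # 2027-03-05 09:00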
3 5. A large developer shall not knowingly make false or materially
4 misleading statements or omissions in or regarding documents produced
5 pursuant to this section.
6 § 1422. Violations. 1. The attorney general may bring a civil action
7 for a violation of this article and to recover all of the following,
8 determined based on severity of the violation:
9 (a) For a violation of section fourteen hundred twenty-one of this
10 article, a civil penalty in an amount not exceeding ten million dollars
11 for a first violation and in an amount not exceeding thirty million
12 dollars for any subsequent violation.
13 (b) For a violation of section fourteen hundred twenty-one of this
14 article, injunctive or declaratory relief.
15 2. Nothing in this article shall be construed to establish a private
16 right of action associated with violations of this article.
17 3. Nothing in this subdivision shall be construed to prevent a large
18 developer from asserting that another person, entity, or factor may be
19 responsible for any alleged harm, injury, or damage resulting from a
20 critical harm or a violation of this article.
21 4. This section does not limit the application of other laws.
22 § 1423. Duties and obligations. The duties and obligations imposed by
23 this article are cumulative with any other duties or obligations imposed
24 under other law and shall not be construed to relieve any party from any
25 duties or obligations imposed under other law and do not limit any
26 rights or remedies under existing law.
27 § 1424. Scope. This article shall only apply to frontier models that
28 are developed, deployed, or operating in whole or in part in New York
29 state.
30 § 1425. Severability. If any clause, sentence, paragraph, subdivision,
31 section or part of this article shall be adjudged by any court of compe-
32 tent jurisdiction to be invalid, such judgment shall not affect, impair,
33 or invalidate the remainder thereof, but shall be confined in its opera-
34 tion to the clause, sentence, paragraph, subdivision, section, or part
35 thereof directly involved in the controversy in which such judgment
36 shall have been made.
37 § 3. This act shall take effect on the ninetieth day after it shall
38 have become a law.