Colorado Artificial Intelligence Act (“CAIA”)

C.R.S. 6-1-1701, et seq.

 

Colorado Revised Statutes Annotated

Title 6. Consumer and Commercial Affairs (§§ 6-1-101 — 6-28-102)

Fair Trade and Restraint of Trade (Arts. 1 — 6.5)

Article 1. Colorado Consumer Protection Act (Pts. 1 — 17)

Part 17. Artificial Intelligence (§§ 6-1-1701 — 6-1-1707)

6-1-1701. Definitions.

6-1-1702. Developer duty to avoid algorithmic discrimination - required documentation.

6-1-1703. Deployer duty to avoid algorithmic discrimination - risk management policy and program.

6-1-1704. Disclosure of an artificial intelligence system to consumer.

6-1-1705. Compliance with other legal obligations - definitions.

6-1-1706. Enforcement by attorney general.

6-1-1707. Rules.

 

 

6-1-1701. Definitions.

As used in this part 17, unless the context otherwise requires:

(1)

(a) “Algorithmic discrimination” means any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.

(b) “Algorithmic discrimination” does not include:

(I) The offer, license, or use of a high-risk artificial intelligence system by a developer or deployer for the sole purpose of:

(A) The developer’s or deployer’s self-testing to identify, mitigate, or prevent discrimination or otherwise ensure compliance with state and federal law; or

(B) Expanding an applicant, customer, or participant pool to increase diversity or redress historical discrimination; or

(II) An act or omission by or on behalf of a private club or other establishment that is not in fact open to the public, as set forth in title II of the federal “Civil Rights Act of 1964”, 42 U.S.C. sec. 2000a (e), as amended.

(2) “Artificial intelligence system” means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.

(3) “Consequential decision” means a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of:

(a) Education enrollment or an education opportunity;

(b) Employment or an employment opportunity;

(c) A financial or lending service;

(d) An essential government service;

(e) Health-care services;

(f) Housing;

(g) Insurance; or

(h) A legal service.

(4) “Consumer” means an individual who is a Colorado resident.

(5) “Deploy” means to use a high-risk artificial intelligence system.

(6) “Deployer” means a person doing business in this state that deploys a high-risk artificial intelligence system.

(7) “Developer” means a person doing business in this state that develops or intentionally and substantially modifies an artificial intelligence system.

(8) “Health-care services” has the same meaning as provided in 42 U.S.C. sec. 234 (d)(2).

(9)

(a) “High-risk artificial intelligence system” means any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.

(b) “High-risk artificial intelligence system” does not include:

(I) An artificial intelligence system if the artificial intelligence system is intended to:

(A) Perform a narrow procedural task; or

(B) Detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review; or

(II) The following technologies, unless the technologies, when deployed, make, or are a substantial factor in making, a consequential decision:

(A) Anti-fraud technology that does not use facial recognition technology;

(B) Anti-malware;

(C) Anti-virus;

(D) Artificial intelligence-enabled video games;

(E) Calculators;

(F) Cybersecurity;

(G) Databases;

(H) Data storage;

(I) Firewall;

(J) Internet domain registration;

(K) Internet website loading;

(L) Networking;

(M) Spam- and robocall-filtering;

(N) Spell-checking;

(O) Spreadsheets;

(P) Web caching;

(Q) Web hosting or any similar technology; or

(R) Technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful.

(10)

(a) “Intentional and substantial modification” or “intentionally and substantially modifies” means a deliberate change made to an artificial intelligence system that results in any new reasonably foreseeable risk of algorithmic discrimination.

(b) “Intentional and substantial modification” or “intentionally and substantially modifies” does not include a change made to a high-risk artificial intelligence system, or the performance of a high-risk artificial intelligence system, if:

(I) The high-risk artificial intelligence system continues to learn after the high-risk artificial intelligence system is:

(A) Offered, sold, leased, licensed, given, or otherwise made available to a deployer; or

(B) Deployed;

(II) The change is made to the high-risk artificial intelligence system as a result of any learning described in subsection (10)(b)(I) of this section;

(III) The change was predetermined by the deployer, or a third party contracted by the deployer, when the deployer or third party completed an initial impact assessment of such high-risk artificial intelligence system pursuant to section 6-1-1703 (3); and

(IV) The change is included in technical documentation for the high-risk artificial intelligence system.

(11)

(a) “Substantial factor” means a factor that:

(I) Assists in making a consequential decision;

(II) Is capable of altering the outcome of a consequential decision; and

(III) Is generated by an artificial intelligence system.

(b) “Substantial factor” includes any use of an artificial intelligence system to generate any content, decision, prediction, or recommendation concerning a consumer that is used as a basis to make a consequential decision concerning the consumer.

(12) “Trade secret” has the meaning set forth in section 7-74-102 (4).
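
Teams that keep an internal inventory of AI systems sometimes map the definitions above into a simple record. The following Python sketch is an editorial illustration, not part of the statute; the names ConsequentialDecisionArea, AISystemRecord, and is_high_risk are invented for this example, it models only part of the subsection (9)(b) exclusions, and legal review, not code, determines whether a given system is high-risk.

from __future__ import annotations

from dataclasses import dataclass
from enum import Enum, auto


class ConsequentialDecisionArea(Enum):
    """Decision areas enumerated in section 6-1-1701 (3)(a) to (3)(h)."""
    EDUCATION = auto()
    EMPLOYMENT = auto()
    FINANCIAL_OR_LENDING_SERVICE = auto()
    ESSENTIAL_GOVERNMENT_SERVICE = auto()
    HEALTH_CARE_SERVICES = auto()
    HOUSING = auto()
    INSURANCE = auto()
    LEGAL_SERVICE = auto()


@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for an offered or deployed AI system."""
    name: str
    substantial_factor_in_decision: bool                     # section 6-1-1701 (11)
    decision_area: ConsequentialDecisionArea | None = None   # section 6-1-1701 (3)
    narrow_procedural_task_only: bool = False                # exclusion, (9)(b)(I)(A)


def is_high_risk(system: AISystemRecord) -> bool:
    """Rough screen for "high-risk" status under section 6-1-1701 (9).

    Treats a system as high-risk when it makes, or is a substantial factor in
    making, a consequential decision and the narrow-procedural-task exclusion
    does not apply.  The other (9)(b) exclusions are not modeled here.
    """
    if system.narrow_procedural_task_only:
        return False
    return system.substantial_factor_in_decision and system.decision_area is not None


if __name__ == "__main__":
    screener = AISystemRecord(
        name="resume-screening-model",
        substantial_factor_in_decision=True,
        decision_area=ConsequentialDecisionArea.EMPLOYMENT,
    )
    print(is_high_risk(screener))  # True: employment is a consequential decision area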

History

SOURCE: 

L. 2024: (SB205), ch. 198, § 1, effective May 17, 2024.


 

 

6-1-1702. Developer duty to avoid algorithmic discrimination - required documentation.

(1) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought on or after February 1, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 6-1-1707.

(2) On and after February 1, 2026, and except as provided in subsection (6) of this section, a developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system:

(a) A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system;

(b) Documentation disclosing:

(I) High-level summaries of the type of data used to train the high-risk artificial intelligence system;

(II) Known or reasonably foreseeable limitations of the high-risk artificial intelligence system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system;

(III) The purpose of the high-risk artificial intelligence system;

(IV) The intended benefits and uses of the high-risk artificial intelligence system; and

(V) All other information necessary to allow the deployer to comply with the requirements of section 6-1-1703;

(c) Documentation describing:

(I) How the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer;

(II) The data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation;

(III) The intended outputs of the high-risk artificial intelligence system;

(IV) The measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk artificial intelligence system; and

(V) How the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and

(d) Any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitor the performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
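
As a rough aid for tracking the materials listed in subsection (2) above, a developer's compliance tooling might carry a record like the one sketched below. This is an editorial illustration, not statutory language; the DeveloperDocumentationPackage class and its field names are assumptions chosen to mirror the paragraphs of subsection (2).

from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class DeveloperDocumentationPackage:
    """Hypothetical bundle of the materials a developer makes available to a
    deployer or other developer under section 6-1-1702 (2)."""
    general_statement_of_uses: str                      # (2)(a)
    training_data_summary: str                          # (2)(b)(I)
    known_limitations: str = ""                         # (2)(b)(II)
    purpose: str = ""                                   # (2)(b)(III)
    intended_benefits_and_uses: str = ""                # (2)(b)(IV)
    deployer_compliance_information: str = ""           # (2)(b)(V)
    evaluation_and_mitigation_summary: str = ""         # (2)(c)(I) and (2)(c)(IV)
    data_governance_measures: str = ""                  # (2)(c)(II)
    intended_outputs: str = ""                          # (2)(c)(III)
    usage_and_monitoring_guidance: str = ""             # (2)(c)(V)
    additional_materials: list[str] = field(default_factory=list)  # (2)(d)

    def missing_items(self) -> list[str]:
        """Names of narrative fields that are still empty, as a completeness check."""
        return [
            name for name, value in vars(self).items()
            if isinstance(value, str) and not value.strip()
        ]


if __name__ == "__main__":
    package = DeveloperDocumentationPackage(
        general_statement_of_uses="Intended for employment screening; not for credit decisions.",
        training_data_summary="Aggregated, de-identified resume and outcome data.",
    )
    print(package.missing_items())  # fields still needing content before release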

(3)

(a) Except as provided in subsection (6) of this section, a developer that offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system on or after February 1, 2026, shall make available to the deployer or other developer, to the extent feasible, the documentation and information, through artifacts such as model cards, dataset cards, or other impact assessments, necessary for a deployer, or for a third party contracted by a deployer, to complete an impact assessment pursuant to section 6-1-1703 (3).

(b) A developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer.

(4)

(a) On and after February 1, 2026, a developer shall make available, in a manner that is clear and readily available on the developer’s website or in a public use case inventory, a statement summarizing:

(I) The types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and

(II) How the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in accordance with subsection (4)(a)(I) of this section.

(b) A developer shall update the statement described in subsection (4)(a) of this section:

(I) As necessary to ensure that the statement remains accurate; and

(II) No later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in subsection (4)(a)(I) of this section.

(5) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall disclose to the attorney general, in a form and manner prescribed by the attorney general, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which:

(a) The developer discovers through the developer’s ongoing testing and analysis that the developer’s high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or

(b) The developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination.

(6) Nothing in subsections (2) to (5) of this section requires a developer to disclose a trade secret, information protected from disclosure by state or federal law, or information that would create a security risk to the developer.

(7) On and after February 1, 2026, the attorney general may require that a developer disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the statement or documentation described in subsection (2) of this section. The attorney general may evaluate such statement or documentation to ensure compliance with this part 17, and the statement or documentation is not subject to disclosure under the “Colorado Open Records Act”, part 2 of article 72 of title 24. In a disclosure pursuant to this subsection (7), a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.

History

SOURCE: 

L. 2024: (SB205), ch. 198, § 1, effective May 17, 2024.


 

 

6-1-1703. Deployer duty to avoid algorithmic discrimination - risk management policy and program.

(1) On and after February 1, 2026, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after February 1, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 6-1-1707.

(2)

(a) On and after February 1, 2026, and except as provided in subsection (6) of this section, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer’s deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection (2) must be reasonable considering:

(I)

(A) The guidance and standards set forth in the latest version of the “Artificial Intelligence Risk Management Framework” published by the national institute of standards and technology in the United States department of commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this part 17; or

(B) Any risk management framework for artificial intelligence systems that the attorney general, in the attorney general’s discretion, may designate;

(II) The size and complexity of the deployer;

(III) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and

(IV) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer.

(b) A risk management policy and program implemented pursuant to subsection (2)(a) of this section may cover multiple high-risk artificial intelligence systems deployed by the deployer.
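
One way to capture the reasonableness factors of subsection (2)(a) in configuration is sketched below. The framework names come from the statute; the RiskManagementProgram class, its fields, and the ninety-day review cadence are editorial assumptions rather than statutory requirements.

from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum


class RecognizedFramework(Enum):
    """Frameworks named in section 6-1-1703 (2)(a)(I)."""
    NIST_AI_RMF = "NIST Artificial Intelligence Risk Management Framework"
    ISO_IEC_42001 = "ISO/IEC 42001"
    OTHER_RECOGNIZED = "Other nationally or internationally recognized framework"
    AG_DESIGNATED = "Framework designated by the attorney general"


@dataclass
class RiskManagementProgram:
    """Hypothetical summary of a deployer's risk management policy and program."""
    framework: RecognizedFramework              # (2)(a)(I)
    deployer_size_and_complexity: str           # (2)(a)(II)
    systems_in_scope: list[str] = field(default_factory=list)  # may cover several systems, (2)(b)
    intended_uses_summary: str = ""             # (2)(a)(III)
    data_sensitivity_and_volume: str = ""       # (2)(a)(IV)
    review_cadence_days: int = 90               # iterative review cycle; this cadence is an assumption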

(3)

(a) Except as provided in subsections (3)(d), (3)(e), and (6) of this section:

(I) A deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system on or after February 1, 2026, shall complete an impact assessment for the high-risk artificial intelligence system; and

(II) On and after February 1, 2026, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available.

(b) An impact assessment completed pursuant to this subsection (3) must include, at a minimum, and to the extent reasonably known by or available to the deployer:

(I) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system;

(II) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks;

(III) A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces;

(IV) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system;

(V) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system;

(VI) A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use; and

(VII) A description of the post-deployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system.

(c) In addition to the information required under subsection (3)(b) of this section, an impact assessment completed pursuant to this subsection (3) following an intentional and substantial modification to a high-risk artificial intelligence system on or after February 1, 2026, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer’s intended uses of the high-risk artificial intelligence system.

(d) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer.

(e) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment satisfies the requirements established in this subsection (3) if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection (3).

(f) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection (3), all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system.

(g) On or before February 1, 2026, and at least annually thereafter, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
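
A deployer's internal tooling might represent the subsection (3)(b) contents and the review cadence of subsections (3)(a)(II) and (3)(g) along the lines of the sketch below. This is an illustrative assumption, not a required format; the ImpactAssessment class and the next_assessment_due helper are invented names, and the timing logic is simplified.

from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class ImpactAssessment:
    """Hypothetical record of the contents required by section 6-1-1703 (3)(b)."""
    system_name: str
    completed_on: date
    purpose_use_cases_and_benefits: str        # (3)(b)(I)
    discrimination_risk_analysis: str          # (3)(b)(II)
    input_and_output_data_categories: str      # (3)(b)(III)
    customization_data_overview: str = ""      # (3)(b)(IV), if the deployer customized the system
    performance_metrics: list[str] = field(default_factory=list)  # (3)(b)(V)
    transparency_measures: str = ""            # (3)(b)(VI)
    post_deployment_monitoring: str = ""       # (3)(b)(VII)


RETENTION_YEARS = 3  # (3)(f): keep assessments and related records at least three years


def next_assessment_due(last_completed: date, modification_on: date | None = None) -> date:
    """Earlier of the annual re-assessment and the ninety-day window that follows an
    intentional and substantial modification (see (3)(a)(II)).  The statute, not this
    helper, controls the actual deadlines."""
    annual = last_completed + timedelta(days=365)
    if modification_on is not None:
        return min(annual, modification_on + timedelta(days=90))
    return annual


if __name__ == "__main__":
    print(next_assessment_due(date(2026, 3, 1)))                    # annual cycle only
    print(next_assessment_due(date(2026, 3, 1), date(2026, 6, 1)))  # modification window governs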

(4)

(a) On and after February 1, 2026, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall:

(I) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made;

(II) Provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement required by subsection (5)(a) of this section; and

(III) Provide to the consumer information, if applicable, regarding the consumer’s right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer under section 6-1-1306 (1)(a)(I)(C).

(b) On and after February 1, 2026, a deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer:

(I) A statement disclosing the principal reason or reasons for the consequential decision, including:

(A) The degree to which, and manner in which, the high-risk artificial intelligence system contributed to the consequential decision;

(B) The type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and

(C) The source or sources of the data described in subsection (4)(b)(I)(B) of this section;

(II) An opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and

(III) An opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer.

(c)

(I) Except as provided in subsection (4)(c)(II) of this section, a deployer shall provide the notice, statement, contact information, and description required by subsections (4)(a) and (4)(b) of this section:

(A) Directly to the consumer;

(B) In plain language;

(C) In all languages in which the deployer, in the ordinary course of the deployer’s business, provides contracts, disclaimers, sale announcements, and other information to consumers; and

(D) In a format that is accessible to consumers with disabilities.

(II) If the deployer is unable to provide the notice, statement, contact information, and description required by subsections (4)(a) and (4)(b) of this section directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
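
The consumer-facing notices described in subsections (4)(a) and (4)(b) could be assembled from structured records such as those sketched below. The classes and the render_adverse_notice helper are hypothetical; real notices must also satisfy the plain-language, translation, accessibility, and delivery requirements of subsection (4)(c).

from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class PreDecisionNotice:
    """Hypothetical fields for the notice and statement in section 6-1-1703 (4)(a)."""
    system_purpose: str
    decision_nature: str
    deployer_contact: str
    plain_language_description: str
    website_statement_instructions: str        # how to reach the (5)(a) statement
    opt_out_information: str = ""              # profiling opt-out under 6-1-1306 (1)(a)(I)(C), if applicable


@dataclass
class AdverseDecisionStatement:
    """Hypothetical fields for the disclosures in section 6-1-1703 (4)(b)."""
    principal_reasons: list[str] = field(default_factory=list)
    system_contribution: str = ""              # degree and manner of contribution
    data_types_processed: list[str] = field(default_factory=list)
    data_sources: list[str] = field(default_factory=list)
    correction_instructions: str = ""          # opportunity to correct personal data
    appeal_instructions: str = ""              # appeal with human review where feasible


def render_adverse_notice(statement: AdverseDecisionStatement) -> str:
    """Plain-text rendering only; the format and delivery duties in (4)(c) are
    outside the scope of this sketch."""
    lines = ["Principal reason(s) for the decision:"]
    lines += [f"  - {reason}" for reason in statement.principal_reasons]
    lines.append(f"How the AI system contributed: {statement.system_contribution}")
    lines.append(f"Types of data processed: {', '.join(statement.data_types_processed)}")
    lines.append(f"Sources of that data: {', '.join(statement.data_sources)}")
    lines.append(f"To correct your personal data: {statement.correction_instructions}")
    lines.append(f"To appeal this decision: {statement.appeal_instructions}")
    return "\n".join(lines)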

(5)

(a) On and after February 1, 2026, and except as provided in subsection (6) of this section, a deployer shall make available, in a manner that is clear and readily available on the deployer’s website, a statement summarizing:

(I) The types of high-risk artificial intelligence systems that are currently deployed by the deployer;

(II) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each high-risk artificial intelligence system described pursuant to subsection (5)(a)(I) of this section; and

(III) In detail, the nature, source, and extent of the information collected and used by the deployer.

(b) A deployer shall periodically update the statement described in subsection (5)(a) of this section.

(6) Subsections (2), (3), and (5) of this section do not apply to a deployer if, at the time the deployer deploys a high-risk artificial intelligence system and at all times while the high-risk artificial intelligence system is deployed:

(a) The deployer:

(I) Employs fewer than fifty full-time equivalent employees; and

(II) Does not use the deployer’s own data to train the high-risk artificial intelligence system;

(b) The high-risk artificial intelligence system:

(I) Is used for the intended uses that are disclosed to the deployer as required by section 6-1-1702 (2)(a); and

(II) Continues learning based on data derived from sources other than the deployer’s own data; and

(c) The deployer makes available to consumers any impact assessment that:

(I) The developer of the high-risk artificial intelligence system has completed and provided to the deployer; and

(II) Includes information that is substantially similar to the information in the impact assessment required under subsection (3)(b) of this section.
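
A deployer assessing whether the subsection (6) exemption applies might run a check along the lines of the sketch below. The DeployerProfile fields and the exempt_from_subsections_2_3_and_5 function are editorial assumptions; note that the exemption reaches only subsections (2), (3), and (5), so the subsection (4) notices and the subsection (7) report remain due.

from __future__ import annotations

from dataclasses import dataclass


@dataclass
class DeployerProfile:
    """Hypothetical facts bearing on the exemption in section 6-1-1703 (6)."""
    full_time_equivalent_employees: int
    trains_system_with_own_data: bool
    uses_only_disclosed_intended_uses: bool
    system_learns_only_from_other_sources: bool
    developer_impact_assessment_available_to_consumers: bool


def exempt_from_subsections_2_3_and_5(profile: DeployerProfile) -> bool:
    """True when every condition of (6)(a) to (6)(c) appears to be met.

    The exemption covers only subsections (2), (3), and (5); the consumer
    notices in (4) and the attorney general notice in (7) still apply."""
    return (
        profile.full_time_equivalent_employees < 50
        and not profile.trains_system_with_own_data
        and profile.uses_only_disclosed_intended_uses
        and profile.system_learns_only_from_other_sources
        and profile.developer_impact_assessment_available_to_consumers
    )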

(7) If a deployer deploys a high-risk artificial intelligence system on or after February 1, 2026, and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.

(8) Nothing in subsections (2) to (5) and (7) of this section requires a deployer to disclose a trade secret or information protected from disclosure by state or federal law. To the extent that a deployer withholds information pursuant to this subsection (8) or section 6-1-1705 (5), the deployer shall notify the consumer and provide a basis for the withholding.

(9) On and after February 1, 2026, the attorney general may require that a deployer, or a third party contracted by the deployer, disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subsection (2) of this section, the impact assessment completed pursuant to subsection (3) of this section, or the records maintained pursuant to subsection (3)(f) of this section. The attorney general may evaluate the risk management policy, impact assessment, or records to ensure compliance with this part 17, and the risk management policy, impact assessment, and records are not subject to disclosure under the “Colorado Open Records Act”, part 2 of article 72 of title 24. In a disclosure pursuant to this subsection (9), a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.

History

SOURCE: 

L. 2024: (SB205), ch. 198, § 1, effective May 17, 2024.


 

 

6-1-1704. Disclosure of an artificial intelligence system to consumer.

(1) On and after February 1, 2026, and except as provided in subsection (2) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system.

(2) Disclosure is not required under subsection (1) of this section under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
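
For a consumer-facing conversational interface, this section's disclosure duty and its subsection (2) exception might be wired in as sketched below. The function names and the sample disclosure wording are assumptions, and whether the exception applies is a factual judgment about what would be obvious to a reasonable person, not something code can decide.

from __future__ import annotations


def interaction_disclosure(obvious_to_reasonable_person: bool) -> str | None:
    """Return disclosure text reflecting section 6-1-1704 (1), or None when the
    subsection (2) exception (obvious to a reasonable person) applies."""
    if obvious_to_reasonable_person:
        return None
    return "You are interacting with an artificial intelligence system."


def start_consumer_session(obvious_to_reasonable_person: bool = False) -> list[str]:
    """Begin a session transcript, prepending the disclosure when it is required."""
    transcript: list[str] = []
    notice = interaction_disclosure(obvious_to_reasonable_person)
    if notice is not None:
        transcript.append(notice)
    return transcript


if __name__ == "__main__":
    print(start_consumer_session())      # disclosure included
    print(start_consumer_session(True))  # exception applies; no disclosure added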

History

SOURCE: 

L. 2024: (SB205), ch. 198, § 1, effective May 17, 2024.


 

 

6-1-1705. Compliance with other legal obligations - definitions.

(1) Nothing in this part 17 restricts a developer’s, a deployer’s, or other person’s ability to:

(a) Comply with federal, state, or municipal laws, ordinances, or regulations;

(b) Comply with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons by a federal, a state, a municipal, or other governmental authority;

(c) Cooperate with a law enforcement agency concerning conduct or activity that the developer, deployer, or other person reasonably and in good faith believes may violate federal, state, or municipal laws, ordinances, or regulations;

(d) Investigate, establish, exercise, prepare for, or defend legal claims;

(e) Take immediate steps to protect an interest that is essential for the life or physical safety of a consumer or another individual;

(f) By any means other than the use of facial recognition technology, prevent, detect, protect against, or respond to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or illegal activity; investigate, report, or prosecute the persons responsible for any such action; or preserve the integrity or security of systems;

(g) Engage in public or peer-reviewed scientific or statistical research in the public interest that adheres to all other applicable ethics and privacy laws and is conducted in accordance with 45 CFR 46, as amended, or relevant requirements established by the federal food and drug administration;

(h) Conduct research, testing, and development activities regarding an artificial intelligence system or model, other than testing conducted under real-world conditions, before the artificial intelligence system or model is placed on the market, deployed, or put into service, as applicable; or

(i) Assist another developer, deployer, or other person with any of the obligations imposed under this part 17.

(2) The obligations imposed on developers, deployers, or other persons under this part 17 do not restrict a developer’s, a deployer’s, or other person’s ability to:

(a) Effectuate a product recall; or

(b) Identify and repair technical errors that impair existing or intended functionality.

(3) The obligations imposed on developers, deployers, or other persons under this part 17 do not apply where compliance with this part 17 by the developer, deployer, or other person would violate an evidentiary privilege under the laws of this state.

(4) Nothing in this part 17 imposes any obligation on a developer, a deployer, or other person that adversely affects the rights or freedoms of a person, including the rights of a person to freedom of speech or freedom of the press that are guaranteed in:

(a) The first amendment to the United States constitution; or

(b) Section 10 of article II of the state constitution.

(5) Nothing in this part 17 applies to a developer, a deployer, or other person:

(a) Insofar as the developer, deployer, or other person develops, deploys, puts into service, or intentionally and substantially modifies, as applicable, a high-risk artificial intelligence system:

(I) That has been approved, authorized, certified, cleared, developed, or granted by a federal agency, such as the federal food and drug administration or the federal aviation administration, acting within the scope of the federal agency’s authority, or by a regulated entity subject to the supervision and regulation of the federal housing finance agency; or

(II) In compliance with standards established by a federal agency, including standards established by the federal office of the national coordinator for health information technology, or by a regulated entity subject to the supervision and regulation of the federal housing finance agency, if the standards are substantially equivalent to or more stringent than the requirements of this part 17;

(b) Conducting research to support an application for approval or certification from a federal agency, including the federal aviation administration, the federal communications commission, or the federal food and drug administration or research to support an application otherwise subject to review by the federal agency;

(c) Performing work under, or in connection with, a contract with the United States department of commerce, the United States department of defense, or the national aeronautics and space administration, unless the developer, deployer, or other person is performing the work on a high-risk artificial intelligence system that is used to make, or is a substantial factor in making, a decision concerning employment or housing; or

(d) That is a covered entity within the meaning of the federal “Health Insurance Portability and Accountability Act of 1996”, 42 U.S.C. secs. 1320d to 1320d-9, and the regulations promulgated under the federal act, as both may be amended from time to time, and is providing health-care recommendations that:

(I) Are generated by an artificial intelligence system;

(II) Require a health-care provider to take action to implement the recommendations; and

(III) Are not considered to be high risk.

(6) Nothing in this part 17 applies to any artificial intelligence system that is acquired by or for the federal government or any federal agency or department, including the United States department of commerce, the United States department of defense, or the national aeronautics and space administration, unless the artificial intelligence system is a high-risk artificial intelligence system that is used to make, or is a substantial factor in making, a decision concerning employment or housing.

(7) An insurer, as defined in section 10-1-102 (13), a fraternal benefit society, as described in section 10-14-102, or a developer of an artificial intelligence system used by an insurer is in full compliance with this part 17 if the insurer, the fraternal benefit society, or the developer is subject to the requirements of section 10-3-1104.9 and any rules adopted by the commissioner of insurance pursuant to section 10-3-1104.9.

(8)

(a) A bank, out-of-state bank, credit union chartered by the state of Colorado, federal credit union, out-of-state credit union, or any affiliate or subsidiary thereof, is in full compliance with this part 17 if the bank, out-of-state bank, credit union chartered by the state of Colorado, federal credit union, out-of-state credit union, or affiliate or subsidiary is subject to examination by a state or federal prudential regulator under any published guidance or regulations that apply to the use of high-risk artificial intelligence systems and the guidance or regulations:

(I) Impose requirements that are substantially equivalent to or more stringent than the requirements imposed in this part 17; and

(II) At a minimum, require the bank, out-of-state bank, credit union chartered by the state of Colorado, federal credit union, out-of-state credit union, or affiliate or subsidiary to:

(A) Regularly audit the bank’s, out-of-state bank’s, credit union chartered by the state of Colorado’s, federal credit union’s, out-of-state credit union’s, or affiliate’s or subsidiary’s use of high-risk artificial intelligence systems for compliance with state and federal anti-discrimination laws and regulations applicable to the bank, out-of-state bank, credit union chartered by the state of Colorado, federal credit union, out-of-state credit union, or affiliate or subsidiary; and

(B) Mitigate any algorithmic discrimination caused by the use of a high-risk artificial intelligence system or any risk of algorithmic discrimination that is reasonably foreseeable as a result of the use of a high-risk artificial intelligence system.

(b) As used in this subsection (8):

(I) “Affiliate” has the meaning set forth in section 11-101-401 (3.5).

(II) “Bank” has the meaning set forth in section 11-101-401 (5).

(III) “Credit union” has the meaning set forth in section 11-30-101 (1)(a).

(IV) “Out-of-state bank” has the meaning set forth in section 11-101-401 (50).

(9) If a developer, a deployer, or other person engages in an action pursuant to an exemption set forth in this section, the developer, deployer, or other person bears the burden of demonstrating that the action qualifies for the exemption.

History

SOURCE: 

L. 2024: (SB205), ch. 198, § 1, effective May 17, 2024.


 

 

6-1-1706. Enforcement by attorney general.

(1) Notwithstanding section 6-1-103, the attorney general has exclusive authority to enforce this part 17.

(2) Except as provided in subsection (3) of this section, a violation of the requirements established in this part 17 constitutes an unfair trade practice pursuant to section 6-1-105 (1)(hhhh).

(3) In any action commenced by the attorney general to enforce this part 17, it is an affirmative defense that the developer, deployer, or other person:

(a) Discovers and cures a violation of this part 17 as a result of:

(I) Feedback that the developer, deployer, or other person encourages deployers or users to provide to the developer, deployer, or other person;

(II) Adversarial testing or red teaming, as those terms are defined or used by the national institute of standards and technology; or

(III) An internal review process; and

(b) Is otherwise in compliance with:

(I) The latest version of the “Artificial Intelligence Risk Management Framework” published by the national institute of standards and technology in the United States department of commerce and standard ISO/IEC 42001 of the International Organization for Standardization;

(II) Another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this part 17; or

(III) Any risk management framework for artificial intelligence systems that the attorney general, in the attorney general’s discretion, may designate and, if designated, shall publicly disseminate.

(4) A developer, a deployer, or other person bears the burden of demonstrating to the attorney general that the requirements established in subsection (3) of this section have been satisfied.

(5) Nothing in this part 17, including the enforcement authority granted to the attorney general under this section, preempts or otherwise affects any right, claim, remedy, presumption, or defense available at law or in equity. A rebuttable presumption or affirmative defense established under this part 17 applies only to an enforcement action brought by the attorney general pursuant to this section and does not apply to any right, claim, remedy, presumption, or defense available at law or in equity.

(6) This part 17 does not provide the basis for, and is not subject to, a private right of action for violations of this part 17 or any other law.

History

SOURCE: 

L. 2024: (SB205), ch. 198, § 1, effective May 17, 2024.


 

 

6-1-1707. Rules.

(1) The attorney general may promulgate rules as necessary for the purpose of implementing and enforcing this part 17, including:

(a) The documentation and requirements for developers pursuant to section 6-1-1702 (2);

(b) The contents of and requirements for the notices and disclosures required by sections 6-1-1702 (5) and (7); 6-1-1703 (4), (5), (7), and (9); and 6-1-1704;

(c) The content and requirements of the risk management policy and program required by section 6-1-1703 (2);

(d) The content and requirements of the impact assessments required by section 6-1-1703 (3);

(e) The requirements for the rebuttable presumptions set forth in sections 6-1-1702 and 6-1-1703; and

(f) The requirements for the affirmative defense set forth in section 6-1-1706 (3), including the process by which the attorney general will recognize any other nationally or internationally recognized risk management framework for artificial intelligence systems.

History

SOURCE: 

L. 2024: (SB205), ch. 198, § 1, effective May 17, 2024.


 

 

For more information, see here:  https://advance.lexis.com/documentpage/?pdmfid=1000516&crid=7b93fafe-f568-4871-9f14-b3a2177f9122&nodeid=AAGAABAABAARAAB&nodepath=%2FROOT%2FAAG%2FAAGAAB%2FAAGAABAAB%2FAAGAABAABAAR%2FAAGAABAABAARAAB&level=5&haschildren=&populated=false&title=6-1-1701.+Definitions.&config=014FJAAyNGJkY2Y4Zi1mNjgyLTRkN2YtYmE4OS03NTYzNzYzOTg0OGEKAFBvZENhdGFsb2d592qv2Kywlf8caKqYROP5&pddocfullpath=%2Fshared%2Fdocument%2Fstatutes-legislation%2Furn%3AcontentItem%3A6C70-KFJ3-RSXR-S4CN-00008-00&ecomp=6gf59kk&prid=e88d529d-2465-4c9f-9b95-9c1c3482e5f3

 

These materials were obtained directly from the State Legislative websites and are posted here for your review and reference only.  No Claim to Original State Government Works.  This may not be the most recent version.  The State may have more current information.  We make no guarantees or warranties about the accuracy or completeness of this information, or the information linked to.  Please check the linked sources directly.