Colorado Becomes the First State to Pass Comprehensive AI Legislation

On May 17, 2024, Colorado Governor Jared Polis signed the Colorado Artificial Intelligence Act (CAIA), making it the first comprehensive AI legislation in the United States. Similar to the EU AI Act, this law takes a risk-based approach, focusing on high-risk AI systems. It mandates that developers and deployers of these systems exercise reasonable care to prevent algorithmic discrimination. They must also disclose specific information to stakeholders. Additionally, deployers are required to conduct impact assessments, implement risk management plans, and offer consumers a way to appeal adverse decisions. The Colorado Attorney General has sole authority to enforce the CAIA and create related regulations. Developers and deployers can demonstrate they used reasonable care to avoid discrimination by fulfilling their CAIA obligations. The CAIA will come into effect on February 1, 2026.

Scope

The CAIA's main provisions require developers and deployers of high-risk AI systems to prevent discrimination, share specific information with stakeholders, and (for deployers) establish compliance and safety protocols.

The CAIA defines an “artificial intelligence system” as any machine-based system that uses inputs to generate outputs, such as content, decisions, predictions, or recommendations, that can influence physical or virtual environments. Its substantive obligations apply to “high-risk AI systems”: those that make, or are a substantial factor in making, a consequential decision. A “substantial factor” is a factor that influences or determines the outcome of a consequential decision, including any use of AI to generate content, decisions, predictions, or recommendations that a consumer relies on in making a consequential decision. A “consequential decision” is one with a material legal or similarly significant effect on the provision or denial, or the cost or terms, of services such as education, employment, financing, government services, healthcare, housing, insurance, or legal services. This definition parallels “decisions that produce legal or similarly significant effects concerning a consumer” under the Colorado Privacy Act, which additionally covers criminal justice and essential goods or services.

A “developer” is anyone doing business in Colorado who develops or substantially modifies an AI system, whether general-purpose or high-risk. A “deployer” is anyone doing business in Colorado who deploys a high-risk AI system.

Prohibitions on algorithmic discrimination

The CAIA mandates that developers and deployers of high-risk AI systems exercise reasonable care to shield consumers from known or foreseeable risks of algorithmic discrimination linked to the intended and contracted uses of these systems. “Algorithmic discrimination” refers to any unlawful differential treatment or impact that disadvantages an individual or group based on a protected characteristic. By adhering to the CAIA’s requirements and any rules set by the Colorado Attorney General, developers and deployers can establish a rebuttable presumption that they have taken reasonable care to prevent algorithmic discrimination.

Developer obligations

Developers of high-risk AI systems must:

  • Disclose to deployers (1) a statement detailing specific information about the high-risk AI system, including its training data, data governance practices, known harms, and safeguards, and (2) the information and documentation needed to conduct an impact assessment of the high-risk AI system.
  • Disclose to the public a statement that (1) summarizes the types of high-risk AI systems the developer has created or significantly modified, and (2) explains how the developer addresses known or foreseeable risks of algorithmic discrimination for each of these high-risk AI systems.
  • Disclose to the Colorado Attorney General and known deployers any known or foreseeable risks of algorithmic discrimination arising from the use of a high-risk AI system, within 90 days of discovering those risks or receiving a credible report from a deployer that the system may have caused such discrimination.

Deployer obligations

Deployers of high-risk AI systems must:

  • Establish and maintain a risk management policy and program outlining principles, procedures, and personnel responsible for recognizing and addressing algorithmic discrimination, ensuring it is regularly updated throughout the product lifecycle.
  • Conduct an impact assessment of each high-risk AI system at least annually and within 90 days of any significant modification.
  • Annually audit each deployed high-risk AI system to verify that it does not produce algorithmic discrimination.
  • Before using a high-risk AI system to make decisions about consumers, provide those consumers with information about the system, including its purpose, operations, and the nature of the consequential decisions it makes.
  • Inform consumers about their rights under the Colorado Privacy Act to opt out of personal data processing for profiling, particularly for decisions that significantly affect them legally or otherwise.
  • Inform each consumer about whom a consequential decision has been made of (1) the rationale behind the decision, the role of the high-risk AI system, and the nature and source of the data used; (2) the consumer's right to correct any inaccuracies in personal data the high-risk AI system processed for decision-making purposes; and (3) the consumer's right to request human review of the decision, if technically feasible.
  • Publicly provide a statement outlining the types of high-risk AI systems currently deployed, how the deployer handles known or foreseeable risks of algorithmic discrimination associated with these systems, and details regarding the nature, source, and scope of information collected and utilized by the deployer.
  • Report any identification of algorithmic discrimination to the Colorado Attorney General within 90 days of its discovery.

Small deployers with fewer than 50 full-time employees are exempt from the obligations regarding the risk management program, impact assessments, and the public statement. They remain subject to all other requirements, including the duty to exercise reasonable care.

Impact assessment interoperability safe harbor. If a deployer conducts an impact assessment to meet the requirements of another applicable law or regulation, that assessment can fulfill the impact assessment requirements of the CAIA.

Disclosure of artificial intelligence systems to consumers

Any individual conducting business in Colorado who offers or utilizes an AI system designed for consumer interaction must inform each consumer engaging with the AI system that they are interacting with an AI system, unless it would be apparent to a reasonable person without disclosure. This requirement extends to all AI systems, not just those deemed high-risk.

Enforcement, civil litigation, and rulemaking

The Colorado Attorney General holds exclusive authority to enforce the CAIA, which provides no private right of action. Violations of the CAIA in the course of business are, however, deemed deceptive trade practices under Colorado's unfair trade practices laws. Developers and deployers must respond within 90 days to any request for information from the Colorado Attorney General, including requests for risk management policies and impact assessments.

If the Attorney General initiates enforcement, a developer or deployer has an affirmative defense if it discovered and cured the violation through feedback, adversarial testing, or an internal review, and it otherwise complies with a recognized risk management framework such as the NIST AI Risk Management Framework or an equivalent ISO standard.

The Attorney General is empowered to issue regulations on developer documentation, notice requirements, risk management policies, impact assessments, rebuttable presumptions, and affirmative defenses under the CAIA.

Conclusion

Organizations involved in developing or deploying AI systems must assess whether their systems fall under the "high-risk" classification. Both developers and deployers of such high-risk AI systems should analyze the similarities and differences between the CAIA and the EU AI Act to tailor their compliance efforts effectively.

As the first comprehensive AI legislation in the United States, the CAIA marks a significant milestone. Our multinational AI teams, spanning various practices, are monitoring legislative and regulatory changes closely and are well-equipped to assist companies in navigating these dynamic challenges.