Challenges

While a number of stakeholders have welcomed the AI Act and guidelines, viewing them as a step toward safeguarding fundamental rights in AI deployment, industry representatives have expressed concern that complying with this additional, complex regulation will require significant resources and could stifle innovation and competitiveness.

Comparison to U.S. Law – A Counterpoint

Unlike the EU’s centralized approach, the U.S. has not enacted a unified federal AI law; instead, it relies on sector-specific rules, voluntary frameworks, and state-level legislation that companies must navigate when entering the American market. Some state laws, including those of Colorado and California, share similarities with the EU AI Act. For example, the Colorado Artificial Intelligence Act (CAIA), which takes effect on February 1, 2026, applies to any “developer” or “deployer” doing business in Colorado whose AI system makes, or is a substantial factor in making, a “consequential decision” in areas such as employment, housing, financial services, healthcare, insurance, or legal services. The CAIA requires developers and deployers to exercise “reasonable care” to prevent algorithmic discrimination, conduct impact assessments, implement risk management programs, and provide disclosures to consumers and to the Colorado Attorney General. While its risk-based structure and impact assessment requirements resemble the EU AI Act’s framework for categorizing “high-risk” systems, Colorado’s law is narrower, with enforcement handled exclusively at the state level and without a private right of action.
Both the CAIA and the EU AI Act aim to protect consumers from algorithmic discrimination and other harms caused by AI systems, placing obligations on both developers and deployers and emphasizing transparency, accountability, user disclosures, and redress mechanisms for adverse decisions. The CAIA, however, imposes some additional requirements, including mandated annual reviews of the deployment of “high-risk” systems. By way of further example, California Assembly Bill 2013 (CA AB 2013), which takes effect on January 1, 2026, aligns with the EU AI Act’s emphasis on training data transparency. Specifically, Section 3111(a) of CA AB 2013 parallels Article 53(1)(d) of the EU AI Act by requiring developers of generative AI systems to publicly disclose detailed documentation about the datasets used for training. This includes information on the source and type of data, its copyright status, and whether it contains personal or synthetic data. Emerging state-level legislation such as the CAIA and California’s AB 2013 reflects a growing convergence in the U.S. with the EU’s regulatory approach in areas such as transparency, accountability, and protections against algorithmic discrimination.