California’s ADMT Regulations: Balancing AI Innovation and Consumer Protection
- Alina Huang
- Nov 18
- 7 min read
AI software now helps decide who gets hired, approved for credit, admitted to school, or flagged for healthcare risk, shaping outcomes that profoundly affect people’s lives. As the home of Silicon Valley, California sits at the juncture of innovation and accountability. Its tech sector generates about $542.5B annually, 16.7% of the state’s economy, and employs over 1.5M workers across 57,500 firms (Digital Silk, 2025). The state also dominates AI, housing 32 of the world’s top 50 AI companies, including Nvidia, Google, and OpenAI (Governor of California, 2025). That dominance is underpinned by record investment: California startups raised over $110 billion in 2025 through July, nearly two-thirds of all U.S. venture capital (Crunchbase News, 2025).
However, California’s dominance in AI innovation has made addressing the risks of these powerful technologies increasingly urgent. The state faces intense pressure to balance its status as a global AI leader with meaningful consumer protection. To that end, in 2025 the California Privacy Protection Agency (CPPA) adopted new regulations on automated decision-making technology (ADMT) under the California Privacy Rights Act (CPRA). These rules target the growing deployment of automated systems in significant decisions, such as employment, housing, healthcare, education, and lending, by giving Californians new rights to know about, challenge, or opt out of AI-generated outcomes. Where prior privacy law focused largely on data collection, the ADMT rules target the decision-making power of AI systems themselves; as a result, they represent the United States’ first comprehensive attempt to address the transparency and accountability of automated decision technologies.
Key Provisions of ADMT Regulations
The ADMT regulations expand California’s privacy regime by extending beyond data collection to the direct regulation of decision making by automated systems (Baker Botts, 2025). ADMT is defined broadly as any technology that “replaces or substantially replaces human decision making,” a definition that stretches from machine learning models and facial recognition to general profiling, and even to advanced spreadsheets when they have a material effect on outcomes.
Two core provisions anchor the regime. First, the Risk Assessment Rule requires companies to conduct and submit detailed assessments before undertaking high-risk activities, such as using ADMT for hiring, housing, or healthcare decisions. These assessments must detail purposes, risks, safeguards, data practices, and affected groups, with submissions due annually beginning in 2028. Second, the Cybersecurity Audit Rule makes annual independent audits mandatory for companies engaged in high-risk processing. These audits test controls such as encryption, incident response, and vendor management, and must be attested to by senior management.
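To make the Risk Assessment Rule concrete, the sketch below models what such an assessment might look like as a simple data structure. This is purely illustrative: the field names are assumptions loosely drawn from the rule’s general requirements (purposes, risks, safeguards, data practices, affected groups), not the CPPA’s official filing format.

```python
# Hypothetical ADMT risk-assessment record (illustrative only; not the
# CPPA's official filing schema). Field names loosely mirror what the
# Risk Assessment Rule asks businesses to document.
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    activity: str               # the high-risk use of ADMT being assessed
    purposes: list[str]         # why the automated system is used
    risks: list[str]            # anticipated harms to consumers
    safeguards: list[str]       # mitigations for the identified risks
    data_categories: list[str]  # personal information the system processes
    affected_groups: list[str]  # populations the decisions touch
    submission_year: int = 2028 # annual submissions begin in 2028

assessment = RiskAssessment(
    activity="automated resume screening for hiring",
    purposes=["rank applicants for recruiter review"],
    risks=["disparate impact on protected groups"],
    safeguards=["annual bias testing", "human review of all rejections"],
    data_categories=["employment history", "education records"],
    affected_groups=["job applicants in California"],
)
print(assessment)
```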
Together, these requirements mark a shift in California’s approach. Companies can no longer get by simply by publishing policies; they must demonstrate, year after year, that their systems are fair, secure, and accountable.
Strengths
One of the most powerful aspects of California’s new rules is that they go beyond earlier U.S. privacy laws by directly confronting how automated systems shape people’s lives. The Fair Credit Reporting Act (Federal Trade Commission) and the California Consumer Privacy Act (State of California Department of Justice), among others, dealt with data collection and disclosure but offered little transparency into how automated systems made decisions. In contrast, the 2025 ADMT regulations require businesses to provide pre-use notices, opt-outs or appeals, and individualized explanations where AI is used in significant contexts. This places California at the forefront of U.S. efforts to treat AI not just as a data processor but also as a decision-maker, and it gives residents a right to challenge decisions that prior law never afforded.
Globally, California’s approach is also distinct. The EU AI Act (European Parliament) subjects “high-risk” systems to strict pre-deployment conformity assessments, but its bureaucracy may stifle innovation and burden smaller players. California takes the opposite approach, emphasizing post-deployment accountability and providing individuals with notice, appeal, and explanation rights. That balance protects consumers without choking off innovation.
The Risk Assessment Rule further strengthens the framework by forcing businesses to anticipate harms before launching high-risk AI. Unlike the U.S. federal government’s voluntary NIST AI Risk Management Framework, California requires written assessments filed with regulators, establishing enforceable oversight. This is comparable to Canada’s AIDA, but California’s version is more useful in practice because it requires filings with the CPPA.
Finally, the Cybersecurity Audit Rule raises the bar above existing standards. The Equifax (2017) and SolarWinds (2020) breaches showed how costly lax security can be, yet most U.S. businesses have only had to self-attest their controls. California now mandates yearly independent audits and board-level attestations, making cybersecurity a continuous, demonstrable commitment. Even the EU’s GDPR, stringent as it is, lacks comparable ongoing audit provisions, so California’s strategy is an unprecedented step toward long-term accountability.
Weaknesses
Despite their ambition, California’s ADMT regulations contain notable weaknesses. One is enforcement capacity. The California Privacy Protection Agency (CPPA) is a young agency, created in 2020 and only beginning to exercise its full rulemaking and enforcement authority in 2023, so it still lacks the experience and resources to oversee complex AI systems. By contrast, the EU AI Act builds on decades of European regulatory groundwork, requiring independent third-party “notified bodies” to conduct conformity testing before high-risk systems are deployed. California’s strategy instead places much of the burden on the CPPA to intervene after systems are already in use, creating a risk that noncompliance will be discovered only after abuses have occurred.
A second weakness is a lack of clarity around key definitions. The terms “meaningful explanation” and “significant decision” are left vague in the CPPA’s final ADMT regulations, so companies are unsure how much information must be included in consumer notices or which decisions qualify as “significant.” This ambiguity risks divergent company interpretations and inconsistent enforcement. By comparison, Canada’s Artificial Intelligence and Data Act (Government of Canada) sets more specific thresholds for when a “high-impact” system is in play, with express coverage of sectors such as employment, credit, and healthcare. Likewise, the EU AI Act is explicit about which use cases are “high-risk,” such as biometric identification, credit scoring, and AI in education. California’s broader flexibility could foster innovation, but it also invites uncertainty and patchwork compliance for businesses trying to obey the law.
The rules could also have a disparate impact on smaller companies. Annual audits and thorough risk assessments are costs that giants like Google or Meta can absorb, but startups may not be able to. That asymmetry could stifle innovation, incentivizing smaller companies to relocate out of state or avoid high-risk AI applications altogether. Much of the R&D behind NLP, computer vision, and generative AI began in small labs before scaling up, and burdensome compliance may cement the dominance of incumbent firms while strangling diversity in Silicon Valley. By comparison, the federal NIST AI Risk Management Framework is voluntary and flexible, letting startups adopt best practices without being driven out. California’s more aggressive posture strengthens consumer protections, but possibly at the expense of competition and entrepreneurial diversity.
Another weakness is that California’s rules remain reactive rather than preventative. The EU AI Act prioritizes pre-deployment conformity testing, so high-risk systems cannot be placed on the market until they have been examined and certified. California, in contrast, allows deployment first and relies on consumer rights, risk assessments, and annual audits to address harms retrospectively. While this approach favors innovation and agility, it also creates protection gaps. For example, if a hiring AI system discriminated broadly, applicants might learn of the problem only after they had already been rejected, rather than being protected before the system was ever used.
Recommendations
California’s new rules already go further than most other U.S. regulations, yet there remains room to make them stronger and more pragmatic. A critical step would be introducing graduated obligations by risk and company size. Large firms such as Google or Meta have the resources to conduct full independent audits; small startups do not. A tiered system would let startups comply with less burdensome standards, such as standardized risk-assessment templates, without letting large corporations off the hook. This would preserve innovation while still safeguarding consumers.
A second change would be requiring pre-deployment fairness testing for high-risk systems. Most of the regulations operate only after a system is deployed. If California required testing before launch, as the EU does but in a lighter-weight form, it could prevent biased or unsafe systems from reaching the public in the first place. One way to do this is through a state-run AI sandbox, where companies test their models under supervision before launching them at scale; a minimal sketch of what such a fairness test might check appears below.
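As a concrete illustration, a basic pre-deployment fairness check can be as simple as comparing selection rates across demographic groups. The sketch below is hypothetical and not anything the ADMT regulations prescribe: it applies the well-known “four-fifths” disparate-impact heuristic to made-up audit data from a hiring model, and the group labels, data, and threshold are all assumptions for illustration.

```python
# Hypothetical pre-deployment fairness check (illustrative only).
# Applies the "four-fifths" disparate-impact heuristic: each group's
# selection rate should be at least 80% of the best-off group's rate.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def four_fifths_check(decisions, threshold=0.8):
    """Flags groups whose selection rate falls below the threshold
    relative to the highest-rate group."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: (rate / top) >= threshold for g, rate in rates.items()}

# Made-up audit data: (group, hired?) outcomes from a hiring model.
audit_log = ([("A", True)] * 40 + [("A", False)] * 60
             + [("B", True)] * 20 + [("B", False)] * 80)
print(selection_rates(audit_log))   # {'A': 0.4, 'B': 0.2}
print(four_fifths_check(audit_log)) # {'A': True, 'B': False} -> B flagged
```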
The CPPA also needs more enforcement muscle. Instead of trying to do everything in-house, the agency could partner with universities and research labs to audit risk assessments and red-team AI systems. That would scale expertise and keep oversight credible without overwhelming regulators. These new rules could then serve as a starting point for a broader conversation about safe and fair AI.
Citations
Matt. “How Big Is California’s Tech Industry: Size, Growth & Trends.” Digital Silk, 25 June 2025, www.digitalsilk.com/digital-trends/california-tech-industry/.
“ICYMI: California Is Home to 32 of the Top 50 AI Companies | Governor of California.” Governor of California, 12 Mar. 2025, www.gov.ca.gov/2025/03/12/icymi-california-is-home-to-32-of-the-top-50-ai-companies/.
Glasner, Joanna. “For Startup Funding, Every State Brings in a Pittance Compared to California.” Crunchbase News, 18 Aug. 2025, news.crunchbase.com/venture/california-leads-startup-funding-2025-data/. Accessed 19 Sept. 2025.
Tracxn Technologies Limited. “California’s Tech Funding Surges 30%, Climbing to $58.5B in Q1 2025 | Tracxn Report – Apr 2025.” Tracxn.com, 2025, www.tracxn.com/report-releases/california-tech-quarterly-funding-report-q1-2025. Accessed 19 Sept. 2025.
“The CPPA Finalizes Rules on ADMT, Risk Assessments, and Cybersecurity Audits | Thought Leadership | Baker Botts.” Baker Botts, 2025, www.bakerbotts.com/thought-leadership/publications/2025/august/a-101-of-the-cppas-finalizes-rules-on-admt-risk-assessments-and-cybersecurity-audits. Accessed 19 Sept. 2025.
Shapiro, Tracy. “CPPA Board Grapples with Public Concerns: Key Updates on Upcoming AI, Risk Assessment, and Cybersecurity Regulations.” The Data Advisor, 16 Apr. 2025, www.wsgrdataadvisor.com/2025/04/cppa-board-grapples-with-public-concerns-key-updates-on-upcoming-ai-risk-assessment-and-cybersecurity-regulations/. Accessed 19 Sept. 2025.
“CPPA Regulations Are Moving Forward: Here Is What You Need to Know.” Mintz.com, 11 Aug. 2025, www.mintz.com/insights-center/viewpoints/2826/2025-08-11-cppa-regulations-are-moving-forward-here-what-you-need. Accessed 19 Sept. 2025.
European Parliament. “EU AI Act: First Regulation on Artificial Intelligence.” European Parliament, 19 Feb. 2025, www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.
Government of Canada. “The Artificial Intelligence and Data Act (AIDA) – Companion Document.” Government of Canada, 2023, ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document.
Domino Data Lab, Inc. Adopting the NIST AI Risk Management Framework to Ensure Safety and Compliance: Impact Brief. 2024, www.domino.ai/pdfs/ImpactBrief-NIST-100724.pdf.
United States, Federal Trade Commission. Fair Credit Reporting Act. Federal Trade Commission, www.ftc.gov/legal-library/browse/statutes/fair-credit-reporting-act. Accessed 19 Sept. 2025.
California Department of Justice, Office of the Attorney General. California Consumer Privacy Act (CCPA). State of California, oag.ca.gov/privacy/ccpa. Accessed 19 Sept. 2025.


