RC-401i · Module 1

Regulatory Landscape: GDPR, CCPA, and the EU AI Act

5 min read

Enterprise AI deployment in 2026 operates under three overlapping regulatory frameworks, each with different geographic scope, enforcement mechanisms, and penalty structures. Assuming that compliance with one satisfies the other two is among the most expensive mistakes an organization can make. Let me be precise about what each framework requires, and where they conflict.

GDPR covers any processing of personal data belonging to EU residents, regardless of where your organization is located. If your AI system processes data about EU residents, GDPR applies. CCPA covers California residents and grants them specific rights over how their personal information is used. The EU AI Act, now in phased enforcement, classifies AI systems by risk level and imposes conformity assessment, documentation, and human oversight requirements on high-risk systems. These three frameworks are not alternatives: for a US organization deploying AI to a global customer base, all three may apply simultaneously.
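
To make the overlap concrete, here is one way a deployment team might triage which frameworks an individual data subject triggers. This is a minimal sketch under stated assumptions: DataSubject, EU_EEA (abridged), and applicable_frameworks are hypothetical names of my own, and the AI Act trigger is simplified to residency rather than the Act's actual market-placement test. Real jurisdictional analysis belongs with counsel.

```python
# Illustrative jurisdiction triage. The DataSubject shape, the abridged
# EU_EEA set, and the AI Act trigger are teaching assumptions, not legal
# analysis.
from dataclasses import dataclass
from enum import Enum, auto


class Framework(Enum):
    GDPR = auto()
    CCPA = auto()
    EU_AI_ACT = auto()


@dataclass
class DataSubject:
    country: str                 # ISO 3166-1 alpha-2, e.g. "DE", "US"
    us_state: str | None = None  # two-letter state code when country == "US"


# Abridged for the example; a real list covers every EU/EEA member state.
EU_EEA = {"AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL", "PL", "SE"}


def applicable_frameworks(subject: DataSubject, system_is_ai: bool) -> set[Framework]:
    """Map one data subject to the frameworks their data triggers.

    Simplification: the AI Act is flagged whenever an AI system's output
    reaches someone in the EU; its actual trigger is placement on the EU
    market or use of the system's output within the EU.
    """
    hits: set[Framework] = set()
    if subject.country in EU_EEA:
        hits.add(Framework.GDPR)
        if system_is_ai:
            hits.add(Framework.EU_AI_ACT)
    if subject.country == "US" and subject.us_state == "CA":
        hits.add(Framework.CCPA)
    return hits


# A US deployment with German and Californian users triggers all three.
assert applicable_frameworks(DataSubject("DE"), system_is_ai=True) == {
    Framework.GDPR, Framework.EU_AI_ACT,
}
assert applicable_frameworks(DataSubject("US", "CA"), system_is_ai=True) == {
    Framework.CCPA,
}
```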

Do This

  • Map the geographic distribution of data subjects before deployment begins
  • Classify your AI system under EU AI Act risk tiers (prohibited, high-risk, limited-risk, minimal-risk)
  • Implement data subject rights fulfillment — access, deletion, portability — before go-live
  • Conduct and document a Data Protection Impact Assessment (DPIA) for high-risk processing
  • Establish a record of processing activities (RoPA) entry for every AI workflow (a minimal sketch follows this list)
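
Because every AI workflow needs its own RoPA entry, it helps to treat the register as structured data rather than a spreadsheet of prose. The dataclass below is an illustrative shape loosely following the GDPR Article 30(1) elements; RopaEntry and its field names are assumptions to adapt to your own registry schema, not a reference model.

```python
# Illustrative RoPA entry for one AI workflow. Field names are assumptions
# loosely following GDPR Article 30(1); adapt to your registry's schema.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RopaEntry:
    workflow_name: str                  # e.g. "resume-screening-v2"
    controller: str                     # legal entity acting as controller
    purposes: list[str]                 # purposes of processing
    data_categories: list[str]          # e.g. ["employment history"]
    data_subject_categories: list[str]  # e.g. ["job applicants"]
    recipients: list[str]               # processors, vendors, internal teams
    third_country_transfers: list[str]  # destinations plus safeguard relied on
    retention_period: str               # per-category retention schedule
    security_measures: list[str]        # technical/organizational measures
    dpia_reference: str | None = None   # link to the DPIA, if one was required
    last_reviewed: date = field(default_factory=date.today)
```

Storing entries as code or structured config means the register can be diffed, reviewed, and versioned alongside the workflow it documents, which is a practical way to keep it current.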

Avoid This

  • Assume US compliance satisfies GDPR requirements
  • Defer privacy engineering to a post-launch phase
  • Rely on vendor SOC 2 reports as a substitute for your own compliance assessment
  • Launch a high-risk AI system without a completed conformity assessment under the EU AI Act
  • Treat the right to explanation as optional for automated decision-making systems

The EU AI Act risk classification deserves specific attention because it determines the compliance burden before a single contract is signed. Prohibited systems, meaning those that use subliminal manipulation, social scoring by public authorities, or real-time remote biometric identification in publicly accessible spaces (permitted only under narrow law-enforcement exceptions), cannot be deployed. Full stop. High-risk systems, which include AI in employment decisions, credit assessments, critical infrastructure, education, law enforcement, and essential services, require conformity assessments, technical documentation, human oversight mechanisms, and registration in the EU database of high-risk AI systems before market deployment.
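
A rough triage of the four tiers can live in code as a first-pass filter, provided no one mistakes it for legal classification. In the sketch below, RiskTier, the use-case sets, and triage are illustrative assumptions; the strings are abridged paraphrases of Article 5 and Annex III, not quotations from the Act.

```python
# First-pass risk-tier triage for the EU AI Act. The category strings are
# abridged paraphrases of Article 5 / Annex III; treat the output as a
# prompt for legal review, never as a classification of record.
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"


PROHIBITED_USES = {
    "subliminal manipulation",
    "social scoring by public authorities",
    "real-time remote biometric identification in public spaces",
}

HIGH_RISK_USES = {  # abridged from Annex III
    "employment decisions",
    "credit assessment",
    "critical infrastructure",
    "education",
    "law enforcement",
    "essential services",
}

TRANSPARENCY_USES = {"chatbot interaction", "synthetic media generation"}


def triage(intended_use: str) -> RiskTier:
    if intended_use in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if intended_use in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    if intended_use in TRANSPARENCY_USES:
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK


print(triage("employment decisions"))     # RiskTier.HIGH_RISK
print(triage("product recommendations"))  # RiskTier.MINIMAL_RISK
```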

If your AI system touches hiring, performance evaluation, creditworthiness, insurance underwriting, or any law-enforcement-adjacent function, you are operating in high-risk territory under the EU AI Act. The conformity assessment is not a checkbox. It is a structured evaluation of your system's accuracy, robustness, cybersecurity measures, and human oversight capabilities. Document the assessment: the technical documentation must be retained for ten years after the system is placed on the market or put into service.
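
As a small worked example of that retention clock, assuming the rule as stated above, the hypothetical retention_deadline helper below computes the earliest discard date from the market-placement date.

```python
# Worked example of the ten-year retention clock. The deadline runs from
# market placement or putting into service, per the rule stated above.
from datetime import date


def retention_deadline(placed_on_market: date, years: int = 10) -> date:
    """Earliest date the conformity documentation may be discarded."""
    try:
        return placed_on_market.replace(year=placed_on_market.year + years)
    except ValueError:  # Feb 29 placement date, non-leap target year
        return placed_on_market.replace(year=placed_on_market.year + years, day=28)


print(retention_deadline(date(2026, 3, 1)))  # 2036-03-01
```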