LR-201b · Module 1
The EU AI Act in Practice
4 min read
The EU AI Act is the most comprehensive AI regulation in force. If you place AI systems on the EU market, serve European clients, or deploy systems whose output is used in the EU, it applies to you. Understanding it in principle is not enough. You need to understand it in practice — what it actually requires you to do, document, and maintain.
- Risk Tier: Unacceptable. Banned outright. Social scoring systems, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and AI systems that exploit vulnerabilities of specific groups. If your AI use case falls here, the compliance obligation is straightforward: do not deploy it.
- Risk Tier: High. Heavy compliance requirements. Includes AI in hiring, credit scoring, education, law enforcement, critical infrastructure, and healthcare. Requirements: risk management system, data governance, technical documentation, record-keeping, transparency to users, human oversight, and accuracy and robustness testing. This is the tier where most enterprise AI consulting engagements land.
- Risk Tier: Limited. Transparency obligations. Includes chatbots, deepfake generators, and emotion recognition systems. The primary requirement is disclosure — users must be informed they are interacting with AI. If your AI-powered customer service tool does not disclose its AI nature, you are non-compliant.
- Risk Tier: Minimal. Largely unregulated under the AI Act, though general data protection rules still apply. Spam filters, AI-powered recommendations for internal use, and similar low-impact applications. Minimal does not mean exempt — it means the AI Act imposes no additional requirements beyond existing law.
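The tier system above is effectively a lookup table: classify the use case, then read off the obligations. As a minimal sketch of how a team might encode that checklist internally (all names and use-case keys here are hypothetical illustrations, not legal advice or an official taxonomy):

```python
# Hypothetical sketch: the four AI Act risk tiers as data, so a
# per-use-case compliance checklist can be generated automatically.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative use-case -> tier mapping, mirroring the examples above.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "hiring_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

# Obligations per tier, paraphrasing the requirements listed above.
TIER_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: [
        "risk management system",
        "data governance",
        "technical documentation",
        "record-keeping",
        "transparency to users",
        "human oversight",
        "accuracy and robustness testing",
    ],
    RiskTier.LIMITED: ["disclose AI nature to users"],
    RiskTier.MINIMAL: ["no additional AI Act requirements"],
}

def obligations_for(use_case: str) -> list[str]:
    """Return the checklist for a known use case; unknown cases need review."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        return ["unmapped use case: perform a risk-tier assessment first"]
    return TIER_OBLIGATIONS[tier]
```

The useful design point is that unmapped use cases fail loudly toward review rather than silently defaulting to the minimal tier — under-classifying a high-risk system is the expensive mistake.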