LR-101 · Module 1
The Regulatory Environment
3 min read
The regulatory landscape for AI is moving faster than most organizations realize. If you are operating under the assumption that AI is unregulated, you are already behind. The EU AI Act is the most comprehensive framework to date, but it is far from the only one. US state legislatures, industry regulators, and international bodies are all producing requirements that will affect how you deploy, document, and contract for AI services.
- **The EU AI Act.** Classifies AI systems by risk level: unacceptable (banned outright), high-risk (heavy compliance obligations), limited-risk (transparency obligations), and minimal-risk (largely unregulated). If you serve European clients or your AI processes data from EU residents, this applies to you. The penalties are GDPR-scale: up to 35 million euros or 7% of global annual turnover, whichever is higher (see the penalty sketch after this list).
- **US State Laws.** Colorado, Connecticut, Illinois, and others have enacted or proposed AI-specific legislation. Requirements vary by state but commonly include disclosure obligations (telling people when they are interacting with AI), bias auditing requirements, and restrictions on automated decision-making in employment and lending. There is no comprehensive federal AI law yet, so compliance is a state-by-state patchwork (see the sketch after this list).
- **Industry-Specific Requirements.** Financial services, healthcare, and government contracting each have sector-specific AI requirements layered on top of general regulation: HIPAA implications for AI processing patient data, SEC guidance on AI in financial advisory, FedRAMP considerations for AI in government systems. If your client operates in a regulated industry, the AI rules are stricter than the general framework.
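
To make the penalty math concrete, here is a minimal Python sketch of the risk tiers and the top penalty band. The tier labels and the `max_penalty_eur` helper are illustrative names we made up for this lesson, not anything defined in the Act itself; the only figures taken from the framework are the 35 million euro / 7% ceiling described above.

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers described above (labels illustrative)."""
    UNACCEPTABLE = "banned outright"
    HIGH = "heavy compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

def max_penalty_eur(global_turnover_eur: float) -> float:
    """Ceiling for the most serious violations: EUR 35M or 7% of
    global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# A firm with EUR 2B in global turnover faces a ceiling of EUR 140M,
# because 7% of turnover exceeds the EUR 35M floor.
print(f"EUR {max_penalty_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```

Notice that for any business with more than 500 million euros in turnover, the percentage prong, not the flat amount, sets the ceiling.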
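To see why the state patchwork matters operationally, here is a short sketch of the union logic. The `STATE_OBLIGATIONS` mapping is entirely hypothetical placeholder data, not a summary of what any statute actually requires; the point is that a multi-state deployment inherits every obligation that applies anywhere it operates.

```python
# Hypothetical obligation map for illustration only -- not legal advice,
# and the per-state entries are placeholders, not statements of what
# each state's statute actually requires.
STATE_OBLIGATIONS: dict[str, set[str]] = {
    "CO": {"disclosure", "bias_audit", "adm_restrictions"},
    "CT": {"disclosure", "bias_audit"},
    "IL": {"disclosure", "adm_restrictions"},
}

def obligations_for(deployment_states: list[str]) -> set[str]:
    """Union of obligations across every state a system is deployed in:
    a multi-state deployment must satisfy all applicable rules at once."""
    combined: set[str] = set()
    for state in deployment_states:
        combined |= STATE_OBLIGATIONS.get(state, set())
    return combined

# Deploying in all three states means satisfying the union of their rules.
print(sorted(obligations_for(["CO", "CT", "IL"])))
```

In practice this is why multi-state operators tend to build to the strictest common denominator rather than maintain fifty separate compliance postures.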