RC-401d · Module 1

AI Policy Framework Design

4 min read

A governance framework that exists only in a PDF on someone's SharePoint is not governance. It is a liability artifact — evidence that you knew what you should have done and chose not to operationalize it. Policy framework design is the discipline of turning regulatory obligations into enforceable organizational behavior, and it requires two things most governance efforts lack: a regulatory map that accounts for jurisdictional overlap, and an architecture decision record that traces every policy to the technical control that enforces it.

ATLAS calls this "the governance integration surface." Every policy creates a contract between the organization and its regulators. Every contract needs an architecture underneath it — systems that enforce the policy automatically, not humans who remember to follow it. When I design a governance framework, I start with the regulatory map: which laws apply, which provisions overlap, which requirements conflict. Then ATLAS maps each obligation to a technical layer: where the enforcement point lives, what system implements it, and what happens when the control fails. The policy is the promise. The architecture is the proof.
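The obligation-to-enforcement mapping can be sketched as a small data model. This is illustrative only — the class and field names are assumptions, not ATLAS's actual schema — but it captures the contract idea: no obligation is "governed" until it names a system, a control, and a failure behavior.

```python
from dataclasses import dataclass
from enum import Enum

class FailureMode(Enum):
    """What happens when the enforcing control itself fails."""
    FAIL_CLOSED = "block the action until the control recovers"
    FAIL_OPEN_WITH_ALERT = "allow the action but alert the governance team"

@dataclass
class Obligation:
    regulation: str   # e.g. "EU AI Act"
    provision: str    # the specific article or section creating the duty
    requirement: str  # what the organization has promised to do

@dataclass
class EnforcementPoint:
    system: str             # where the enforcement point lives
    control: str            # what the system does to enforce the policy
    on_failure: FailureMode # documented behavior when the control fails

@dataclass
class GovernanceContract:
    """One policy promise paired with the architecture that proves it."""
    obligation: Obligation
    enforcement: EnforcementPoint

    def is_operationalized(self) -> bool:
        # A policy with no named system or control is a PDF, not governance.
        return bool(self.enforcement.system and self.enforcement.control)
```

A framework review then reduces to a mechanical check: every `Obligation` on the regulatory map must sit inside a `GovernanceContract` whose `is_operationalized()` returns true.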

  1. Regulatory Inventory. Enumerate every regulation that applies to your AI operations. EU AI Act risk classifications. US state-level AI disclosure laws. Industry-specific mandates — HIPAA for healthcare AI, SEC guidance for financial advisory models, FedRAMP for government systems. Map each regulation to the specific provisions that create obligations. A regulation is not a single requirement; it is a collection of provisions, each with its own compliance surface.
  2. Jurisdictional Overlap Analysis. Regulations conflict. The EU AI Act requires transparency disclosures that may conflict with trade secret protections under US law. State-level bias auditing requirements vary in scope and methodology. When two regulations impose contradictory obligations, the governance framework must resolve the conflict explicitly — not by ignoring one, but by documenting the decision and the rationale. [RISK]: unresolved jurisdictional conflicts are the single most common audit finding in multi-market AI deployments.
  3. Architecture Decision Records. Every policy provision maps to an architecture decision record — a documented technical choice that implements the requirement. "All AI-generated content must be labeled as AI-generated" becomes an ADR specifying: which systems generate content, where the label is injected, what metadata is attached, and how the labeling is verified. ATLAS maintains these records as living documents. The ADR is the bridge between what the law requires and what the system does.
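The labeling example in step 3 could be captured as a record like the following. This is a sketch under assumed field names, not ATLAS's actual ADR format; the system names and verification job are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ADR:
    """An architecture decision record tying one provision to its implementation."""
    policy: str                    # the policy provision being implemented
    generating_systems: list[str]  # which systems produce AI-generated content
    injection_point: str           # where the label is added to the output
    metadata: dict[str, str]       # what is attached to each labeled artifact
    verification: str              # how the labeling is checked in practice

# Hypothetical record for the content-labeling requirement.
adr_content_labeling = ADR(
    policy="All AI-generated content must be labeled as AI-generated",
    generating_systems=["marketing-copy-service", "support-chat"],
    injection_point="response middleware, before content leaves the service",
    metadata={"generator": "model identifier", "labeled_at": "timestamp"},
    verification="nightly sampling job asserts the label and metadata are present",
)
```

Kept in version control alongside the systems it describes, a record like this stays a living document: when the injection point moves or a new generating system ships, the diff to the ADR is part of the change.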