AS-201b · Module 3
Organizational Security Policies
3 min read
Technical controls are necessary but not sufficient. You can build the most sophisticated defense-in-depth architecture, and one employee pasting the customer database into ChatGPT undoes all of it. Organizational policies are the human layer of AI security — the rules, training, and accountability structures that ensure the people using AI systems do not accidentally bypass the technical controls.
- **Define the Data Classification Scheme.** Categorize your organization's data: public, internal, confidential, restricted. For each category, define which AI tools may process it and under what conditions. "Public data can be used with any AI tool. Confidential data requires an enterprise-tier AI tool with a data processing agreement. Restricted data cannot be processed by external AI tools under any circumstances."
- **Establish the Tool Allowlist.** Maintain a list of approved AI tools, each with its data classification clearance. Claude Enterprise with your DPA covers confidential data. Free-tier ChatGPT covers public data only. Unapproved tools cover nothing. The allowlist eliminates the ambiguity that leads to "I thought it was fine."
- **Train on the Policy.** A policy nobody knows about is a policy nobody follows. Quarterly training sessions — short, practical, scenario-based — ensure that everyone who touches AI tools understands the classification system, the allowlist, and the consequences of violation. Make the training short enough to hold attention and specific enough to be actionable.
- **Audit and Enforce.** Monitor for policy compliance. Review AI tool usage logs. Check for shadow AI: unapproved tools adopted by teams that never asked permission. When violations are found, the response should be educational first and punitive second. The goal is compliance, not fear.
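The four steps above can be sketched as a single enforceable policy check. This is a minimal illustration, not a prescribed implementation: the tier ordering, tool names, clearance assignments, and log fields are all assumptions chosen for the example.

```python
from enum import IntEnum

class DataTier(IntEnum):
    """Step 1 (assumed ordering): higher value = more sensitive."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3  # never processed by external AI tools

# Step 2: allowlist mapping approved tools to their clearance tier.
# Tool identifiers and clearances are illustrative assumptions.
ALLOWLIST = {
    "claude-enterprise": DataTier.CONFIDENTIAL,  # enterprise tier, DPA in place
    "chatgpt-free": DataTier.PUBLIC,             # public data only
}

def tool_allowed(tool: str, data: DataTier) -> bool:
    """Steps 1-2 combined: a tool may process data at or below its
    clearance; RESTRICTED data and unapproved tools are always denied."""
    if data is DataTier.RESTRICTED or tool not in ALLOWLIST:
        return False
    return data <= ALLOWLIST[tool]

def audit(usage_log: list) -> list:
    """Step 4: return log entries that violate the policy, which also
    surfaces shadow AI (tools absent from the allowlist entirely)."""
    return [e for e in usage_log if not tool_allowed(e["tool"], e["data_tier"])]
```

The audit function doubles as the shadow-AI check because unapproved tools fail `tool_allowed` by construction — for example, a log entry naming a hypothetical `"unapproved-tool"` is flagged even for public data.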
Fundamentals aren't boring. Fundamentals are load-bearing.
— DRILL, Ryan Consulting Academy