AS-201b · Module 1
Beyond Traditional Threats
4 min read
Good news, everyone! Everything you learned about traditional security still applies. Network perimeters, authentication, encryption, access control — all still load-bearing. But AI systems introduce an entirely new category of attack surface that your traditional threat model does not cover. The model itself is an attack surface. The prompt is an attack surface. The training data is an attack surface. The output is an attack surface. And here is the part that makes it fascinating and terrifying in equal measure: these surfaces are not independent. They interact.
A traditional web application has a clear boundary between trusted code and untrusted input. You validate the input, sanitize it, and pass it to code that behaves deterministically. An AI system has no such boundary. The model's behavior is probabilistic. The "code" changes with every prompt. User input and system instructions exist in the same context window, and the model cannot fundamentally distinguish between them. This is not a bug that will be fixed in the next release. It is an architectural property of how large language models work.
- Traditional Surfaces (Still Apply) Network exposure, authentication weaknesses, unencrypted data in transit, unpatched dependencies, misconfigured cloud resources. These are the fundamentals from AS-101 and AS-201a. If you skipped those courses, go back. These are prerequisites, not optional reading.
- Model-Specific Surfaces The model accepts instructions from both the system prompt and user input. An attacker who can influence the input can potentially influence the model's behavior. The model may hallucinate, reveal training data patterns, or follow injected instructions. These failure modes have no equivalent in traditional software.
- Data Flow Surfaces Data enters the system as context, leaves as output, and may be stored in conversation history, logs, or vector databases. Each transition point is a potential exfiltration or injection opportunity. Traditional DLP tools were not designed for natural language data channels.
- Integration Surfaces When AI agents connect to external tools — databases, APIs, file systems, email — each integration creates a new attack path. A compromised agent with database access is not just a chatbot problem. It is a data breach waiting to happen.