AS-101 · Module 1
New Tools, New Threats
3 min read
Good news, everyone! AI introduces an entirely new category of security vulnerabilities that your existing training does not cover. Traditional cybersecurity focuses on network perimeters, phishing, and malware. Those threats still exist. But AI adds three new attack surfaces that most organizations have never thought about: exposed API keys, prompt injection, and data exfiltration through AI assistants.
Here is why this matters right now. AI adoption is outpacing security awareness by a factor of ten. Teams are deploying AI tools, connecting them to production data, and granting them API access — all before anyone asks whether those tools are secure. The traditional security review process was built for software that does what you tell it. AI does what you ask it, which is a fundamentally different contract with different failure modes.
- Attack Surface 1: API Keys Every AI service requires an API key. That key is a credential with a billing account attached. If it leaks — hardcoded in a repo, pasted into a chat, stored in plaintext — anyone who finds it can run up charges, access your data, and impersonate your application. API key leakage is the most common AI-specific vulnerability and the easiest to prevent.
- Attack Surface 2: Prompt Injection AI models follow instructions embedded in their input. If user-supplied text can reach the model alongside your system prompt, an attacker can override your instructions. This is not a theoretical risk — it is the most widely exploited AI vulnerability in production systems today.
- Attack Surface 3: Data Exfiltration When employees paste confidential data into AI tools, that data leaves your security perimeter. Depending on the provider, it may be stored, logged, or used for training. The AI assistant becomes an unintended data export channel that bypasses every DLP policy you have.