AS-101 · Module 1
The Attacker's Perspective
3 min read
To defend a system, you need to understand how an attacker sees it. Attackers do not think about your product, your features, or your roadmap. They think about exposed surfaces. Where are the credentials? Where is the input that reaches the model? Where is the data that leaves the perimeter? They scan, they probe, and they automate.
- Step 1: Automated Scanning Bots continuously scan public repositories, exposed endpoints, and cloud services for leaked credentials and misconfigured AI deployments. This is not manual work — it is industrial-scale automation. If you expose a key or an endpoint, it will be found. The question is how quickly, and the answer is usually minutes.
- Step 2: Probe the Inputs Once an attacker finds an AI-powered system, the first thing they do is test the inputs. What happens when I ask it to reveal its instructions? What happens when I feed it adversarial text? What happens when I inject instructions into the data it processes? Every user-facing AI system is a prompt injection target until proven otherwise.
- Step 3: Follow the Data Attackers trace data flows. Where does user input go? What does the AI have access to? Can the AI be tricked into returning data it should not? If your AI assistant can read your database and respond to user queries, an attacker will try to make it read the parts of the database it should not expose.
The uncomfortable truth is that most AI deployments are not being attacked by sophisticated threat actors. They are being attacked by automated scripts running commodity exploits. The bar for exploitation is low because the bar for basic security hygiene is being missed. You do not need to defend against nation-state hackers. You need to defend against a Python script that scans GitHub for API keys. That is a much more achievable goal.