AS-201a · Module 3
The Checklist
3 min read
Every pilot runs a pre-flight checklist. Every surgeon runs a pre-procedure checklist. Every professional whose work has consequences runs a checklist before the work begins. If you are deploying an AI agent that will have access to your credentials, your data, and your digital life, you run a checklist too.
This is the checklist. Print it. Pin it to your wall. Run it before every deployment. No exceptions.
- **1. Hosting Isolation:** Is the agent deployed on dedicated infrastructure separate from your personal devices and network? If the answer is no, stop. Go back to Lesson 5. Do not proceed.
- **2. Credential Separation:** Are all credentials (hosting, API keys, service tokens, database passwords) unique to this deployment? Do any credentials overlap with personal or other business systems? Check every single one.
- **3. Network Boundary:** Is there a lateral movement path from the agent to any personal device, data store, or other system? If a hostile actor gained full control of the agent's server, what else could they reach? The answer should be "nothing."
- **4. Authentication Gate:** Does the agent require authentication before accepting commands? Can an unauthenticated request reach the agent from the public internet? If yes, you are one of the 30,000. Fix it.
- **5. Encryption in Transit:** Is all communication to and from the agent encrypted (TLS/HTTPS at minimum)? Plaintext transmission means every credential and every command is visible to anyone monitoring the network path.
- **6. Access Logging Enabled:** Are access attempts logged with timestamps, source IPs, and command details? Can you determine, right now, who accessed the agent in the last 24 hours? If not, enable logging before going live.
- **7. Audit Schedule Set:** Is there a recurring calendar event (monthly at minimum) to review logs, verify credential isolation, and confirm no new access paths have been created? Put it on the calendar now. Not later. Now.
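Item 2 is the easiest to automate, and "check every single one" goes faster with a script. Here is a minimal sketch of a credential-overlap check: the inventory format, system names, and credential values below are illustrative assumptions, not part of this course's tooling.

```python
# Sketch: automated check for item 2 (Credential Separation).
# Assumption: you maintain an inventory mapping each system to the
# set of credential values (or credential fingerprints) it uses.
from collections import defaultdict

def find_shared_credentials(inventory):
    """Return every credential that appears in more than one system,
    mapped to the sorted list of systems that share it."""
    users = defaultdict(set)
    for system, creds in inventory.items():
        for cred in creds:
            users[cred].add(system)
    return {cred: sorted(systems)
            for cred, systems in users.items()
            if len(systems) > 1}

# Hypothetical inventory. One token is reused across deployments,
# which is exactly the overlap item 2 tells you to eliminate.
inventory = {
    "agent-host":    {"tok_agent_api", "pw_agent_db"},
    "personal-mail": {"pw_personal"},
    "side-project":  {"tok_agent_api"},   # shared with agent-host
}

shared = find_shared_credentials(inventory)
# A clean deployment yields an empty dict; anything listed here
# must be rotated and re-issued per deployment before going live.
```

In practice you would feed this hashed credential fingerprints rather than raw secrets, but the logic is the same: any credential with more than one owner fails the check.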
Here is the test I want you to apply to every agent deployment, including your own. I call it the 30,001st Case Study test.
Imagine that tomorrow, a security researcher scans the internet and finds your agent. They run the same vulnerability assessment that found those 30,000 OpenClaw instances. They check for authentication. They check for encryption. They check for network isolation. They check for credential exposure.
Would your deployment survive that assessment? Or would you be case study number 30,001?
If the answer is anything other than an immediate, confident "yes" — go back to the checklist. Run every item. Fix what needs fixing. Then run it again.
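The assessment above is strict conjunction: passing three of four checks is still a failure. A minimal sketch of that logic, with field names that are assumptions for illustration rather than any real scanner's output format:

```python
# Sketch: the "30,001st Case Study test" as a pass/fail gate.
# Field names are hypothetical; they mirror the four checks the
# assessment runs (auth, encryption, isolation, credentials).

ASSESSMENT_ITEMS = (
    "requires_auth",       # check 4: no unauthenticated commands
    "tls_only",            # check 5: no plaintext transport
    "network_isolated",    # check 3: no lateral movement path
    "credentials_unique",  # check 2: no shared credentials
)

def survives_assessment(findings):
    """Return (verdict, failed_items). Every item must hold;
    a single failure makes you case study number 30,001."""
    failed = [item for item in ASSESSMENT_ITEMS
              if not findings.get(item)]
    return (len(failed) == 0, failed)

verdict, failed = survives_assessment({
    "requires_auth": True,
    "tls_only": True,
    "network_isolated": False,   # say, reachable from a personal NAS
    "credentials_unique": True,
})
# verdict is False; failed names the items to fix before going live.
```

Note that a missing finding counts as a failure: if you have not verified an item, you do not get credit for it.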
The question isn't "will this work?" — it's "what happens when this goes wrong?"
— Greg (via CLU), Ryan Consulting
Peter Steinberger is genuinely brilliant. Nearly a billion people use apps powered by his software. He bootstrapped PSPDFKit for thirteen years. He built something extraordinary with OpenClaw. And yet the deployment side skipped the fundamentals. It was not a sophistication problem. It was a curriculum problem. Nobody had built the course that would have made 30,000 people pause before deploying.
This is that course. These are those fundamentals. They are not glamorous. They are not exciting. They will not get 201,000 stars on GitHub. But they are the difference between a deployment that survives the assessment and one that ends up in CrowdStrike's removal database.