RC-401i · Module 1

The AI Deployment Contract Checklist

Most AI deployments reach legal review after the architecture is designed, the vendor is selected, and the timeline is locked. That sequencing is backwards. Legal review is not a rubber stamp at the end of a process. It is a constraint that shapes the design, the vendor selection, and the timeline. When you hand me a contract three days before go-live, you are not asking for a legal review. You are asking me to find a way to say yes to something that was already decided without me. I am going to find the problems. The only question is whether we have enough time to fix them.

The AI deployment contract checklist exists to prevent that scenario. It is a structured review of every agreement that governs the system before a single line of code runs in production. Three categories of documents require review: data processing agreements with every vendor who touches customer data, liability and indemnification clauses that define who pays when the model makes a wrong decision, and acceptable use provisions that define what the model is and is not permitted to do within the bounds of the contract.

1. Data Processing Agreements (DPAs). Every vendor in the AI pipeline who receives, stores, processes, or transmits personal data requires a signed DPA. This includes the LLM provider, the vector database vendor, the monitoring platform, and any third-party enrichment services. The DPA must specify the legal basis for processing, data retention limits, sub-processor disclosure requirements, and breach notification timelines. If a vendor cannot provide a DPA, they are not in the pipeline. Full stop.
2. Liability and Indemnification Clauses. AI outputs are decisions. When a model recommends a credit limit, suggests a medical protocol, or flags a transaction as fraudulent, a human or automated system acts on that output. Establish contractually who is liable when the output is wrong. Review the vendor's indemnification scope; most LLM providers disclaim all liability for model outputs. Your organization needs to understand the gap between what the vendor covers and what your errors and omissions policy covers. If there is a gap, the board needs to know before deployment.
3. Acceptable Use Provisions. Every major LLM provider publishes acceptable use policies. These are not advisory. Violating them can result in immediate service termination, mid-production, mid-customer-engagement, with no recourse. Review every use case your system will perform against the vendor's acceptable use policy before deployment. Document where the use case is clearly permitted, where it requires clarification, and where it approaches a boundary. Gray areas require written confirmation from the vendor before you build on them.
4. Subcontractor and Sub-Processor Disclosure. Your AI system is not one vendor. It is a chain of vendors. Your LLM provider uses cloud infrastructure from a third party. Your vector database is hosted by another. Each link in that chain must be disclosed in your DPAs and reviewed for compliance. Request the full sub-processor list from every vendor. If a sub-processor operates in a jurisdiction that is incompatible with your data residency requirements, that is a legal stop, not an engineering workaround.
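Much of the checklist above can be turned into a machine-checkable pre-deployment gate. The following is a minimal illustrative sketch, not a compliance tool: every vendor name, field, and the data-residency allowlist are hypothetical, the use-case statuses are assumed to come from a prior acceptable-use review (item 3), and the liability gap in item 2 stays a contractual question that no script can model.

```python
from dataclasses import dataclass, field

# Assumption: your data residency policy only permits these jurisdictions.
ALLOWED_JURISDICTIONS = {"EU", "UK"}

@dataclass
class Vendor:
    name: str
    dpa_signed: bool  # item 1: is a signed DPA on file?
    # item 4: jurisdictions of every disclosed sub-processor
    sub_processor_jurisdictions: list = field(default_factory=list)

def review(vendors, use_cases):
    """Return a list of legal stops; an empty list means the gate passes."""
    stops = []
    for v in vendors:
        if not v.dpa_signed:
            stops.append(f"{v.name}: no signed DPA, vendor is out of the pipeline")
        for j in v.sub_processor_jurisdictions:
            if j not in ALLOWED_JURISDICTIONS:
                stops.append(f"{v.name}: sub-processor in {j} violates data residency")
    # item 3: statuses recorded during the acceptable-use review
    for uc, status in use_cases.items():
        if status != "permitted":
            stops.append(f"use case '{uc}': {status}, needs written vendor confirmation")
    return stops

# Illustrative inputs only.
vendors = [
    Vendor("llm-provider", dpa_signed=True, sub_processor_jurisdictions=["EU", "US"]),
    Vendor("vector-db", dpa_signed=False),
]
use_cases = {"fraud-flagging": "permitted", "credit-limits": "gray-area"}

for stop in review(vendors, use_cases):
    print(stop)
```

The design choice mirrors the text: every failed check is a stop, not a warning, and deployment proceeds only when the list is empty.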