LR-101 · Module 2

Terms That Actually Matter

3 min read

Every contract has dozens of provisions. In AI engagements, five of them create ninety percent of the risk. I have reviewed hundreds of AI-related agreements, and the pattern is consistent: organizations spend their negotiation capital on payment terms and delivery timelines while leaving untouched the five provisions that actually determine liability exposure. Here is what to focus on.

  1. Indemnification. Who pays when things go wrong. In AI contracts, this must address: model output errors, IP infringement by AI-generated content, data breaches involving AI-processed information, and regulatory non-compliance. If the indemnification clause does not mention AI-specific scenarios, it was written for a different kind of engagement.
  2. Intellectual Property Rights. Who owns what the AI creates. This is genuinely unsettled law. Your contract must specify: ownership of AI-generated deliverables, rights to training data and fine-tuned models, license scope for underlying AI models, and what happens to client data used in AI processing. Ambiguity here is a lawsuit waiting for a trigger.
  3. Data Rights & Privacy. What happens to the data the AI touches. This must cover: data retention and deletion obligations, restrictions on using client data for model training, compliance with applicable privacy regulations, and breach notification procedures specific to AI data processing. "We handle data responsibly" is not a contract provision.
  4. Limitation of Liability. The ceiling on financial exposure. Standard software liability caps may be inadequate for AI because the potential harm from AI errors scales differently. Consider: separate caps for AI-specific damages, carve-outs for gross negligence or willful misconduct, and whether consequential damages exclusions adequately address AI failure scenarios.
  5. Termination & Transition. What happens when the engagement ends. AI engagements create unique exit risks: model dependencies, data extraction requirements, transition of AI-dependent workflows, and the status of fine-tuned models and training data upon termination. If the exit is not clean in the contract, it will not be clean in practice.