LR-301b · Module 1

AI-Specific Contract Risk Factors

3 min read

AI contracts introduce risk factors that do not exist in traditional service agreements:

- Model performance obligations that reference accuracy thresholds for probabilistic systems.
- Liability for AI-generated outputs that may be incorrect, biased, or infringing.
- Data rights provisions covering training data, model weights, and generated outputs.
- IP ownership of AI-generated work product.

Each of these creates scoring challenges because the traditional scoring dimensions do not fully capture the risk.

  1. Performance Obligation Risk. AI systems do not guarantee outcomes; they produce probabilistic outputs. A warranty that the AI system will "achieve 95% accuracy" creates a measurable obligation. A warranty that the AI system will "perform satisfactorily" creates an unmeasurable one. Score the specificity of the performance standard: vague standards are higher risk because breach is subjective (see the scoring sketch after this list). [RISK]: Any performance obligation that lacks a measurable threshold is a dispute waiting for a definition.
  2. Output Liability Risk. Who is liable when the AI produces an incorrect recommendation that the client acts on? The contract should allocate this risk explicitly. If the contract is silent on AI output liability, the default allocation is determined by the governing law, which may not favor you. Score silent output liability provisions as high risk because the allocation is unknown until tested in court.
  3. Training Data and Model Rights. Does the contract allow the provider to use client data to train or improve the model? Does the client own the model weights derived from their data? Who owns AI-generated work product? These provisions determine both the long-term value of the engagement and the long-term exposure. Score based on the breadth of rights granted and the irreversibility of the grant. [RECOMMEND]: Every AI contract should explicitly address training data rights, model improvement rights, and output IP ownership.
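
The three scoring rules above lend themselves to a simple heuristic. Below is a minimal sketch in Python, assuming a clause has already been reduced to a few structured fields; every field name, function name, and threshold here is illustrative, not part of any real scoring rubric:

```python
from __future__ import annotations

import re

# Illustrative risk levels; a real rubric would define its own scale.
LOW, MEDIUM, HIGH = 1, 2, 3

def score_performance_obligation(standard: str | None) -> int:
    """Rule 1: vague performance standards are higher risk.

    A standard with a measurable threshold (e.g. "95% accuracy")
    scores lower than a subjective one ("perform satisfactorily").
    """
    if standard is None:
        return MEDIUM  # no performance obligation stated at all
    # Crude measurability check: look for a numeric percentage threshold.
    has_threshold = bool(re.search(r"\d+(\.\d+)?\s*%", standard))
    return LOW if has_threshold else HIGH

def score_output_liability(allocated: bool) -> int:
    """Rule 2: silence on AI output liability is high risk, because
    the allocation is unknown until tested in court."""
    return LOW if allocated else HIGH

def score_data_rights(rights_granted: set[str], irrevocable: bool) -> int:
    """Rule 3: score by the breadth of rights granted and the
    irreversibility of the grant."""
    broad = {"train_on_client_data", "improve_model", "own_outputs"}
    breadth = len(rights_granted & broad)
    if breadth == 0:
        return LOW
    return HIGH if irrevocable else MEDIUM

# Usage with hypothetical clause values:
print(score_performance_obligation("achieve 95% accuracy"))    # 1 (LOW)
print(score_performance_obligation("perform satisfactorily"))  # 3 (HIGH)
print(score_output_liability(allocated=False))                 # 3 (HIGH)
print(score_data_rights({"train_on_client_data"}, irrevocable=True))  # 3 (HIGH)
```

The specific thresholds matter less than the structure: each rule maps an observable contract feature to a score, and silence or vagueness defaults to the higher-risk bucket.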