LR-101 · Module 1

Liability in the AI Age

3 min read

When an AI system makes a mistake, who pays? This is not a philosophical question. It is a contract question, and if your contract does not answer it clearly, a court will answer it for you — and you will not enjoy the process. The liability chain in AI engagements involves at least three parties: the AI provider (who built the model), the operator (who deployed and configured it), and the client (who uses the output). Traditional software liability was relatively simple: if the software had a bug, the vendor was responsible. AI complicates this because the "bug" might be emergent behavior that no party specifically caused.

Consider a scenario. You deploy an AI assistant for a client. The assistant generates a recommendation that the client acts on. The recommendation turns out to be wrong, and the client suffers a financial loss. Who is liable? The AI provider, because the model generated the wrong output? You, the operator, because you configured and deployed it? The client, because they acted on AI output without verification? The honest answer is: it depends entirely on what the contracts say. And if the contracts are silent on AI-specific liability allocation, every party is exposed.

Do This

  • Explicitly allocate AI-specific liability in every engagement contract
  • Define who is responsible for AI output accuracy — and what "accuracy" means in context
  • Include AI output disclaimer language: the AI assists, it does not decide
  • Require human review of AI outputs for high-stakes decisions (a minimal sketch of one such review gate follows this list)
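
The sketch below is hypothetical Python, not language from any real contract, library, or product: it shows one way an operator might enforce the human-review requirement in software, blocking action on a high-stakes output until a named reviewer signs off and leaving an audit trail. All names here (AIRecommendation, approve, act_on, risk_tier) are illustrative assumptions invented for this module.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    # Hypothetical illustration only -- these names were invented for this
    # module and do not come from any real library or contract template.

    @dataclass
    class AIRecommendation:
        content: str
        risk_tier: str                      # e.g. "low" or "high", as defined in the contract
        model_version: str                  # provenance: which model produced this output
        reviewed_by: str | None = None      # named human reviewer, once sign-off happens
        reviewed_at: datetime | None = None

    def approve(rec: AIRecommendation, reviewer: str) -> AIRecommendation:
        # Record who reviewed the output and when; this audit trail is what
        # lets the parties allocate responsibility after the fact.
        rec.reviewed_by = reviewer
        rec.reviewed_at = datetime.now(timezone.utc)
        return rec

    def act_on(rec: AIRecommendation) -> None:
        # High-stakes outputs are blocked until a human has signed off.
        if rec.risk_tier == "high" and rec.reviewed_by is None:
            raise PermissionError("high-stakes AI output requires human review before action")
        print(f"Acting on output from {rec.model_version}, reviewed by {rec.reviewed_by or 'n/a'}")

In practice the point is less the code than the record it produces: if a client later acts on an unreviewed high-stakes output, the log shows exactly where the agreed process broke down, which is what the contract's liability allocation turns on.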

Avoid This

  • Leave liability allocation to general-purpose warranty language
  • Assume the AI provider's terms of service protect you — they protect the provider
  • Promise that AI outputs will be "accurate" without defining accuracy thresholds
  • Let clients believe AI recommendations are equivalent to professional advice