LR-201a · Module 1

The AI Review Workflow

4 min read

A contract review without AI takes six to eight hours for a standard MSA. A contract review with AI takes two to three hours. The time savings are real, but they are not the point. The point is coverage. A human reviewer fatigues. Attention degrades after page twenty. The clause on page thirty-seven that mirrors the problematic provision on page twelve — the one that creates a circular indemnification obligation — gets missed because the human brain is not optimized for cross-referencing forty pages of dense legal text under a Thursday deadline.

AI does not fatigue. It does not lose focus on page thirty-seven. It can hold the entire document in context simultaneously and identify patterns that a human reviewer would need to flag manually across multiple passes. The AI review workflow is not a replacement for human judgment. It is an augmentation that ensures the human reviewer receives the contract pre-diagnosed — provisions categorized, risk indicators flagged, cross-references mapped — so that human attention goes to interpretation and decision-making, not discovery.

  1. Stage 1: Document Ingestion. The AI receives the full contract, identifies document structure — articles, sections, definitions, exhibits — and creates an internal map of provision relationships. This is not summarization. It is structural analysis. The AI knows which definitions are referenced in which provisions and which provisions reference each other.
  2. Stage 2: Clause Classification. Every substantive provision is classified by type: indemnification, limitation of liability, IP assignment, data rights, termination, warranty, confidentiality, force majeure, and others. Classification is the foundation for risk scoring — you cannot assess risk for a provision you have not identified.
  3. Stage 3: Risk Flagging. The AI applies risk indicators to each classified provision based on known patterns — uncapped indemnification, broad IP assignment, one-sided termination rights, silent data retention. These flags are not verdicts. They are attention directors that tell the human reviewer where to focus.
  4. Stage 4: Human Review and Annotation. The human reviewer receives the pre-diagnosed contract and applies [RISK], [REDLINED], [RECOMMEND], or [CLEARED] annotations informed by the AI analysis but driven by professional judgment. The AI found the provisions. The human decides what to do about them.
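The four stages above can be sketched as a simple pipeline. This is a minimal illustration, not a working classifier: the clause keywords, risk patterns, and sample provisions are hypothetical stand-ins for what an AI model would produce, and Stage 1 (ingestion) is assumed to have already split the contract into provisions.

```python
import re
from dataclasses import dataclass, field

# Stage 2 stand-in: hypothetical keyword heuristics for clause classification.
CLAUSE_TYPES = {
    "indemnification": ["indemnify", "hold harmless"],
    "limitation_of_liability": ["liability shall not exceed", "in no event"],
    "termination": ["terminate this agreement"],
}

# Stage 3 stand-in: illustrative risk indicators keyed to clause types.
RISK_PATTERNS = {
    "uncapped_indemnification": ("indemnification", r"\bindemnify\b(?!.*\bcap\b)"),
    "one_sided_termination": ("termination", r"may terminate.*sole discretion"),
}

@dataclass
class Provision:
    section: str
    text: str
    clause_type: str = "unclassified"
    risk_flags: list = field(default_factory=list)
    annotation: str = ""  # set by the human reviewer in Stage 4

def classify(p: Provision) -> None:
    """Stage 2: assign a clause type from keyword matches."""
    lowered = p.text.lower()
    for ctype, keywords in CLAUSE_TYPES.items():
        if any(k in lowered for k in keywords):
            p.clause_type = ctype
            return

def flag_risks(p: Provision) -> None:
    """Stage 3: attach risk flags; these direct attention, they are not verdicts."""
    for flag, (ctype, pattern) in RISK_PATTERNS.items():
        if p.clause_type == ctype and re.search(pattern, p.text, re.IGNORECASE):
            p.risk_flags.append(flag)

# Hypothetical output of Stage 1 ingestion: provisions with section references.
contract = [
    Provision("7.2", "Vendor shall indemnify and hold harmless Customer from all claims."),
    Provision("12.1", "Customer may terminate this Agreement at its sole discretion."),
]

for p in contract:
    classify(p)    # Stage 2
    flag_risks(p)  # Stage 3

# Stage 4: the human reviewer annotates, informed by the flags but not bound to them.
contract[0].annotation = "[RISK]"
```

A production system would replace the keyword tables with model-driven classification, but the shape is the same: each stage enriches the provision record, and the human annotation is the last field written.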