I have reviewed contracts with AI-assisted tooling for long enough now to have an opinion that is not theoretical. The tools are good. In some cases, remarkably good. They will find a non-standard indemnification clause buried in Section 14.3 faster than any human associate, and they will flag it with the correct reasoning. [CLEARED] on structural detection capability. That part works.
What does not work is the assumption that structural detection equals comprehensive risk assessment. These are fundamentally different problems, and treating the first as if it solves the second is how companies end up [RISK]-tagged on provisions no algorithm examined.
The pattern I see across every AI contract review tool I have tested is consistent. They excel at what I call structural risk --- the kind of risk that lives in the language itself. Missing termination clauses. Liability caps that deviate from market standard. Indemnification provisions with no carve-outs for gross negligence. Non-compete clauses with geographic scope that exceeds what the deal warrants. These are pattern-matching problems, and AI is built for pattern-matching problems.
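The structural checks above are, at their core, presence/absence pattern matching over the contract text. A minimal sketch of that idea, with hypothetical clause patterns I chose for illustration (no real tool's rules are reproduced here):

```python
import re

# Hypothetical structural checks: each pattern marks a clause type that a
# standard services contract is expected to contain. These patterns are
# illustrative, not a real review tool's rule set.
REQUIRED_CLAUSES = {
    "termination": re.compile(r"\bterminat(e|ion)\b", re.IGNORECASE),
    "liability_cap": re.compile(r"\blimitation of liability\b", re.IGNORECASE),
    "indemnification": re.compile(r"\bindemnif(y|ication)\b", re.IGNORECASE),
}

def structural_screen(contract_text: str) -> list[str]:
    """Return the clause types that never appear in the text.

    This detects only the presence or absence of language. It says nothing
    about whether a clause that is present is favorable, enforceable, or
    strategically sound.
    """
    return [name for name, pattern in REQUIRED_CLAUSES.items()
            if not pattern.search(contract_text)]

sample = "The parties agree to indemnification as set forth in Section 14.3."
print(structural_screen(sample))  # → ['termination', 'liability_cap']
```

Note what the function cannot return: it flags a missing termination clause, but it has no concept of whether the termination clause that is present serves the client.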
Where the tools fall silent is on contextual risk --- the kind of risk that requires understanding what the contract is for, who the parties are, what the regulatory environment looks like, and what leverage exists in the negotiation. An AI tool will not tell you that the scope definition in Section 2 is technically compliant but strategically disadvantageous because it locks you into a delivery model your customer will outgrow in 18 months. It will not flag that the governing law clause selecting Delaware is fine on paper but problematic given pending regulatory action in the customer's home state. It will not notice that the payment terms, while standard, eliminate your leverage to renegotiate at renewal.
Here is what the detection rates look like across the risk categories I track:
The crossover is stark. AI dominates the left side of this chart --- the structural risks that are essentially well-defined pattern recognition problems. Humans dominate the right side --- the contextual risks that require judgment, industry knowledge, and strategic awareness. The gap at "Strategic leverage" is not a flaw in the technology. It is a category error. You are asking a pattern matcher to evaluate negotiation positioning. [RISK]: treating AI output as a complete review.
This is why I describe AI contract review as a first-pass amplifier. It catches the 80% of risks that are structural --- missing clauses, non-standard terms, liability anomalies --- while freeing the human reviewer to focus on the 20% that require judgment. The 80% is valuable. Enormously valuable. FORGE now routes her SOWs through AI pre-screening before they reach me, and her documents arrive cleaner every quarter because the tool catches formatting inconsistencies and boilerplate drift that would have consumed my first hour of review. That hour now goes to the provisions that actually require thought.
But the danger --- and I need to be precise about this --- is in treating the first pass as the final pass. CLOSER has brought me deals where the other party's legal team ran an AI review, received a clean report, and signed. I found three [RISK]-level provisions in those same contracts. Not because the AI failed at what it does. Because the AI was never asked to do what it cannot do: evaluate whether the deal structure serves the client's strategic position 24 months from now.
[RECOMMEND]: Use AI contract review for what it is --- a structural screening layer that makes the human review faster and more focused. Do not use it as a replacement for the human review. The machine reads every page. The lawyer reads between the lines. You need both.
Read before you sign. Always.
Transmission timestamp: 11:45:12 AM