CC-301d · Module 3
PR Review Workflows
4 min read
Automated PR review is most effective when it focuses on what AI does well and defers what it does poorly.

Claude excels at:
- Detecting type inconsistencies across files
- Identifying unused imports and dead code
- Catching error-handling gaps (try/catch blocks that swallow errors silently)
- Finding naming convention violations
- Verifying that changes align with architectural patterns documented in CLAUDE.md

Claude struggles with:
- Evaluating whether the overall approach is the right one
- Judging performance implications without profiling data
- Understanding business requirements that are not documented in the codebase
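The error-handling gap mentioned above is worth seeing concretely. A hedged sketch (the `db` object and function names are hypothetical) of the swallowed-error pattern an automated reviewer can reliably flag, next to the reviewed version:

```python
import logging

logger = logging.getLogger(__name__)

# Anti-pattern an automated reviewer can flag: the exception is
# swallowed silently, so callers never learn the save failed.
def save_user_bad(user, db):
    try:
        db.save(user)
    except Exception:
        pass  # error disappears without a trace

# Reviewed version: record the failure with context, then re-raise
# so the caller can decide how to recover.
def save_user(user, db):
    try:
        db.save(user)
    except Exception:
        logger.exception("failed to save user %r", user)
        raise
```

Flagging the first shape is mechanical pattern-matching, which is exactly where automated review earns its keep.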
This asymmetry defines the division of labor. The automated review handles the mechanical checks — the things that are tedious for humans but straightforward for AI. The human review handles the judgment calls — the things that require domain expertise, business context, and architectural intuition. Together, they produce a more thorough review than either could achieve alone.
Review comment formatting determines whether developers actually read and act on the feedback. The best practice is structured comments with severity levels. Critical comments (potential bugs, security issues, data loss risks) should be clearly marked and demand attention. Suggestions (style improvements, refactoring opportunities, alternative approaches) should be clearly optional. Informational comments (explanations, documentation links, context) should be non-blocking.
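A comment template along these lines might look as follows (the labels, file path, and findings are illustrative, not a fixed standard):

```markdown
**[Critical]** `src/api/users.ts:42` — `deleteUser` runs the DELETE before
checking ownership; a non-owner can remove another user's account.

**[Suggestion]** Consider extracting the retry loop into a shared helper;
the same pattern appears in three handlers.

**[Info]** This endpoint is covered by the rate-limit middleware, so no
per-route limiter is needed here.
```

The bracketed label does the work: a developer skimming ten comments can triage in seconds.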
CLAUDE.md is your leverage point for review quality. Add a review section that specifies what the automated reviewer should focus on: "When reviewing PRs, check for: (1) TypeScript strict mode violations, (2) missing error boundaries in React components, (3) API endpoints without input validation, (4) components over 200 lines that should be split." These instructions compound — every PR gets reviewed against the same criteria, and the criteria improve as you discover new patterns to check for.
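In CLAUDE.md, that might take the shape of a dedicated section like this (a sketch — adapt the criteria to your codebase, and append a new item each time a review misses something):

```markdown
## PR review

When reviewing PRs, check for:

1. TypeScript strict mode violations
2. Missing error boundaries in React components
3. API endpoints without input validation
4. Components over 200 lines that should be split

Mark each finding as Critical, Suggestion, or Info.
Critical findings must cite the file and line.
```

Keeping the checklist in the repo, rather than in someone's head, is what makes the criteria compound.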
Do This
- Focus automated review on type safety, error handling, and pattern compliance
- Add review criteria to CLAUDE.md so checks compound over time
- Use severity levels: Critical (must fix), Suggestion (optional), Info (context)
- Let humans handle architecture, performance, and business logic review
Avoid This
- Rely on automated review for architectural or performance judgments
- Post dozens of nitpick comments that overwhelm the developer
- Skip CLAUDE.md review instructions — generic review is less useful
- Auto-approve PRs based solely on automated review passing
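Tying these points together: even a simple merge gate can encode the rule that automated findings inform, but never replace, human approval. A minimal sketch with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str  # "critical" | "suggestion" | "info"
    message: str

def merge_decision(findings, human_approved):
    """Critical findings block the merge; Suggestions and Info are advisory.
    A clean automated pass alone never approves — human sign-off is required.
    """
    if any(f.severity == "critical" for f in findings):
        return "blocked"
    if not human_approved:
        return "awaiting human review"
    return "approved"
```

The key property is the second branch: an empty findings list still yields "awaiting human review", so a passing automated check cannot auto-approve.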