CW-201c · Module 3
Governance & Measurement
4 min read
When one person uses Claude, governance is simple — they decide what is appropriate. When twenty people use Claude, you need shared rules about what Claude can and cannot be used for, what quality standards apply, and how to handle sensitive information.
Governance does not mean bureaucracy. It means three things. First: data classification. What data can be shared with Claude? Public information, internal analyses, and anonymized datasets are generally safe. Client proprietary data, PII, financial records subject to audit, and anything covered by NDA require a decision from your legal or compliance team. The classification should be simple — green (share freely), yellow (share with caution, strip identifiers), red (never share). Anything more complex will not be followed.
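To make the traffic-light scheme concrete, here is a minimal sketch of how a team might encode it as a shared lookup for onboarding docs or internal tooling. The yellow-category examples and the `classify` helper are illustrative assumptions, not part of any Claude product or API.

```python
# Hypothetical traffic-light classification for data shared with Claude.
# Green/red examples come from the policy above; yellow examples are assumed.
CLASSIFICATION = {
    "green": {  # share freely
        "public information", "internal analyses", "anonymized datasets",
    },
    "yellow": {  # share with caution: strip identifiers first
        "internal metrics", "draft strategy notes",
    },
    "red": {  # never share without legal/compliance sign-off
        "client proprietary data", "pii",
        "financial records subject to audit", "nda-covered material",
    },
}

def classify(data_type: str) -> str:
    """Return 'green', 'yellow', or 'red' for a data type."""
    normalized = data_type.strip().lower()
    for level, types in CLASSIFICATION.items():
        if normalized in types:
            return level
    return "red"  # fail closed: unclassified data is treated as sensitive

print(classify("Anonymized datasets"))      # green
print(classify("client proprietary data"))  # red
```

The fail-closed default mirrors the rule above: anything not explicitly classified goes to legal or compliance rather than into a session.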
Second: quality gates. Not every Claude output needs the same level of review. Internal brainstorming gets no review — it is thinking out loud. Internal deliverables (team reports, project plans) get peer review. External deliverables (client proposals, published content, financial reports) get formal QA through the team's QA skills and a human sign-off. The quality gate should match the blast radius — how many people will see this, and what happens if it is wrong?
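The same idea fits on one screen as a review matrix. The sketch below is a hypothetical mapping, with deliverable-type names invented for illustration; the key design choice is that anything unlisted falls to the strictest gate.

```python
# Hypothetical mapping from deliverable type to required review level,
# following the blast-radius rule above. Type names are illustrative.
REVIEW_LEVELS = {
    "internal_brainstorm": "none",         # thinking out loud
    "team_report":         "peer_review",  # internal deliverable
    "project_plan":        "peer_review",
    "client_proposal":     "formal_qa",    # external: QA skill + human sign-off
    "published_content":   "formal_qa",
    "financial_report":    "formal_qa",
}

def required_review(deliverable_type: str) -> str:
    # Fail closed: an unlisted deliverable gets the strictest gate.
    return REVIEW_LEVELS.get(deliverable_type, "formal_qa")

print(required_review("team_report"))    # peer_review
print(required_review("press_release"))  # formal_qa (unlisted, fail closed)
```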
Third: measurement. You need to know whether your investment in Claude is producing returns. The metrics that matter are not "number of sessions" or "tokens consumed." They are: time saved per deliverable (measure before and after), deliverable quality scores (track QA pass rates over time), and team adoption depth (not just how many people use Claude, but how many use it for substantive work versus trivial queries). Track these quarterly. If time savings are not materializing, the problem is in the skills and prompts, not in the tool.
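As a worked example of the value metrics, the sketch below computes time saved per deliverable and a QA pass rate from before-and-after measurements. All figures are made up for illustration; real baselines have to be gathered before rollout.

```python
# Hypothetical quarterly metrics. All numbers are invented for illustration.
baseline_hours = {"client_proposal": 12.0, "team_report": 6.0}  # pre-Claude
current_hours  = {"client_proposal": 7.5,  "team_report": 4.0}  # post-Claude

def time_saved_per_deliverable(before: dict, after: dict) -> dict:
    """Hours saved per deliverable type: the value metric, not tokens."""
    return {k: before[k] - after[k] for k in before if k in after}

def qa_pass_rate(passed: int, total: int) -> float:
    """Share of deliverables clearing formal QA on first submission."""
    return passed / total if total else 0.0

print(time_saved_per_deliverable(baseline_hours, current_hours))
# {'client_proposal': 4.5, 'team_report': 2.0}
print(f"QA pass rate: {qa_pass_rate(18, 20):.0%}")  # QA pass rate: 90%
```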
Do This
- Classify data as green, yellow, or red — simple enough for everyone to remember
- Match quality gates to blast radius — internal brainstorming needs no review, client deliverables need QA
- Measure time saved per deliverable, not sessions or tokens — those are cost metrics, not value metrics
- Review governance quarterly and simplify anything that is being circumvented
Avoid This
- Create a 20-page AI usage policy that nobody reads — keep it to one page
- Require the same review process for an internal brainstorm as a client proposal
- Measure adoption by license count — a license that nobody uses is not adoption
- Treat governance as permanent — it should evolve as the team's comfort and skill with Claude grow