# Code-review prompts (security / perf / readability)

## The one-line verdict
The end of every review prompt should force a verdict. Without one, you get a list of issues with no overall stance — useful information, but not a recommendation. The verdict is the difference between "here's what I noticed" and "here's what to do."
Three verdicts cover almost every review:
| Verdict | When | What happens next |
|---|---|---|
| APPROVE | No HIGH or MED findings; LOW issues are optional | Merge |
| REQUEST_CHANGES | MED findings whose fixes should land before merge | Author fixes, re-requests review |
| BLOCK | HIGH findings (security, data loss, correctness) | Author cannot merge until resolved |
Look at how the captured review from lesson 1 used these:
```
Verdict: BLOCK — Critical SQL injection vulnerability must be fixed before merge.
```
Captured from Claude Sonnet 4.5 (claude-sonnet-4-5) on 2026-04-27. Re-runs may differ slightly.
One word, one sentence of justification. The reviewer reading this knows immediately what action to take. The justification gives them a one-line message they can paste into the PR comment.
The discipline of forcing a verdict has a useful second-order effect: it makes the model integrate its findings before answering. When you ask "list issues + give a verdict" instead of "list issues," the model has to weigh the issues against each other to decide the verdict. That weighting often surfaces priorities the model wouldn't have surfaced otherwise. A SQL injection isn't just one of three findings — it's the finding that drives the verdict. The verdict line forces that hierarchy.
The verdict decision tree maps findings to action: any HIGH finding forces BLOCK; otherwise, any MED finding means REQUEST_CHANGES; otherwise, APPROVE.
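A minimal sketch of that tree in Python, assuming findings carry a severity label; the `Finding` type and the example finding are illustrative, not from the lesson:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: str  # "HIGH" | "MED" | "LOW" (mirrors the table above)

def verdict(findings: list[Finding]) -> str:
    """Walk the decision tree: the highest severity present decides the verdict."""
    severities = {f.severity for f in findings}
    if "HIGH" in severities:
        return "BLOCK"            # security, data loss, correctness
    if "MED" in severities:
        return "REQUEST_CHANGES"  # fixes should land before merge
    return "APPROVE"              # clean, or LOW issues only

# Hypothetical finding for illustration:
print(verdict([Finding("SQL injection in search endpoint", "HIGH")]))  # BLOCK
```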
A good team-level review prompt extends the verdict with one more constraint:
End with a 1-line verdict in this exact format:

```
Verdict: APPROVE
— or —
Verdict: REQUEST_CHANGES (<MED severity findings>)
— or —
Verdict: BLOCK (<one-sentence reason>)
```
The format constraint ensures every review you do is parseable. If you're feeding reviews into a CI bot or a stats dashboard, you can grep for `^Verdict:` and aggregate. If you're just reading them, the consistent shape lets your eye find the verdict instantly.
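As a sketch, a few lines of Python can do that aggregation; the `reviews/` directory layout and the regex are assumptions for illustration:

```python
import re
from collections import Counter
from pathlib import Path

# Matches the exact format the prompt enforces, e.g.
# "Verdict: BLOCK (Critical SQL injection ...)"
VERDICT_RE = re.compile(r"^Verdict: (APPROVE|REQUEST_CHANGES|BLOCK)\b", re.MULTILINE)

counts: Counter[str] = Counter()
for review in Path("reviews").glob("*.md"):  # assumed: one saved review per file
    match = VERDICT_RE.search(review.read_text())
    counts[match.group(1) if match else "UNPARSEABLE"] += 1

print(dict(counts))  # e.g. {'APPROVE': 12, 'REQUEST_CHANGES': 4, 'BLOCK': 1}
```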
A subtler version: ask the model to commit to a verdict first, then justify it. The standard order is "findings, then verdict." The reverse — "verdict, then findings" — is sometimes more honest because the model isn't padding the findings list to justify the verdict it's already decided. Try both on the same code; you'll find one shape feels more truthful for your team's bar.
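A hedged sketch of the verdict-first shape; the wording is illustrative, not a template from the lesson:

```
Start your review with the verdict, in this exact format:
Verdict: APPROVE | REQUEST_CHANGES | BLOCK (<one-sentence reason>)
Then list the findings that justify it, highest severity first.
```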
When you disagree with the verdict, that's signal. The model has surfaced its reasoning and you can override it explicitly. Override looks like a comment on the PR: "Model flagged BLOCK on SQL injection; the call site is internal-only and trusted. Downgrading to LOW." Your override is now part of the review record.
The point of the verdict is not to outsource the decision. It is to make the decision visible — yours or the model's — so it can be debated, refined, or overridden in the open. That's how reviews scale to a team.
Module 5 next: tool-specific prompt styles for Cursor, Aider, Copilot, and Claude Code.