Review Patterns

Traditional code review doesn’t scale to agentic output. If agents produce 10x the code, line-by-line review becomes the bottleneck. The solution is to shift review upstream and to use agents for review itself.

          /\
         /  \           Research
        / ×× \          1 bad line = 1000s of bad code lines
       /______\
      /        \        Plan
     /  ×××××   \       1 bad line = 100s of bad code lines
    /____________\
   /              \     Code
  / ××××××××××     \    1 bad line = 1 bad line
 /__________________\

Focus review effort at the highest leverage point: research and planning, not code.

Before the agent plans anything, validate the research:

  • Are the findings accurate?
  • Are key dependencies identified?
  • Are there alternatives the research missed?
  • Is the scope appropriate?

Time investment: 5-10 minutes
Impact: Prevents fundamentally wrong approaches

Before the agent writes code, validate the plan:

  • Does the approach fit the architecture?
  • Are the steps ordered correctly?
  • Are verification criteria sufficient?
  • Are there missing edge cases?

Time investment: 10-15 minutes
Impact: Prevents structural mistakes across the entire implementation

After implementation, validate the code:

  • Does it match the plan?
  • Do tests cover the requirements?
  • Are there security concerns?
  • Does it follow existing patterns?

Time investment: Varies by size
Impact: Catches implementation bugs

Use separate agents for writing and reviewing:

# Session A: Writer
Implement the rate limiter following the plan.
# Session B: Reviewer (fresh context, unbiased)
Review the rate limiter implementation at src/middleware/rateLimit.ts.
Check for: race conditions, edge cases, security issues,
consistency with existing middleware patterns.
Provide specific file:line references for each finding.

The reviewer runs in a fresh context — it’s not biased by the implementation decisions.
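
Programmatically, the writer/reviewer split can be sketched as two independent sessions, so the reviewer only ever sees the artifact, never the writer’s reasoning. Here `spawnSession` is a hypothetical stand-in for however your tool starts a fresh agent context (stubbed for illustration):

```typescript
// Sketch: writer and reviewer run as separate sessions so the
// reviewer is not biased by the implementation decisions.
// `spawnSession` is a hypothetical stand-in for your agent runner.

type Session = (prompt: string) => Promise<string>;

// Stub runner: in practice this would start a fresh agent context.
const spawnSession = (): Session => async (prompt) =>
  `completed: ${prompt.slice(0, 40)}`;

async function writeAndReview(): Promise<{ impl: string; review: string }> {
  const writer = spawnSession();   // Session A
  const reviewer = spawnSession(); // Session B: fresh context, unbiased

  const impl = await writer(
    "Implement the rate limiter following the plan."
  );
  const review = await reviewer(
    "Review the rate limiter implementation at src/middleware/rateLimit.ts."
  );
  return { impl, review };
}
```

The key property is that `reviewer` shares no state with `writer` — only the prompt and the code on disk cross the boundary.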

Define specialized reviewers in your agent’s agents/ folder (see Tool Configuration Reference for exact paths):

agents/security-reviewer.md
---
name: security-reviewer
description: OWASP-focused security review
tools: Read, Grep, Glob
# use a cost-efficient model
---
Check for OWASP Top 10 vulnerabilities.

agents/perf-reviewer.md
---
name: perf-reviewer
description: Performance-focused code review
tools: Read, Grep, Glob, Bash
# use a cost-efficient model
---
Check for: N+1 queries, missing indexes, unbounded loops,
memory leaks, unnecessary allocations.
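
To make the perf-reviewer’s first check concrete, here is a minimal sketch of the N+1 shape it looks for, using an in-memory store in place of a real database (all names here are hypothetical):

```typescript
// N+1 sketch: one query for the list, then one query per item.
// With a real database, each `queriesIssued` increment is a round trip.

interface User { id: number; name: string }
interface Post { userId: number; title: string }

const users: User[] = [{ id: 1, name: "a" }, { id: 2, name: "b" }];
const posts: Post[] = [{ userId: 1, title: "t1" }, { userId: 2, title: "t2" }];
let queriesIssued = 0;

// N+1: 1 query for users + N queries for posts.
function postsPerUserNPlusOne(): Map<number, Post[]> {
  queriesIssued++; // SELECT * FROM users
  const result = new Map<number, Post[]>();
  for (const u of users) {
    queriesIssued++; // SELECT * FROM posts WHERE user_id = ?
    result.set(u.id, posts.filter((p) => p.userId === u.id));
  }
  return result;
}

// The fix a reviewer would suggest: batch into one query.
function postsPerUserBatched(): Map<number, Post[]> {
  queriesIssued++; // SELECT * FROM users
  queriesIssued++; // SELECT * FROM posts WHERE user_id IN (...)
  const result = new Map<number, Post[]>();
  for (const u of users) {
    result.set(u.id, posts.filter((p) => p.userId === u.id));
  }
  return result;
}
```

The N+1 version issues 1 + N queries (3 here); the batched version always issues 2, regardless of how many users exist.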

For critical code, use multiple specialized reviewers:

Run these reviews in parallel:
1. Use the security-reviewer agent to check for vulnerabilities
2. Use the perf-reviewer agent to check for performance issues
3. Use a sub-agent to verify all test scenarios from the spec are covered
Synthesize the findings into a single review summary.
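
The parallel fan-out above can be sketched as concurrent sub-agent calls whose findings are merged into one summary. `runAgent` is a hypothetical stand-in for your tool’s sub-agent dispatch, stubbed here so the orchestration shape is visible:

```typescript
// Sketch: run specialized reviewers concurrently, then synthesize.
// `runAgent` stands in for dispatching a sub-agent (stubbed here).

type Finding = { reviewer: string; notes: string[] };

const runAgent = async (name: string, task: string): Promise<Finding> => ({
  reviewer: name,
  notes: [`${name}: reviewed "${task}"`], // stub result
});

async function reviewInParallel(target: string): Promise<string> {
  const findings = await Promise.all([
    runAgent("security-reviewer", `check ${target} for vulnerabilities`),
    runAgent("perf-reviewer", `check ${target} for performance issues`),
    runAgent("test-coverage", `verify spec scenarios for ${target}`),
  ]);
  // Synthesize the findings into a single review summary.
  return findings.flatMap((f) => f.notes).join("\n");
}
```

Because the reviewers are independent, `Promise.all` lets them run concurrently; the synthesis step is the only point where their outputs meet.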
| Category     | Check                                            |
|--------------|--------------------------------------------------|
| Correctness  | Does it match the spec/plan?                     |
| Edge cases   | Are boundary conditions handled?                 |
| Tests        | Do tests cover all specified scenarios?          |
| Security     | Any injection, auth, or data exposure risks?     |
| Performance  | Any N+1 queries, unbounded operations, or leaks? |
| Patterns     | Does it follow existing codebase conventions?    |
| Dependencies | Any unnecessary new dependencies added?          |
| Scope        | Does it only change what was specified?          |