Code Review Practices That Work
Code review is where theory meets practice. It's where teams align on quality, share knowledge, and catch bugs before they ship. Done well, it's invaluable. Done poorly, it's a bureaucratic nightmare that slows everyone down.
We've iterated on our code review process for years. Here's what we've learned.
Why Code Reviews Matter
Before the how, the why:
Bug prevention: A second pair of eyes catches issues the author missed. Not all bugs, but enough to matter.
Knowledge sharing: Reviews spread understanding of the codebase. When you review someone's work in an area you don't know well, you learn it.
Consistency: Reviews enforce standards. Not through documentation that no one reads, but through active feedback on real code.
Collective ownership: When multiple people review code, multiple people understand it. No more single points of failure.
What to Review For
Not everything deserves equal attention. Prioritize your review time:
High Priority
- Logic errors: Does the code actually do what it's supposed to do?
- Security issues: Input validation, authentication checks, data exposure
- Performance problems: N+1 queries, unnecessary computations, memory leaks
- Missing edge cases: What happens with empty input? Null values? Network failures?
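The edge-case bullet is easiest to see in code. Here's a minimal sketch (averageScore is a hypothetical function invented for illustration, not from any real codebase) of the questions a reviewer should be asking: what happens with missing input, empty input, or null entries?

```typescript
// Hypothetical helper illustrating edge cases a reviewer should probe.
function averageScore(scores: (number | null)[] | null): number {
  // Edge case 1: the whole input is missing or empty.
  if (!scores || scores.length === 0) return 0;
  // Edge case 2: individual entries may be null.
  const valid = scores.filter((s): s is number => s !== null);
  // Edge case 3: every entry was null, so avoid dividing by zero.
  if (valid.length === 0) return 0;
  return valid.reduce((sum, s) => sum + s, 0) / valid.length;
}
```

If the version under review only handled the happy path, each of those three branches is exactly the kind of comment a good reviewer leaves.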
Medium Priority
- Architecture decisions: Does the structure make sense? Will it scale?
- Test coverage: Are the important paths tested?
- Naming: Can you understand what things do from their names?
Low Priority
- Style nitpicks: These should be handled by linters, not humans
- Personal preferences: Different isn't wrong
How to Give Feedback
The way you communicate matters as much as what you communicate.
Be Specific
Bad: "This is confusing."
Good: "The variable name temp doesn't indicate what it stores. Consider pendingUserUpdates."
Explain the Why
Bad: "Move this function to a separate file."
Good: "This file is getting long and handles multiple concerns. Moving the validation logic to validators.ts would make both files easier to navigate."
Offer Alternatives
Bad: "Don't do it this way."
Good: "This approach has O(n²) complexity. Using a Set for lookup instead of array.includes() would make it O(n)."
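To make that complexity point concrete, here's what the before and after might look like (sharedIdsSlow and sharedIdsFast are hypothetical names for illustration):

```typescript
// Before: array.includes() scans b for every element of a — O(n²) overall.
function sharedIdsSlow(a: string[], b: string[]): string[] {
  return a.filter((id) => b.includes(id));
}

// After: a Set gives O(1) average-case lookup, so the whole pass is O(n).
function sharedIdsFast(a: string[], b: string[]): string[] {
  const lookup = new Set(b);
  return a.filter((id) => lookup.has(id));
}
```

A review comment that includes this kind of sketch gives the author something actionable instead of just a complaint.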
Distinguish Requirements from Suggestions
Prefix optional suggestions with "nit:" or "optional:" so it's clear what blocks approval and what's merely food for thought.
Acknowledge Good Work
Point out clever solutions, well-written tests, and clean implementations. Reviews shouldn't only be about problems.
How to Receive Feedback
The author's attitude shapes review culture as much as the reviewer's.
Assume good intent: The reviewer wants to help, not attack. Read feedback as collaboration, not criticism.
Ask clarifying questions: If feedback is unclear, ask for specifics. Don't guess what the reviewer meant.
Explain your reasoning: If you disagree, explain why. "I considered that approach but chose this because..." opens dialogue.
Don't take it personally: The code is reviewed, not you. Separate your identity from your output.
Process Guidelines
Keep PRs Small
The most important rule. Large PRs get rubber-stamped because no one has time to really review them. Target under 400 lines of changes. If a feature is bigger, break it into logical chunks.
Provide Context
Good PR descriptions include:
- What the change does
- Why it's needed (link to issue/ticket)
- How to test it
- Anything the reviewer should pay special attention to
Review Promptly
Blocking on reviews kills momentum. Our target: first review within 4 hours during business hours. Not always possible, but that's the goal.
Limit Reviewers
Two reviewers are usually enough. More leads to diffusion of responsibility. One person who knows the area well plus one who doesn't is a good mix.
Use Labels and Status
Make PR state clear. "Draft" for work in progress. "Ready for review" when it's actually ready. "Changes requested" and "Approved" to track progress.
Automation
Don't review what machines can check:
- Formatting: Prettier runs on commit
- Linting: ESLint runs in CI
- Type errors: TypeScript catches them
- Test failures: CI runs before review
- Security scanning: Automated tools flag known vulnerabilities
By the time a human reviews, the easy stuff should already be clean.
AI in Code Review
GitHub Copilot and similar tools now offer automated PR summaries and suggestions. Our take: they're useful as a first pass, not a replacement.
AI catches patterns it's seen before. It misses novel bugs, business logic errors, and architectural mismatches. Use AI assistance as a supplement to human review, not a substitute.
Common Antipatterns
Rubber stamping: Approving without actually reading. This defeats the purpose.
Gatekeeping: Blocking PRs over trivial preferences. This slows the team and breeds resentment.
Review bombing: Dumping 50 comments at once. Separate the blocking issues from the suggestions, and don't overwhelm the author.
Delayed reviews: Letting PRs sit for days. Context is lost, merge conflicts accumulate.
Missing the forest for the trees: Fixating on style while missing logic errors. Prioritize what matters.
Building the Culture
Good code review is cultural, not procedural. Some things that help:
- Senior devs model constructive feedback
- Everyone reviews, not just leads
- Reviews are prioritized like other work
- Retrospectives include discussion of review quality
- Appreciation when someone catches a real bug
The goal is a team where everyone wants their code reviewed because they trust the feedback will be valuable and the process will be respectful.
Start Somewhere
If your current reviews are chaotic, pick one thing to improve. Maybe it's smaller PRs. Maybe it's faster turnaround. Maybe it's better descriptions. Improve incrementally.
Good code review is a skill. Like any skill, it develops with practice and intention.