Code Review Checklist

A structured list of criteria that reviewers verify during code review, covering correctness, security, performance, readability, and test coverage.

What Is a Code Review Checklist?

A code review checklist is a predefined set of criteria that a reviewer works through when examining a pull request. Rather than relying on memory or intuition, the reviewer uses the checklist to systematically evaluate the change against categories like correctness, security, performance, readability, maintainability, and test coverage. The goal is to ensure consistent review quality regardless of who is reviewing or how much time pressure they are under.

Checklists are a well-established technique for reducing errors in high-stakes fields. Surgeons use pre-operative checklists. Pilots use pre-flight checklists. Software code review checklists serve the same purpose: they compensate for the limits of human attention by converting implicit knowledge (“I should check for SQL injection”) into explicit, repeatable steps.

A good code review checklist is not a bureaucratic form to fill out. It is a concise reference that guides the reviewer’s attention to the most important aspects of the change. Teams typically customize their checklist to reflect their specific technology stack, risk profile, and past incidents. A team that has experienced production outages from unhandled null values will include null-safety as an explicit checklist item. A team handling financial data will have checklist items specific to audit logging and data integrity.

How It Works

A typical code review checklist is organized into categories. Here is an example that covers the most common dimensions:

## Code Review Checklist

### Correctness
- [ ] Does the code do what the PR description says it does?
- [ ] Are edge cases handled (empty inputs, nulls, boundary values)?
- [ ] Is error handling appropriate (no swallowed exceptions)?
- [ ] Are race conditions or concurrency issues addressed?

### Security
- [ ] Is user input validated and sanitized?
- [ ] Are SQL queries parameterized (no string interpolation)?
- [ ] Are authentication and authorization checks in place?
- [ ] Is sensitive data (passwords, tokens, PII) handled safely?

### Performance
- [ ] Are there unnecessary database queries (N+1 problem)?
- [ ] Are expensive operations cached where appropriate?
- [ ] Will this change scale under expected load?
- [ ] Are there potential memory leaks?

### Readability
- [ ] Are variable and function names descriptive?
- [ ] Is the code structured in a way that is easy to follow?
- [ ] Are complex sections documented with comments?
- [ ] Does the code follow the team's style conventions?

### Testing
- [ ] Are there tests for the new or changed behavior?
- [ ] Do tests cover edge cases, not just the happy path?
- [ ] Are tests readable and maintainable?
- [ ] Do all existing tests still pass?

### Maintainability
- [ ] Is the code DRY without being overly abstracted?
- [ ] Are dependencies appropriate (no unnecessary libraries)?
- [ ] Will future developers understand this code in 6 months?
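Several items above can be illustrated concretely. The security item "Are SQL queries parameterized?" is the clearest example: the difference between interpolating user input into a query string and binding it as a parameter is exactly what a reviewer is looking for. A minimal sketch using Python's built-in `sqlite3` module (the table and data here are illustrative):

```python
import sqlite3

# Illustrative in-memory database with a single table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "1"  # pretend this came from a request

# What the checklist item flags: string interpolation lets the input
# rewrite the query itself (e.g. "1 OR 1=1").
# query = f"SELECT name FROM users WHERE id = {user_input}"

# What it asks for instead: the driver binds the value as data,
# never as SQL, so the query structure cannot change.
row = conn.execute(
    "SELECT name FROM users WHERE id = ?", (user_input,)
).fetchone()
print(row[0])  # alice
```

A reviewer working through the checklist scans each query the PR touches for the interpolated form and asks for the bound-parameter form instead.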

Teams integrate checklists into their workflow in different ways. Some add the checklist as a pull request template so that it appears automatically when a new PR is created. Others use it as a mental framework during review. AI code review tools like CodeRabbit can be configured to evaluate PRs against a team’s checklist automatically, flagging items that need human attention.
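For the pull request template approach, GitHub reads a file at `.github/PULL_REQUEST_TEMPLATE.md` and pre-fills every new PR description with it (GitLab has an equivalent under `.gitlab/merge_request_templates/`). A trimmed sketch of such a template, using a subset of the items above:

```markdown
## Review Checklist

### Correctness
- [ ] Edge cases handled (empty inputs, nulls, boundary values)
- [ ] No swallowed exceptions

### Security
- [ ] SQL queries parameterized
- [ ] Auth checks in place

### Testing
- [ ] Tests cover the new or changed behavior
```

Because the checkboxes render interactively in the PR description, both the author and the reviewer can tick items as they verify them.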

Why It Matters

Without a checklist, review quality depends entirely on the individual reviewer’s experience, attention, and current cognitive load. A senior developer reviewing a PR on Monday morning after a good night’s sleep will catch different issues than the same developer reviewing at 4 PM on Friday. A checklist provides a consistent baseline that does not fluctuate with human factors.

Research consistently supports this. A study at Cisco Systems found that teams using structured review checklists caught 14% more defects per review session than teams doing ad-hoc review. Google’s engineering practices documentation explicitly recommends checklists as a mechanism for ensuring review thoroughness, particularly for security-sensitive changes.

Checklists also accelerate the onboarding of new reviewers. Junior developers joining a team often do not know what to look for in a review. A checklist gives them a concrete starting point, transforming code review from an intimidating open-ended task into a structured, learnable process.

Perhaps most importantly, checklists create institutional memory. When a production incident reveals a class of bugs that reviews should have caught (for example, missing input validation on a public API), the team adds that category to the checklist. Over time, the checklist becomes a living document that encodes the team’s hard-won lessons.

Best Practices

  1. Keep the checklist concise. A checklist with 50 items will be ignored. Aim for 15-25 items grouped into clear categories. If a particular category grows too large, spin it off into a specialized checklist for high-risk areas (e.g., a security-specific checklist for auth changes).

  2. Embed the checklist in your PR template. If reviewers have to go find the checklist, they will skip it. Add it directly to your GitHub or GitLab pull request template so that it appears every time a PR is created.

  3. Customize for your stack and history. Generic checklists are a good starting point, but the real value comes from items specific to your codebase. If your team has had issues with timezone handling, add a checklist item for it. If your API requires backward compatibility, make that explicit.

  4. Automate what you can. Items like “Does the code follow style conventions?” and “Do all tests pass?” should be enforced by CI, linters, and formatters — not by human reviewers. Reserve the checklist for judgment calls that require human reasoning.
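Even the checklist itself can be partially automated. As an illustration (this is a hypothetical helper, not a feature of any particular tool), a small bot could parse the PR description and flag checkboxes that were never ticked:

```python
import re

def unchecked_items(pr_body: str) -> list[str]:
    """Return the text of checklist items left unchecked in a PR body.

    Matches GitHub-style markdown task list syntax: '- [ ] item'.
    """
    return re.findall(r"^- \[ \] (.+)$", pr_body, flags=re.MULTILINE)

# Example PR description with one completed and two open items
body = """\
- [x] Tests cover the new behavior
- [ ] Edge cases handled
- [ ] SQL queries parameterized
"""

print(unchecked_items(body))  # ['Edge cases handled', 'SQL queries parameterized']
```

A CI step could run this against the PR body and post a reminder comment, leaving human reviewers to spend their attention on the judgment calls.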

  5. Review and update the checklist quarterly. As the codebase evolves, some checklist items become irrelevant while new concerns emerge. Schedule a periodic review of the checklist to keep it current and useful.

Common Mistakes

  1. Treating the checklist as a compliance exercise. If reviewers mechanically check every box without actually verifying the code, the checklist becomes theater. The checklist should guide attention, not replace thought. Reviewers should be expected to explain what they checked, not just tick boxes.

  2. Making the checklist too generic. A checklist that says “check for bugs” adds no value. Each item should be specific enough that a reviewer knows exactly what to look for. “Are SQL queries parameterized?” is actionable. “Is the code correct?” is not.

  3. Never updating the checklist after incidents. The most valuable checklist items come from real production failures. If a post-mortem reveals that a class of vulnerability was not caught in review, that is a signal to add a checklist item — failing to do so means the team will keep making the same mistakes.
