What Is Code Review?
Learn what code review is, why every engineering team does it, and how it improves code quality, catches bugs early, and accelerates developer growth.
Definition and Core Concepts
Code review is the systematic examination of source code by one or more developers other than the original author, performed before the code is merged into a shared codebase. The goal is to identify defects, improve code quality, enforce consistency, and share knowledge across a team. It is one of the oldest and most effective quality assurance practices in software engineering, and virtually every professional engineering organization practices some form of it.
At its simplest, code review answers a deceptively important question: does another developer agree that this change is correct, clear, and ready to ship?
The mechanics vary. In some teams, a reviewer sits with the author and walks through the changes together. In most modern teams, the process happens asynchronously through pull requests on platforms like GitHub, GitLab, or Bitbucket. The author submits their changes; one or more reviewers read the diff, leave comments, and eventually approve the merge. But regardless of the mechanics, the core concepts remain the same:
- Correctness. Does the code do what it claims to do? Are there bugs, edge cases, or off-by-one errors?
- Design. Does the change fit the architecture of the system? Is it solving the right problem in the right place?
- Readability. Can a developer unfamiliar with this code understand it six months from now?
- Maintainability. Will this code be easy to modify, extend, and debug in the future?
- Security. Does the change introduce vulnerabilities? Are inputs validated? Are secrets handled correctly?
Code review is not about gatekeeping. It is not about proving you are smarter than the author. And it is not a rubber stamp where you click “approve” without reading the diff. Done well, it is a collaborative conversation between professionals who share a common goal: shipping reliable software.
A Brief History of Code Review
Code review did not start with GitHub pull requests. It has been around nearly as long as professional software development itself.
In the 1970s, Michael Fagan at IBM developed Fagan inspections, a rigorous, multi-stage review process where teams of developers methodically walked through code together, tracking defects in a formal log. Fagan’s research showed that these inspections could catch 60-90% of defects before testing even began. The process was highly effective, but it was also expensive. A typical Fagan inspection required multiple meetings, a moderator, a reader, an author, and several inspectors. Scheduling alone was a bottleneck.
Through the 1980s and 1990s, lighter-weight approaches emerged. Over-the-shoulder reviews (where a developer simply asks a colleague to look at their screen and walk through the changes) became common in teams that found formal inspections too heavy. These reviews were faster and more informal, but they lacked documentation and consistency.
The real transformation came with the rise of version control systems and web-based code hosting platforms. SmartBear's Collaborator (originally released as Code Collaborator in the mid-2000s) was one of the first commercial tools designed specifically for asynchronous code review. Google built its internal review tool, Critique, which enforced mandatory review on every change. Facebook built Phabricator, which it later released as open source. Each of these tools formalized the idea that code should be reviewed asynchronously, with comments attached directly to specific lines.
When GitHub launched pull requests in 2008 and made them central to the open source workflow, asynchronous, tool-assisted code review became the industry default. Today, the pull request workflow is so ubiquitous that many developers have never experienced any other form of code review.
The latest evolution is AI-assisted code review, where tools like CodeRabbit and GitHub Copilot use large language models to automatically analyze pull requests, flag issues, and suggest fixes. If you want to understand how this works in detail, read our guide on what AI code review is and how it works.
Why Code Review Matters
The evidence for code review is overwhelming. Studies from IBM, Microsoft, Google, and SmartBear consistently show that code review is one of the most cost-effective quality assurance practices available. But the benefits extend far beyond catching bugs.
Bug Detection
The most obvious benefit of code review is catching bugs before they reach production. A second pair of eyes spots logic errors, off-by-one mistakes, unhandled edge cases, and incorrect assumptions that the original author overlooked.
Consider this JavaScript function that is supposed to find the oldest user in an array:
function findOldestUser(users) {
  let oldest = users[0];
  for (let i = 0; i < users.length; i++) {
    if (users[i].age > oldest.age) {
      oldest = users[i];
    }
  }
  return oldest.name;
}
This code works fine in the happy path. But a reviewer would spot two issues: first, if users is an empty array, users[0] is undefined, and accessing undefined.age throws a TypeError. Second, the function returns oldest.name, but the function name suggests it should return the user object, not just the name, which is a likely source of confusion for callers.
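One way the author might resolve both comments (returning the full user object and making the empty-array behavior explicit are judgment calls the PR conversation would settle; returning null is just one reasonable choice):

```javascript
// Returns the user object with the highest age, or null for an empty array.
function findOldestUser(users) {
  if (users.length === 0) {
    return null; // explicit, documented behavior instead of a TypeError
  }
  let oldest = users[0];
  for (let i = 1; i < users.length; i++) {
    if (users[i].age > oldest.age) {
      oldest = users[i];
    }
  }
  return oldest; // the whole user, matching what the function name promises
}
```

Callers who only need the name can write findOldestUser(users)?.name, which also handles the null case gracefully.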
A study by SmartBear analyzing over 2,500 code reviews found that reviewers catch an average of 14 defects per review session. Microsoft Research found that code review catches 15% of all bugs, a significant number considering it requires no test infrastructure.
Knowledge Sharing
Code review is one of the most effective knowledge-sharing mechanisms on a software team. When a developer reviews code in a part of the codebase they have never worked in, they learn how that system works. When a senior developer reviews a junior’s code, the junior learns better patterns and practices. When anyone reads a well-written pull request description, they understand the reasoning behind a design decision.
This knowledge sharing has compounding benefits. It increases the team's bus factor: the number of people who would have to leave before work on a system stalls. It reduces knowledge silos, where only one person understands a critical system. And it creates a culture where developers feel comfortable asking questions and learning in the open.
Google’s engineering culture explicitly treats code review as a mentorship tool. New engineers are expected to learn coding standards, architectural patterns, and institutional knowledge through the review process itself, not just through documentation.
Code Quality and Consistency
Code review enforces standards that no linter can fully capture. A linter can check indentation and naming conventions, but it cannot tell you whether a function name actually describes what it does, whether the abstraction boundary is in the right place, or whether this approach will create maintenance problems in six months.
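A contrived illustration of this gap (the function names here are hypothetical): both versions below pass any standard linter, but only a human reviewer is likely to object that the first name hides what the code actually does.

```javascript
// Lint-clean, but the name lies: this does not just "get" a user,
// it also mutates login state as a hidden side effect.
function getUser(id, db) {
  const user = db.find((u) => u.id === id);
  user.lastSeen = Date.now();
  return user;
}

// What a reviewer would push toward: a name that tells the truth.
function recordLoginAndGetUser(id, db) {
  const user = db.find((u) => u.id === id);
  user.lastSeen = Date.now();
  return user;
}
```

No automated style check distinguishes these two; judging whether a name matches behavior requires a reader who understands intent.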
Reviews create a natural pressure toward consistency. When developers know their code will be read by others, they write clearer variable names, add meaningful comments, structure their logic more carefully, and think harder about their design choices. This is not just anecdotal. Research from Microsoft and Google consistently shows that codebases with mandatory code review exhibit higher consistency and lower defect density than those without.
Onboarding
For new team members, code review is an accelerated onboarding experience. By reviewing the code of experienced teammates, a new developer learns the codebase’s conventions, patterns, and architecture far faster than by reading documentation alone. And by having their own code reviewed, they get direct, contextual feedback on how to improve.
Many teams deliberately assign new hires as reviewers early on, even if their feedback is limited to questions rather than suggestions. The act of reading code and trying to understand it builds familiarity with the codebase more effectively than any onboarding document. For practical guidance on getting started as a reviewer, see our chapter on how to review a pull request.
Types of Code Review
Not all code review looks the same. Over the decades, several distinct approaches have emerged, each with different trade-offs between thoroughness, speed, and overhead.
Formal Inspection
Formal inspections, like the Fagan inspection described earlier, involve a structured, multi-phase process: planning, overview, preparation, inspection meeting, rework, and follow-up. Roles are explicitly defined: moderator, author, reader, and inspectors. Defects are logged in a formal tracking system.
Formal inspections are thorough. They catch the most defects per session of any review method. But they are also slow and expensive. The overhead of scheduling meetings, assigning roles, and documenting findings makes them impractical for most modern software teams, where dozens of pull requests may be opened in a single day. Today, formal inspections are primarily used in safety-critical domains like aviation software, medical devices, and nuclear systems, where the cost of a defect in production is catastrophic.
Over-the-Shoulder Review
Over-the-shoulder review is the most informal approach. The author asks a colleague to come look at their screen and walks them through the changes. The reviewer asks questions, points out issues, and the author makes corrections on the spot.
This approach is fast, low-overhead, and works well for small changes. The downside is that there is no documentation. There is no record of what was discussed, what was changed, or why. If the same mistake happens again later, there is no review comment to point to. Over-the-shoulder reviews also tend to be less thorough because the reviewer is seeing the code for the first time without preparation.
Tool-Assisted / PR-Based Review
This is the dominant approach in modern software development. The author creates a pull request (or merge request in GitLab’s terminology), writes a description of the changes, and assigns reviewers. Reviewers read the diff asynchronously, leave inline comments, and approve or request changes. The entire conversation is documented and searchable.
Tool-assisted review strikes a good balance between thoroughness and speed. It gives reviewers time to read the code carefully without the scheduling overhead of a formal inspection. The written comments create a permanent record of design decisions and trade-offs. And the workflow integrates naturally with CI/CD pipelines, so automated tests, linting, and other checks run alongside the human review.
Platforms like GitHub, GitLab, and Bitbucket have made this workflow so frictionless that it is now the default for teams of all sizes. If you want to understand the details of this workflow, the next chapter on the code review process breaks it down step by step.
AI-Assisted Review
The newest category of code review uses artificial intelligence (typically large language models) to automatically analyze pull requests and generate review comments. Tools like CodeRabbit, GitHub Copilot, and SonarQube can scan a PR within minutes of it being opened, flag potential bugs, security issues, and code quality problems, and even suggest one-click fixes.
AI-assisted review does not replace human review. Instead, it handles the first pass, catching the mechanical issues so that human reviewers can focus on design, architecture, and business logic. Our detailed comparison of AI code review vs. manual review goes deep on where each approach excels and where it falls short.
The Code Review Workflow
While every team has its own variations, the modern code review workflow generally follows this sequence:
- Author creates a branch and makes changes to the code.
- Author opens a pull request with a clear title, description, and any relevant context (linked issues, screenshots, test results).
- Automated checks run. CI builds the code, runs tests, and lints for style issues. Some teams also run AI code review tools at this stage.
- Reviewers are assigned, either manually, through code ownership rules, or via round-robin assignment.
- Reviewers examine the diff. They read the code, leave inline comments on specific lines, and provide an overall assessment.
- Author addresses feedback. They make changes, respond to comments, and push updates.
- Reviewers re-review. They verify that feedback was addressed and check any new changes.
- Approval and merge. Once all reviewers approve and CI checks pass, the code is merged into the main branch.
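The final gate, approval and merge, can be pictured as a simple predicate. This is a sketch of typical branch-protection logic, not any platform's real API; the pr shape and field names here are invented for illustration:

```javascript
// Hypothetical merge gate mirroring common branch-protection rules:
// enough approvals, no outstanding change requests, and green CI.
function isMergeable(pr, requiredApprovals = 1) {
  const approvals = pr.reviews.filter((r) => r.state === 'approved').length;
  const blocked = pr.reviews.some((r) => r.state === 'changes_requested');
  const ciGreen = pr.checks.every((c) => c.status === 'passed');
  return approvals >= requiredApprovals && !blocked && ciGreen;
}
```

Platforms like GitHub and GitLab let you configure each of these conditions (required approval counts, required status checks) declaratively on protected branches.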
This workflow is covered in depth in Chapter 2: The Code Review Process, where we break down each step with concrete best practices.
Common Misconceptions
“Code review is for catching bugs.” Bug detection is one benefit, but it is not the primary one. Studies consistently show that code review catches relatively few critical bugs; most of those are caught by automated tests. The greater value of code review lies in knowledge sharing, code consistency, design improvement, and mentoring.
“Only senior developers should review code.” This is a damaging misconception. Junior developers benefit enormously from reviewing code because it teaches them patterns, architecture, and the codebase. A junior may not catch subtle concurrency bugs, but they will ask clarifying questions that force the author to think about readability. Cross-level reviews in both directions make the entire team stronger.
“Code review slows us down.” This is the most common objection, and it is understandable. Code review does add latency to the merge process. But the alternative (shipping unreviewed code) creates far more slowdown through bugs, regressions, inconsistent codebases, and knowledge silos. Teams that skip code review move fast initially but slow down dramatically as complexity grows. For strategies on reducing review cycle time without sacrificing quality, see our guide on how to reduce code review time.
“If we have good tests, we don’t need code review.” Tests verify that code produces correct outputs for specific inputs. Code review examines whether the code is well-designed, readable, maintainable, and secure. A function can pass all its tests while being poorly named, badly structured, and containing a SQL injection vulnerability. Tests and reviews are complementary, not interchangeable.
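To make that last point concrete, here is a hypothetical query builder that passes a naive test suite while being injectable. A reviewer would insist on parameterized queries; the safe version below uses node-postgres-style placeholders as one example:

```javascript
// Vulnerable: interpolates user input directly into SQL.
// A test asserting buildQuery('42') returns the expected string passes,
// so tests alone never surface the injection.
function buildQuery(userId) {
  return `SELECT * FROM users WHERE id = '${userId}'`;
}

// What review feedback pushes toward: placeholders, so the database
// driver binds values safely instead of splicing them into the SQL.
function buildSafeQuery(userId) {
  return { text: 'SELECT * FROM users WHERE id = $1', values: [userId] };
}
```

A payload like "1' OR '1'='1" flows straight into the first function's output, turning a lookup into a table dump; the second function never mixes data into the SQL text at all.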
“Code review is about finding what’s wrong.” Good code review also acknowledges what is right. Leaving a comment that says “this is a really clean abstraction” or “nice use of the strategy pattern here” reinforces good practices and makes the review process feel collaborative rather than adversarial.
When to Skip Code Review (Spoiler: Rarely)
There are a small number of situations where skipping code review can be justified:
- Genuine emergencies. If production is down and a one-line fix resolves the issue, merging without review is acceptable, as long as the change is reviewed retroactively (a “post-merge review”) and the emergency process is documented.
- Trivial changes. Updating a version number in a config file, fixing a typo in a comment, or bumping a dependency with no breaking changes may not need the same level of scrutiny as a feature change. Many teams define a lightweight process for these, like a single reviewer who gives a quick approval rather than a deep review.
- Automated changes. If a tool generates a change (like a migration file or a dependency lockfile update), and the tool is trusted, some teams skip manual review on these. However, AI-generated code should still be reviewed because the tools are not infallible.
Outside of these narrow cases, code review should be mandatory. The cost is low, typically 15-30 minutes of a reviewer’s time for a well-scoped PR. The benefits compound over the lifetime of the codebase.
One pattern that works well is to require review but adjust the level of scrutiny based on the risk of the change. A database migration gets a deep review from a senior engineer. A CSS tweak gets a quick look from anyone available. This tiered approach keeps the review process from becoming a bottleneck while maintaining quality where it matters most. For teams looking to further accelerate their process, automating parts of the review with AI tools can dramatically reduce the manual burden.
What You Will Learn in This Course
This chapter introduced the what and why of code review. The rest of this course gets practical. Here is what is coming:
Chapter 2: The Code Review Process breaks down the modern pull request workflow step by step, from writing a great PR description to choosing reviewers to merging with confidence. You will learn the lifecycle of a review and common anti-patterns to avoid.
Chapter 3: How to Review a Pull Request is a hands-on guide to reviewing code effectively. You will learn the 5-pass review method, how to prioritize your feedback, and how to write review comments that actually help the author improve.
Subsequent chapters will cover how to write reviewable code, code review best practices and anti-patterns, reviewing for security, automating code review with AI tools, and scaling review processes for growing teams. By the end, you will have a complete, practical playbook for making code review one of your team’s greatest strengths.
Whether you are a junior developer doing your first review or a tech lead trying to improve your team’s review culture, this course will give you actionable techniques you can apply immediately. The next chapter starts with the mechanics: how the code review process actually works.
Frequently Asked Questions
What is the main purpose of code review?
The main purpose of code review is to improve code quality by having other developers examine changes before they merge. This catches bugs, ensures adherence to coding standards, shares knowledge across the team, and improves the overall maintainability of the codebase.
Is code review the same as testing?
No. Code review is a human (or AI-assisted) inspection of source code changes, while testing is automated verification that code behaves correctly. They are complementary. Code review catches design issues, readability problems, and logic errors that tests may miss, while tests catch runtime bugs that reviewers may overlook.
How long should a code review take?
Research suggests reviewers should spend no more than 60-90 minutes per session and review no more than 200-400 lines of code at a time. Reviews longer than that see diminishing returns as reviewer attention drops significantly.