Giving and Receiving Code Review Feedback
Learn how to give constructive code review feedback that improves code without damaging relationships — and how to receive criticism gracefully as an author.
16 min read
Why Feedback Quality Matters More Than Review Speed
Most engineering teams obsess over review speed: how quickly a PR gets its first review, how many hours until approval. These metrics matter, and we cover them in depth in the code review metrics chapter. But speed without quality is worse than useless. A fast review that rubber-stamps a security vulnerability or that demoralizes the author with unconstructive criticism creates more problems than it solves.
The quality of feedback in code review directly impacts three outcomes that determine whether code review is a net positive or net negative for your team.
Code quality. Vague feedback like “this doesn’t look right” gives the author nothing actionable. Specific, well-reasoned feedback that explains the problem and suggests a solution leads to genuinely better code. The difference between “fix this” and “this SQL query is vulnerable to injection because the user input isn’t parameterized; consider using prepared statements like this: …” is the difference between a review that frustrates and a review that teaches. A sketch of that parameterized fix appears just after this list.
Team relationships. Software development is a team sport, and code review is the most frequent written communication between team members. A senior engineer who consistently leaves dismissive, terse, or condescending review comments will damage team morale far more than the bugs they catch. Conversely, a reviewer who takes the time to explain their reasoning, acknowledge good decisions, and frame suggestions constructively builds trust and psychological safety.
Author growth. For junior and mid-level engineers, code review is one of the most important learning channels. A review that explains why a particular approach is problematic, not just that it is, accelerates the author’s development. Over time, authors internalize the patterns their reviewers teach them, and the quality of first drafts improves across the entire team.
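To make the code quality point concrete, here is a minimal sketch of the parameterized-query fix referenced above. It assumes a sqlite3 connection and a users table; those names are illustrative, not part of the original comment.
import sqlite3

def get_user(conn: sqlite3.Connection, user_id: str):
    # Vulnerable version: user input is interpolated straight into the SQL string.
    # conn.execute(f"SELECT * FROM users WHERE id = '{user_id}'")

    # Parameterized version: the driver binds the value, so it is never parsed as SQL.
    cursor = conn.execute("SELECT * FROM users WHERE id = ?", (user_id,))
    return cursor.fetchone()
A reviewer who pastes a short snippet like this into the comment hands the author both the diagnosis and the path forward in a single read.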
Google’s engineering practices documentation makes a striking claim: the primary purpose of code review is education and knowledge sharing, with defect detection as a secondary benefit. This framing changes how you approach giving feedback. You are not a gatekeeper looking for reasons to reject code. You are a collaborator helping to shape the best possible solution.
The Anatomy of a Great Review Comment
A great review comment has four components. Not every comment needs all four, but the best comments consistently include them.
Location. The comment is attached to the specific line or lines of code it addresses. Modern review tools make this easy, but reviewers sometimes leave top-level comments about issues deep in the diff, forcing the author to hunt for the relevant code.
Observation. The comment clearly states what the reviewer noticed. This should be factual and specific: “This function allocates a new database connection on every call” or “This endpoint doesn’t check whether the requesting user owns the resource.”
Reasoning. The comment explains why the observation matters. “This will exhaust the connection pool under load” or “This means any authenticated user can access any other user’s data by changing the ID in the URL.” Without reasoning, the author has no way to evaluate whether the concern is valid or to generalize the lesson to future code.
Suggestion. The comment offers a concrete path forward. “Consider using the connection pool from the db module” or “Add an ownership check before the query: if resource.owner_id != current_user.id: raise Forbidden().” Suggestions transform criticism into collaboration.
Here is the difference in practice:
// Bad comment
"This is wrong."
// Mediocre comment
"This has a security issue."
// Good comment
"This endpoint retrieves user data by ID from the URL parameter but doesn't
verify that the authenticated user is authorized to access this specific
user's data. An attacker could enumerate user IDs to access other users'
profiles. Consider adding: if requested_user_id != auth.current_user_id
and not auth.is_admin: raise Forbidden()"
The bad comment conveys no information. The mediocre comment identifies the category but not the specific vulnerability. The good comment identifies the exact issue, explains the attack vector, and provides a concrete fix.
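For illustration, here is roughly what acting on the good comment might look like in a Flask-style handler. The route, the load_profile helper, and the auth attributes on g are hypothetical stand-ins, assumed to be provided elsewhere by the application.
from flask import Flask, abort, g, jsonify

app = Flask(__name__)

def load_profile(user_id: int) -> dict:
    # Hypothetical data-access helper; a real app would query the database.
    return {"id": user_id, "name": "example"}

@app.route("/users/<int:requested_user_id>")
def get_user_profile(requested_user_id: int):
    # The check the review asked for: a user may only read their own profile
    # unless they are an admin. g.current_user_id and g.is_admin are assumed
    # to be set by authentication middleware.
    if requested_user_id != g.current_user_id and not g.is_admin:
        abort(403)
    return jsonify(load_profile(requested_user_id))
The point is not this exact code but the shape of the suggestion: specific enough that the author can apply it without guessing.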
Feedback Categories: Blocking, Suggestion, Nitpick, Question, Praise
Not all feedback carries the same weight, and treating every comment as equally important creates friction. Mature teams categorize their review feedback so that authors can prioritize their response.
Blocking (must fix). These are issues that must be resolved before the PR can be merged. Security vulnerabilities, correctness bugs, missing error handling that would cause data loss, and violations of critical team standards fall into this category. Use clear language: “This must be fixed before merge” or tag with a blocker: prefix.
Suggestion (should consider). These are improvements that would make the code meaningfully better but are not strictly required. Better naming, more efficient algorithms, structural improvements. The author should seriously consider them but has the authority to push back with a reasonable justification. Prefix with suggestion: or consider:.
Nitpick (take it or leave it). These are minor style preferences, alternative phrasings, or cosmetic improvements that do not materially affect code quality. The author can address them or ignore them without further discussion. Always prefix with nit: so the author knows these are optional. If most of your review comments are nitpicks, your linter and formatter setup probably needs improvement. See our guide on code review best practices for tooling recommendations.
Question (seeking understanding). Sometimes the reviewer does not understand the author’s intent or the relevant domain context. Framing feedback as a question rather than an assertion is respectful and opens dialogue: “I’m not familiar with this pricing model. Is the discount meant to apply before tax rather than after?” Questions also signal areas where the code might benefit from a clarifying comment.
Praise (reinforcing good patterns). Positive feedback is not just feel-good filler. It reinforces patterns you want to see more of. When an author writes an unusually clean abstraction, a thorough test suite, or an elegant solution to a tricky problem, say so. “This is a really clean separation of concerns. The data transformation logic being pure functions makes it easy to test.” Praise is most effective when it is specific about what is good and why.
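Here is how those prefixes might read on a single PR; the comments themselves are invented for illustration.
// Categorized comments on one PR
"blocker: This endpoint skips the ownership check, so any authenticated user
can read any profile. Must be fixed before merge."
"suggestion: Extracting the retry logic into a helper would let the two other
call sites reuse it."
"nit: users reads more naturally than userList here. Feel free to ignore."
"question: Is the discount meant to apply before tax? The spec I found only
covers the post-tax case."
"praise: Nice edge-case coverage in the tests. The empty-cart case would have
been easy to miss."
With the category visible at a glance, the author knows exactly which comments gate the merge and which are optional.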
Giving Feedback: The Dos and Don’ts
The Dos
Talk about the code, not the person. This is the single most important rule. “This function has a potential race condition” is about the code. “You didn’t think about thread safety” is about the person. The first invites collaboration; the second triggers defensiveness.
Explain the why. Every piece of feedback should include the reasoning behind it. Authors are far more receptive to feedback when they understand the motivation. “We should use const here because this value is never reassigned, and const communicates that intent to future readers” teaches a principle. “Change let to const” is just a command.
Offer concrete alternatives. When you identify a problem, suggest a solution. If there are multiple valid approaches, mention the tradeoffs. “You could extract this into a helper function for reuse, or inline it with a descriptive comment. I’d lean toward extraction since this pattern shows up in two other places.”
Ask questions when you are uncertain. If you are not sure whether something is a bug or an intentional design choice, ask. “Is it intentional that this returns null rather than an empty array when there are no results? The callers I checked seem to expect an array.” This approach respects the author’s context while raising the concern.
Batch your feedback. Review the entire PR before leaving any comments. Nothing is more frustrating for an author than addressing five comments, pushing a fix, and then receiving five more comments on code the reviewer had not gotten to yet. A complete review also lets you assess the overall design rather than commenting line-by-line without context.
Acknowledge the author’s constraints. Sometimes code is not perfect because there is a hard deadline, a legacy system constraint, or an intentional tradeoff. If the PR description mentions these, acknowledge them in your review: “I see you noted the deadline pressure. The approach here is reasonable for now, and I’d suggest filing a follow-up ticket for [specific improvement].”
The Don’ts
Don’t use dismissive language. Phrases like “obviously,” “just do X,” “this is basic,” and “why would you do this?” communicate contempt, even if that is not the reviewer’s intent. These phrases are especially harmful in open-source projects and teams with junior members.
Don’t rewrite the author’s code in comments. Providing a one-line suggestion is helpful. Providing a 30-line rewrite of the author’s approach is not a review comment; it is a competing implementation. If the design needs a fundamental rethink, have a synchronous conversation about the approach before the author invests more time.
Don’t bikeshed. If you are spending three comment threads debating whether a variable should be named userList or users, you are not adding value. Refer to the team style guide if one exists, or defer to the author’s preference on matters of pure taste.
Don’t leave only negative feedback. A review that contains nothing but criticism, even constructive criticism, is dispiriting. Acknowledge what the PR does well. This is not about being artificially nice; it is about providing a complete and accurate assessment. If the test coverage is thorough, say so. If the error handling is well done, say so.
Don’t request changes you would not make yourself. Before asking the author to add extensive documentation, comprehensive error handling for unlikely edge cases, or performance optimization for a cold code path, ask yourself whether you would invest that effort in the same situation. Code review should enforce team standards, not individual gold-plating preferences.
Receiving Feedback: The Author’s Mindset
Receiving feedback on your code is hard. Your code is a product of your thinking, and criticism of your code can feel like criticism of your intelligence. This is a natural reaction, and overcoming it is a skill that takes deliberate practice.
Separate your identity from your code. The PR is not you. A reviewer who finds a bug in your code has not found a flaw in your character. This is easier said than done, but it is the foundational mindset shift that makes productive code review possible.
Assume good intent. When a reviewer leaves a comment that feels harsh, assume they are trying to help the code, not attack you. Tone is notoriously difficult to convey in written text, and what reads as brusque may have been written quickly between meetings with no negative intent. If the pattern of harsh tone persists, address it, but give individual comments the benefit of the doubt.
Respond to every comment. Even if your response is “Done” or “Good point, fixed.” Unacknowledged comments leave the reviewer uncertain whether their feedback was seen, agreed with, or intentionally ignored. This creates friction in future reviews.
Push back with reasons, not ego. If you disagree with feedback, explain your reasoning. “I considered that approach, but chose this one because [specific reason]” is a productive response. “I think my way is fine” is not. Be open to the possibility that the reviewer sees something you missed, and be willing to change your mind.
Thank reviewers who invest time. A thorough, thoughtful review takes significant effort. A simple “Thanks for the detailed review, this is much better now” goes a long way toward building a culture where people invest in giving good feedback.
Handling Disagreements
Disagreements in code review are normal and healthy. Two competent engineers can look at the same problem and reasonably prefer different solutions. The goal is not to eliminate disagreements but to resolve them productively.
Distinguish between objective and subjective concerns. “This has a SQL injection vulnerability” is objective, and there is a clear right answer. “I would have used a strategy pattern here instead of a switch statement” is subjective because both approaches can work. Objective concerns should be resolved on their merits. Subjective concerns should defer to team conventions or, failing that, to the author’s preference.
Move to synchronous communication early. If a PR comment thread exceeds three back-and-forth exchanges on the same point, it is time for a five-minute video call or in-person conversation. Written debate is slow, prone to miscommunication, and visible to the entire team. A quick conversation usually resolves in minutes what would take hours in comments.
Invoke team standards. When a disagreement is about a pattern or convention, check whether the team has a documented standard. If it does, follow it. If it does not, resolve the current disagreement by author’s preference and then propose the standard for the team to discuss in the next engineering meeting. This prevents the same argument from recurring on every PR.
Escalate without ego. If you and the reviewer genuinely cannot agree, escalate to a tech lead or architect. This is not “tattling.” It is using your team’s decision-making structure as intended. Frame the escalation as “We have two reasonable approaches and need a tiebreaker” rather than “This reviewer is being unreasonable.”
Accept the decision and move on. Once a disagreement is resolved, whether by compromise, deference, or escalation, commit to the decision fully. Do not passive-aggressively implement the reviewer’s suggestion in a way that highlights its weaknesses, and do not relitigate the same point on the next PR.
Cultural Considerations in Global Teams
Code review feedback norms vary significantly across cultures, and distributed teams need to be intentional about establishing shared expectations.
Directness varies. In some cultures (Netherlands, Israel, Germany), direct feedback is a sign of respect because it shows the reviewer takes the work seriously enough to be candid. In other cultures (Japan, Korea, many Southeast Asian countries), direct negative feedback can feel confrontational, and concerns are often communicated indirectly or through questions. Neither approach is wrong, but mismatches cause problems.
Hierarchy affects dynamics. In cultures with strong hierarchical norms, junior engineers may be reluctant to leave critical feedback on a senior engineer’s code, even when they spot genuine issues. Teams need to explicitly normalize upward feedback and demonstrate through example that all engineers are expected to review all code critically.
Language proficiency affects tone perception. Non-native English speakers may write comments that sound terse or blunt simply because they are expressing complex ideas in a second (or third) language. Assume good intent, especially with colleagues who are writing in a language that is not their first.
Practical steps for global teams. Document your feedback norms explicitly instead of relying on cultural defaults that only part of the team shares. Use feedback category prefixes (blocker:, nit:, question:) to remove ambiguity. Encourage reviewers to use the GitHub/GitLab suggestion feature, which frames feedback as a concrete action rather than abstract criticism.
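As a minimal sketch of that last point: in both GitHub and GitLab, a review comment on a line can contain a fenced block labeled suggestion, and the proposed replacement for that line becomes a committable change. The rename below is invented for illustration.
nit: "users" reads more naturally than "userList" here.

```suggestion
users = fetch_active_users()
```
The author applies it with a single click, which turns a naming preference into a concrete, low-friction action instead of a back-and-forth.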
How AI Is Changing the Feedback Dynamic
AI code review tools like CodeRabbit, PR-Agent, and Ellipsis are fundamentally changing the dynamics of code review feedback. Understanding this shift helps teams use AI and human review as complements rather than competitors.
AI handles the mechanical feedback. Style violations, common bug patterns, missing null checks, and unused imports are all categories where AI feedback is often superior to human feedback because AI is consistent, tireless, and does not have bad days. When AI catches these issues automatically, human reviewers are freed from the tedious work and can focus on design, architecture, and business logic.
AI feedback has no ego. Authors tend to receive AI feedback with less defensiveness than human feedback. Being told by a bot that your function has a potential null pointer dereference feels different from being told the same thing by a colleague. This is an advantage for routine issues, but it means AI cannot provide the mentorship and relationship-building that human review offers. For a deeper exploration of this dynamic, see AI Replacing Code Reviewers.
AI can set the baseline. By the time a human reviewer opens the PR, AI has already flagged the obvious issues. This means the human reviewer can start their review at a higher level, focusing on “Is this the right approach?” rather than “Is there a typo on line 47?” Teams that use AI review tools report that their human review comments shift significantly toward design and architecture discussions.
AI has blind spots. Current AI tools cannot reliably evaluate whether the code solves the right problem, whether the architecture will scale with the team’s roadmap, or whether a particular abstraction creates more complexity than it removes. These judgment calls remain firmly in the human domain. Understanding where AI is strong and where it is weak helps teams allocate review effort effectively. For a thorough comparison, see AI Code Review vs. Manual.
Calibrate AI sensitivity. Most AI review tools allow you to configure how aggressively they flag issues. If your team is drowning in AI-generated nitpicks, tune the sensitivity down. If the AI is missing issues that matter to your team, add custom rules or raise the sensitivity. Treat AI review configuration as an ongoing process, not a one-time setup.
Building a Feedback-Friendly Review Culture
Individual feedback skills matter, but they exist within a team culture that either reinforces or undermines them. Building a review culture where feedback flows freely and constructively requires deliberate investment from engineering leadership.
Lead by example. Senior engineers and tech leads set the tone for review culture. If the most respected engineers on the team give thoughtful, constructive, categorized feedback, others will follow. If senior engineers leave terse, dismissive comments, that becomes the norm regardless of any documented guidelines.
Make feedback norms explicit. Write down your team’s code review expectations. What should a review comment include? What categories do you use? What is the expected turnaround time? How should disagreements be resolved? Documented norms prevent the kind of implicit expectations that trip up new team members. Your team’s code review best practices guide is a good place to start.
Review the reviews. Periodically, have a team discussion about the quality of review feedback itself. Look at recent PR threads and discuss what worked well and what could be improved. This meta-review process normalizes talking about communication quality and surfaces issues before they become entrenched.
Celebrate great reviews. In sprint retrospectives or team meetings, highlight examples of exceptionally helpful review feedback. This signals that the team values review quality, not just review speed.
Address toxic patterns early. If a team member consistently leaves harsh, dismissive, or unconstructive feedback, address it privately and promptly. Left unchecked, one toxic reviewer can undermine the entire team’s willingness to submit code for review. Authors who dread review start writing defensive code or avoiding refactoring to minimize the surface area for criticism, both of which degrade code quality.
Onboard new team members explicitly. When someone joins the team, walk them through your review norms, show them examples of good review feedback from your team’s history, and pair them with a mentor reviewer for their first few PRs. Do not assume they will absorb the culture by osmosis.
The feedback skills in this chapter are not soft skills or nice-to-haves. They are engineering skills that directly impact code quality, team velocity, and developer retention. A team that gives and receives feedback well ships better code faster and retains its best engineers longer. Invest in feedback quality with the same rigor you invest in your testing infrastructure and CI pipeline, because the returns are just as tangible.
Frequently Asked Questions
How do I give negative feedback in a code review without being rude?
Focus on the code, not the person. Use phrases like “This function could be simplified by...” instead of “You wrote this wrong.” Ask questions (“Have you considered...?”) rather than making demands. Label nitpicks as nit: so the author knows they’re optional. Always explain the why behind your suggestion.
What should I do if I disagree with a reviewer's feedback?
Respond respectfully with your reasoning. If you have a technical justification, share it. If it's a matter of preference, defer to team conventions. If you truly can't resolve it, escalate to a tech lead or team discussion — never let disagreements fester in PR comments.
How do I handle receiving harsh code review feedback?
Take a step back before responding emotionally. Focus on the technical content of the feedback, not the tone. If the feedback is valid, acknowledge it and make the change. If the tone is consistently problematic, address it privately with the reviewer or bring it up with your manager.