Time to First Review

The duration between when a pull request is created and when the first reviewer leaves a comment or approval, indicating team responsiveness.

What Is Time to First Review?

Time to first review (TTFR) measures the elapsed time between the moment a pull request is opened and the moment the first reviewer engages with it — whether through a comment, a change request, or an approval. It is a focused metric that isolates the initial responsiveness of the review process, separate from subsequent rounds of feedback.

TTFR matters because the first review interaction sets the pace for the entire review cycle. A PR that receives its first review within an hour is far more likely to be merged the same day than one that sits untouched for 48 hours. The metric acts as an early warning system: if time to first review is climbing, the overall review pipeline is likely slowing down even if other metrics have not yet reflected it.

Engineering teams use TTFR as both a diagnostic tool and a cultural signal. A consistently low TTFR tells developers that their work will be looked at promptly, encouraging them to submit smaller, more frequent pull requests. A high TTFR discourages this behavior, pushing developers toward large, infrequent PRs that are even harder to review quickly — creating a vicious cycle.

How It Works

The calculation is simple:

Time to First Review = Timestamp of First Review Activity - Timestamp of PR Opened

“First review activity” typically includes any of the following events on the pull request:

  • A line-level or general comment from a reviewer (not the PR author)
  • A formal approval
  • A “changes requested” review submission

Automated bot comments (such as CI status checks or linter output) are usually excluded from TTFR calculations, since the metric is meant to capture human engagement. However, some teams count AI code review tool responses as a valid first review, since tools like CodeAnt AI provide substantive, actionable feedback comparable to a human reviewer.

Here is how a typical TTFR measurement flows in practice:

Developer opens PR           → Clock starts (10:00 AM)
CI pipeline runs             → Not counted (automated)
Linter bot posts comment     → Not counted (automated)
Teammate posts review        → Clock stops (1:30 PM)
Time to First Review         → 3 hours 30 minutes

Most engineering analytics platforms calculate TTFR by parsing pull request event timelines from GitHub, GitLab, or Bitbucket APIs. The key implementation detail is correctly filtering out bot activity and self-reviews (where the author comments on their own PR).
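To make that concrete, here is a minimal Python sketch of the filtering logic. The event dictionaries are a simplified stand-in for real GitHub, GitLab, or Bitbucket API payloads, and the field names (actor, actor_type, kind, created_at) are illustrative assumptions, not an actual API schema.

from datetime import datetime

# Event kinds that count as genuine review activity.
REVIEW_KINDS = {"commented", "approved", "changes_requested"}

def time_to_first_review(pr_opened_at, events, author):
    """Elapsed time from PR open to the first human, non-author review
    event, or None if no qualifying event exists yet."""
    for event in sorted(events, key=lambda e: e["created_at"]):
        if event["kind"] not in REVIEW_KINDS:
            continue  # pushes, labels, CI statuses, etc.
        if event["actor_type"] == "Bot":
            continue  # linter bots, CI bots
        if event["actor"] == author:
            continue  # self-reviews do not stop the clock
        return event["created_at"] - pr_opened_at
    return None

# The example from the flow above: opened 10:00 AM, bot comment ignored,
# teammate review at 1:30 PM stops the clock.
opened = datetime(2024, 5, 6, 10, 0)
events = [
    {"actor": "lint-bot", "actor_type": "Bot", "kind": "commented",
     "created_at": datetime(2024, 5, 6, 10, 5)},
    {"actor": "alice", "actor_type": "User", "kind": "commented",
     "created_at": datetime(2024, 5, 6, 13, 30)},
]
print(time_to_first_review(opened, events, author="bob"))  # 3:30:00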

Teams that use draft pull requests need to decide whether the clock starts when the PR is created as a draft or when it is marked as ready for review. The common convention is to start measurement when the PR transitions to “ready for review,” since a draft explicitly signals that the author does not yet want feedback.
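Expressed in the same simplified event shape as above, that policy is just a choice of start timestamp (the "ready_for_review" event kind is again an illustrative assumption, not a fixed API name):

def review_clock_start(pr_created_at, events):
    """Start the TTFR clock at the draft-to-ready transition if there
    is one; otherwise at PR creation time."""
    for event in sorted(events, key=lambda e: e["created_at"]):
        if event["kind"] == "ready_for_review":
            return event["created_at"]
    return pr_created_at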

Why It Matters

Time to first review is often called the “gateway metric” for developer experience, and for good reason. Google’s published code review guidelines, for example, treat review turnaround as important enough to set a hard expectation: reviewers should respond within one business day at the absolute maximum, on the grounds that slow reviews slow the entire team down.

The psychological impact is significant. When a developer finishes a pull request, the implementation details are fresh in their mind. If a reviewer responds within a few hours, the developer can quickly address feedback while still in context. If the first review arrives two days later, the developer has moved on to other work and must spend substantial time rebuilding their mental model of the change before they can respond to comments.

TTFR also directly influences PR size behavior. Teams with fast first-review times naturally gravitate toward smaller PRs, because developers trust that small changes will not languish in a queue. Teams with slow first-review times see the opposite: developers batch changes into larger PRs to minimize the number of times they have to wait, which paradoxically makes the review queue even slower.

From a team health perspective, a consistently high TTFR often indicates one of three problems: an uneven distribution of review load (one or two people reviewing everything), a lack of clear ownership (nobody knows who should review what), or competing priorities that push code review below feature work.

Best Practices

  • Establish a team SLA for first review. A common target is 4 business hours. Making this expectation explicit turns review responsiveness from an individual habit into a team commitment. Post the SLA in your team’s contributing guidelines.

  • Use automated reviewer assignment. GitHub’s CODEOWNERS file, GitLab’s code owners, or tools like PullApprove automatically assign the right reviewers when a PR is opened. This eliminates the ambiguity of “someone should review this” and ensures every PR has a named, accountable reviewer from the start.

  • Deploy AI review tools for instant first-pass feedback. Tools like CodeAnt AI provide automated review comments within minutes of a PR being opened. While this does not replace human review, it gives the author immediate, actionable feedback to work on while waiting for a teammate — effectively reducing perceived wait time to near zero.

  • Block dedicated review time in calendars. Many high-performing teams schedule 30-minute review blocks twice a day — once in the morning and once after lunch. This cadence ensures that no PR waits more than half a day for attention without requiring reviewers to be interrupt-driven.

  • Make TTFR visible to the team. Display current TTFR on a team dashboard or in Slack via a weekly automated report (one possible shape for that report is sketched after this list). Visibility creates gentle peer accountability. When everyone can see that the average first review takes 6 hours, the conversation about improvement starts naturally.
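As one possible shape for that weekly report, here is a short Python sketch. The Slack webhook URL is a placeholder, the requests package is a third-party dependency, and collecting the week’s TTFR samples is assumed to happen elsewhere:

from datetime import timedelta
from statistics import median

import requests  # third-party dependency: pip install requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/..."  # placeholder URL

def post_weekly_ttfr_report(ttfr_samples):
    """Post the week's median and worst-case TTFR to a Slack channel.
    `ttfr_samples` is a list of timedeltas, one per reviewed PR."""
    text = (
        f"Code review this week: median time to first review was "
        f"{median(ttfr_samples)}; the slowest PR waited {max(ttfr_samples)}."
    )
    # Slack incoming webhooks accept a simple {"text": ...} JSON payload.
    requests.post(SLACK_WEBHOOK, json={"text": text})

post_weekly_ttfr_report(
    [timedelta(hours=2), timedelta(hours=5), timedelta(hours=26)]
)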

Common Mistakes

  • Counting bot comments as first review. If your analytics tool counts a linter bot or CI status check as “first review activity,” your TTFR will look artificially low. Ensure your measurement filters out non-human, non-substantive interactions to reflect genuine reviewer engagement.

  • Pressuring reviewers to post a quick “LGTM” to hit the SLA. A rushed first review that consists of nothing more than “looks good” technically reduces TTFR but adds no value. It pushes real feedback to later rounds, inflating overall review cycle time while masking the problem. Quality of first review matters as much as speed.

  • Ignoring timezone-adjusted measurement. A globally distributed team will naturally show higher TTFR numbers if the metric counts raw wall-clock time. A PR opened at 5:00 PM in New York and first reviewed at 9:00 AM the next morning in London shows an 11-hour gap on paper, yet almost none of that time fell within anyone’s working day; it should not be penalized the same way as an 11-hour wait inside a co-located team’s business hours. Use business-hours-aware calculations for fair cross-team comparisons (a simplified version is sketched after this list).
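A simplified business-hours calculation might look like the following sketch, which counts only 9:00 to 17:00 on weekdays in a single reference timezone; a production version would also handle holidays, per-office calendars, and proper timezone conversion:

from datetime import datetime, timedelta

BUSINESS_START, BUSINESS_END = 9, 17  # 9 AM - 5 PM, an assumed workday

def business_hours_between(start, end, step_minutes=15):
    """Approximate business time between two datetimes by stepping
    through fixed intervals and counting only weekday working hours."""
    step = timedelta(minutes=step_minutes)
    elapsed = timedelta()
    cursor = start
    while cursor < end:
        on_weekday = cursor.weekday() < 5  # Monday=0 .. Friday=4
        in_hours = BUSINESS_START <= cursor.hour < BUSINESS_END
        if on_weekday and in_hours:
            elapsed += min(step, end - cursor)
        cursor += step
    return elapsed

# Opened Friday 5:00 PM, first reviewed Monday 9:30 AM: only 30 business
# minutes elapsed, not the 64.5 wall-clock hours a naive metric reports.
opened = datetime(2024, 5, 3, 17, 0)
reviewed = datetime(2024, 5, 6, 9, 30)
print(business_hours_between(opened, reviewed))  # 0:30:00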
