Lead Time for Changes

A DORA metric measuring the time from code commit to successful deployment in production, indicating the efficiency of the delivery pipeline.

What Is Lead Time for Changes?

Lead time for changes is a software delivery metric that measures the elapsed time between when a code change is committed to version control and when that change is successfully running in production. It is one of the four key DORA (DevOps Research and Assessment) metrics used to assess software delivery performance. Lead time captures the total efficiency of the delivery pipeline — encompassing code review, CI/CD execution, testing, approval processes, and deployment.

The metric borrows its name from lean manufacturing, where lead time refers to the duration between placing an order and receiving the finished product. In software, the “order” is a code commit and the “product” is the change running in production. A short lead time means the organization can move quickly from idea to impact. A long lead time means friction exists somewhere in the pipeline that delays value delivery.

The DORA research classifies teams into performance tiers based on their lead time:

Performance Level    Lead Time for Changes
Elite                Less than one hour
High                 Between one day and one week
Medium               Between one week and one month
Low                  Between one month and six months

Elite teams — those with lead times under one hour — have automated virtually every step between commit and production. Low-performing teams with lead times measured in months have significant manual processes, approval bottlenecks, or infrequent release schedules that delay changes.

How It Works

Lead time for changes is calculated as the time difference between a commit timestamp and the timestamp of its successful deployment to production:

Lead Time = Deployment Timestamp - Commit Timestamp

In practice, measuring lead time requires correlating data from two systems: the version control system (which records when commits were made) and the deployment system (which records when deployments completed). Here is an example of how to calculate lead time programmatically:

# Calculate lead time for changes from Git and deployment data
from datetime import datetime
from statistics import median
import subprocess

def get_commits_in_deployment(deploy_sha: str, previous_sha: str) -> list:
    """Get all commits included in a deployment."""
    # %H = full SHA, %aI = author date in strict ISO 8601, %s = subject
    result = subprocess.run(
        ["git", "log", "--format=%H|%aI|%s", f"{previous_sha}..{deploy_sha}"],
        capture_output=True, text=True, check=True
    )
    commits = []
    for line in result.stdout.strip().split("\n"):
        if line:
            # maxsplit=2 keeps any "|" characters inside the subject intact
            sha, timestamp, message = line.split("|", 2)
            commits.append({
                "sha": sha,
                "committed_at": datetime.fromisoformat(timestamp),
                "message": message
            })
    return commits

def calculate_lead_times(deploy_time: datetime, commits: list) -> dict:
    """Calculate lead time for each commit in a deployment."""
    if not commits:
        raise ValueError("deployment contains no commits")

    lead_times = [
        (deploy_time - commit["committed_at"]).total_seconds() / 3600  # hours
        for commit in commits
    ]

    return {
        "deployment_time": deploy_time.isoformat(),
        "num_commits": len(commits),
        "median_lead_time_hours": median(lead_times),
        "max_lead_time_hours": max(lead_times),
        "min_lead_time_hours": min(lead_times),
    }

Several nuances affect how lead time is measured:

  • Which commit to use — A single deployment often includes multiple commits. Teams typically measure the lead time for the oldest commit in the deployment, which represents the worst case, or report the median across all included commits.
  • Merge commits vs. original commits — When a pull request is merged, both the original commit timestamp and the merge commit timestamp are available. Using the original commit timestamp captures the full wait time, including code review.
  • Business hours adjustment — A commit made Friday evening that deploys Monday morning has a raw lead time of ~60 hours, but only a few hours of that were actionable. Some teams adjust for business hours to get a fairer picture.
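The business-hours adjustment can be sketched as follows. This is a minimal illustration, assuming a 9:00–17:00, Monday–Friday working window; the window itself, and whether to adjust at all, is a team choice.

```python
from datetime import datetime, timedelta, timezone

def business_hours_between(start: datetime, end: datetime,
                           open_hour: int = 9, close_hour: int = 17) -> float:
    """Count only weekday working hours between two timestamps."""
    total = timedelta()
    cursor = start
    while cursor < end:
        if cursor.weekday() < 5:  # Monday-Friday
            day_open = cursor.replace(hour=open_hour, minute=0,
                                      second=0, microsecond=0)
            day_close = cursor.replace(hour=close_hour, minute=0,
                                       second=0, microsecond=0)
            window_start = max(cursor, day_open)
            window_end = min(end, day_close)
            if window_end > window_start:
                total += window_end - window_start
        # Jump to the start of the next calendar day
        cursor = (cursor + timedelta(days=1)).replace(hour=0, minute=0,
                                                      second=0, microsecond=0)
    return total.total_seconds() / 3600

# A commit Friday 18:00 that deploys Monday 10:00 spans 64 raw hours,
# but only one of them (Monday 9:00-10:00) was actionable.
committed = datetime(2024, 5, 3, 18, 0, tzinfo=timezone.utc)  # Friday evening
deployed = datetime(2024, 5, 6, 10, 0, tzinfo=timezone.utc)   # Monday morning
print(business_hours_between(committed, deployed))  # 1.0
```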

Why It Matters

Lead time for changes is one of the strongest predictors of overall engineering effectiveness. The DORA research consistently shows that organizations with shorter lead times outperform on every dimension: they deploy more frequently, experience fewer failures, recover from incidents faster, and report higher levels of team satisfaction.

Short lead times create a tight feedback loop between developers and users. When a developer can see their change in production within an hour of committing it, they can observe its real-world impact, learn from user behavior, and iterate quickly. When lead time stretches to weeks or months, this feedback loop breaks down. Developers move on to other work, lose context, and cannot respond to issues promptly.

Lead time is also a diagnostic metric. A long lead time pinpoints where friction exists in the delivery process. If the time between commit and PR approval is the largest component, the bottleneck is code review. If the time between approval and deployment is the largest component, the bottleneck is the deployment process. By decomposing lead time into its constituent stages, teams can identify and address the specific bottleneck that will yield the greatest improvement.
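The decomposition described above can be sketched like this. The stage boundaries (committed, review_requested, approved, ci_finished, deployed) are illustrative; use whatever boundary events your version control and deployment tooling actually records.

```python
from datetime import datetime, timezone

def decompose_lead_time(events: dict) -> dict:
    """Break commit-to-production lead time into per-stage durations (hours)."""
    order = ["committed", "review_requested", "approved", "ci_finished", "deployed"]
    stages = {}
    for earlier, later in zip(order, order[1:]):
        hours = (events[later] - events[earlier]).total_seconds() / 3600
        stages[f"{earlier} -> {later}"] = round(hours, 1)
    total = (events["deployed"] - events["committed"]).total_seconds() / 3600
    stages["total"] = round(total, 1)
    return stages

def ts(day: int, hour: int, minute: int = 0) -> datetime:
    return datetime(2024, 5, day, hour, minute, tzinfo=timezone.utc)

# Illustrative timestamps for one change: review wait dominates.
events = {
    "committed": ts(1, 9),
    "review_requested": ts(1, 10),
    "approved": ts(2, 15),       # 29 hours waiting for review
    "ci_finished": ts(2, 15, 30),
    "deployed": ts(2, 16),
}
print(decompose_lead_time(events))
```

Here the largest component (review_requested -> approved, 29 of 31 total hours) identifies code review, not the pipeline, as the bottleneck to attack first.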

From a competitive standpoint, lead time determines how quickly an organization can respond to market opportunities, customer feedback, and security vulnerabilities. A company that can go from bug report to production fix in one hour has a fundamentally different competitive position than one that takes two weeks.

Best Practices

  • Decompose lead time into stages. Break the total lead time into components: coding time, review wait time, review iteration time, CI pipeline duration, and deployment time. This decomposition reveals which stage contributes the most delay and where improvement efforts should focus.

  • Reduce batch size. The most effective way to reduce lead time is to make changes smaller. Small pull requests are reviewed faster, tested faster, and deployed faster. A team that deploys one commit at a time will always have a shorter lead time than one that batches 50 commits into a weekly release.

  • Automate the deployment pipeline. Any manual step in the pipeline — manual testing, manual approval, manual deployment — adds hours or days to lead time. Automate testing, use auto-merge for approved PRs, and implement continuous deployment to eliminate manual delays.

  • Set explicit review SLAs. Code review wait time is often the largest component of lead time. Establish team norms like “first review within 4 hours” and track adherence. Consider using automated review tools to provide immediate feedback and reduce the number of human review rounds.

  • Track lead time as a team metric. Display lead time on team dashboards and review trends in retrospectives. Treat increasing lead time as an early warning sign that requires investigation, similar to how you would treat increasing error rates in production.
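Tracking adherence to a review SLA can be as simple as the sketch below. The 4-hour threshold matches the example norm above; the PR record fields (opened_at, first_review_at) are hypothetical names standing in for whatever your PR tooling exports.

```python
from datetime import datetime, timedelta, timezone

REVIEW_SLA = timedelta(hours=4)  # team norm: first review within 4 hours

def sla_adherence(prs: list) -> float:
    """Fraction of PRs whose first review arrived within the SLA."""
    within = sum(
        1 for pr in prs
        if pr["first_review_at"] - pr["opened_at"] <= REVIEW_SLA
    )
    return within / len(prs)

opened = datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc)
prs = [
    {"opened_at": opened, "first_review_at": opened + timedelta(hours=2)},   # within SLA
    {"opened_at": opened, "first_review_at": opened + timedelta(hours=30)},  # missed
    {"opened_at": opened, "first_review_at": opened + timedelta(hours=3)},   # within SLA
]
print(sla_adherence(prs))  # 2 of 3 PRs met the SLA
```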

Common Mistakes

  • Measuring only pipeline duration. CI/CD pipeline execution time is a component of lead time, but it is often not the largest component. A 10-minute pipeline does not help if the pull request waits 3 days for code review. Measure the full end-to-end duration from commit to production.

  • Optimizing for average instead of tail latency. A team with a 2-hour median lead time might have a 95th percentile of 5 days. Those long-tail deployments often represent the most important changes — complex features, critical bug fixes — that get stuck in lengthy review or approval processes. Optimize for the tail, not just the average.

  • Ignoring the relationship with batch size. Teams sometimes try to reduce lead time by speeding up individual pipeline stages without addressing the root cause: large batch sizes. A deployment containing 100 commits will always have a long lead time for the oldest commits, regardless of how fast the pipeline runs. Reducing batch size is typically more impactful than speeding up the pipeline.
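The median-versus-tail gap is easy to see with a percentile calculation. A rough sketch, using the nearest-rank method and illustrative lead times:

```python
import math
import statistics

def percentile(values: list, p: float) -> float:
    """Nearest-rank percentile (0 < p <= 100)."""
    ranked = sorted(values)
    k = math.ceil(p / 100 * len(ranked)) - 1
    return ranked[k]

# Illustrative lead times in hours: most changes ship within a day,
# but a few complex ones stall in review or approval for days.
lead_times = [1, 1.5, 2, 2, 2.5, 3, 3, 3, 4, 5, 6, 12, 24, 72, 120]

print(statistics.median(lead_times))  # 3   -- looks healthy
print(percentile(lead_times, 95))     # 120 -- the tail tells another story
```

A dashboard showing only the 3-hour median would hide the 5-day tail entirely, which is why both figures are worth tracking.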
