Line Coverage

The simplest code coverage metric, measuring the percentage of executable lines of code that were run during test execution.

What Is Line Coverage?

Line coverage is the most basic and widely used code coverage metric. It measures the percentage of executable lines in a codebase that were executed during the test suite run. If a source file has 100 executable lines and tests cause 85 of them to run, the line coverage for that file is 85%.

Line coverage is popular because it is intuitive. Developers can look at a coverage report, see which lines are green (covered) and which are red (uncovered), and immediately understand which parts of their code lack tests. This visual clarity makes line coverage the default metric in most coverage tools, and it is often the first (or only) coverage metric teams adopt.

However, line coverage has important limitations. It cannot distinguish between different execution paths through the same line. A ternary expression, a short-circuit evaluation, or a multi-condition if statement can all be “covered” by executing the line once, even if only one of the possible paths was tested. For this reason, line coverage should be used alongside branch coverage to provide a more complete picture of test thoroughness.
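
To make this concrete, here is a hypothetical fee function with a single Jest-style test, both invented for illustration:

function fee(user) {
  // One line of code, but four distinct outcomes flow through it.
  return user.isPremium && user.age >= 18 ? 0 : 5;
}

test("premium adult pays no fee", () => {
  expect(fee({ isPremium: true, age: 30 })).toBe(0);
});

This single test executes the return line, so line coverage reports the function as fully covered. Yet the false outcome of each condition and the `: 5` arm were never taken; branch coverage for the same line is only 50%.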

Non-executable lines, such as comments, blank lines, and type-only declarations, are excluded from line coverage calculations, though treatment of import statements varies by tool (coverage.py, for instance, counts them as executable statements). Only lines containing runtime-executable logic enter the denominator, so adding or removing comments does not change the percentage. Most coverage tools handle this distinction automatically.

How It Works

Coverage tools determine which lines ran either by instrumenting the source with per-line counters (the approach Istanbul takes for JavaScript) or by hooking the runtime's tracing facilities (the approach coverage.py takes for Python). As the test suite runs, every executed line is recorded; when the tests finish, the tool divides the number of executed lines by the total number of executable lines to calculate the percentage.
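
A hand-written sketch of the counter approach (real instrumenters such as Istanbul perform this rewriting automatically; the hits object and mark helper here are illustrative inventions):

// Conceptual instrumentation: each executable line bumps a counter first.
const hits = {};
const mark = (line) => { hits[line] = (hits[line] || 0) + 1; };

function isEven(n) {
  mark(2); if (n % 2 === 0) {
    mark(3); return true;
  }
  mark(5); return false;
}

isEven(4);
// Only lines 2 and 3 ran: 2 of 3 executable lines, roughly 67% line coverage.
console.log(Object.keys(hits)); // ["2", "3"]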

Here is a practical example in JavaScript:

// validator.js
function validateEmail(email) {
  if (!email) {                          // line 2
    return { valid: false, error: "Email is required" };  // line 3
  }                                      // line 4

  const pattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;  // line 6
  const isValid = pattern.test(email);   // line 7

  if (!isValid) {                        // line 9
    return { valid: false, error: "Invalid email format" };  // line 10
  }                                      // line 11

  return { valid: true, error: null };   // line 13
}

If you write only one test:

test("accepts valid email", () => {
  const result = validateEmail("user@example.com");
  expect(result.valid).toBe(true);
});

Lines 2, 6, 7, 9, and 13 are executed. Lines 3 and 10 are not. Line coverage: 5 out of 7 executable lines = 71%.

Adding tests for the error cases:

test("rejects empty email", () => {
  const result = validateEmail("");
  expect(result.error).toBe("Email is required");
});

test("rejects malformed email", () => {
  const result = validateEmail("not-an-email");
  expect(result.error).toBe("Invalid email format");
});

Now all 7 lines are executed, yielding 100% line coverage.

In Python, using coverage.py:

# validator.py
import re

def validate_email(email):
    if not email:
        return {"valid": False, "error": "Email is required"}

    pattern = r'^[^\s@]+@[^\s@]+\.[^\s@]+$'
    if not re.match(pattern, email):
        return {"valid": False, "error": "Invalid email format"}

    return {"valid": True, "error": None}

Running the test suite under coverage.py and generating a report:

coverage run -m pytest test_validator.py
coverage report --show-missing

With a full test suite in place, the report shows complete coverage:

Name            Stmts   Miss  Cover   Missing
----------------------------------------------
validator.py        8      0   100%

When coverage is incomplete, the Missing column lists the line numbers of the uncovered lines, giving developers a precise roadmap for where to add tests.
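
To see the Missing column in action, suppose only the valid-email test had run. The two error-return statements would go unexecuted, and the report (line numbers assume the file layout shown above) would read:

Name            Stmts   Miss  Cover   Missing
----------------------------------------------
validator.py        8      2    75%   6, 10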

Why It Matters

Line coverage is the baseline metric that prevents entire functions and code paths from going untested. Without any coverage measurement, it is common for significant portions of a codebase to have no tests at all — developers write tests for the features they are building and forget about error handlers, utility functions, and edge case logic that they assume works correctly.

Line coverage makes untested code visible. A coverage report showing that the payment retry logic has 0% line coverage is a clear signal that no one has ever tested what happens when a payment fails and the system attempts a retry. This kind of gap is invisible without coverage data and is often where production incidents originate.

For teams starting their testing journey, line coverage provides the simplest on-ramp. It requires no special tooling beyond a standard coverage runner, produces immediately understandable reports, and gives developers a clear and measurable goal to work toward. As testing maturity grows, teams can layer branch coverage and mutation testing on top of line coverage to deepen their analysis.

Line coverage can also serve as a rough proxy for maintainability. Modules with consistently high line coverage tend to be better structured, because code that is easy to test is usually modular, with clear interfaces and separated concerns. The act of achieving high line coverage often nudges developers toward better architecture, even when that was not the original intent.

Best Practices

  • Use line coverage as a starting point, not the finish line. Line coverage tells you what code was executed, but not whether it was validated. Combine it with branch coverage and meaningful assertions for a complete picture.
  • Set pragmatic thresholds. Most teams find 80% line coverage to be a reasonable minimum for new code. Going below that suggests significant testing gaps; demanding 100% creates diminishing returns (see the configuration sketch after this list).
  • Display coverage inline in your editor. Tools like the Coverage Gutters extension for VS Code highlight uncovered lines directly in the editor, giving developers real-time feedback as they write code and tests.
  • Track per-file coverage, not just global averages. A codebase with 85% global coverage might have critical modules at 40%. Report coverage per file and per module to surface these hidden risks.
  • Exclude trivially untestable code. Configuration files, generated code, and framework entry points often cannot be meaningfully unit-tested. Exclude them from coverage calculations with tool-specific ignore directives, as in the sketch below, to keep the metric focused on code that benefits from testing.
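
As a concrete companion to the threshold and exclusion bullets, here is a minimal Jest configuration sketch, assuming Jest as the test runner; the glob patterns are placeholders for a project's real layout:

// jest.config.js: a minimal sketch (file paths are hypothetical)
module.exports = {
  collectCoverage: true,
  // Keep generated code out of the coverage denominator entirely.
  collectCoverageFrom: ["src/**/*.js", "!src/generated/**"],
  // Fail the test run when overall line coverage dips below the 80% floor.
  coverageThreshold: {
    global: { lines: 80 },
  },
};

For individual statements that genuinely cannot be exercised, Istanbul honors inline directives such as /* istanbul ignore next */, and coverage.py offers the analogous # pragma: no cover comment.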

Common Mistakes

  • Treating line coverage as proof of test quality. A test that calls a function but does not check the result achieves line coverage without testing anything (see the example after this list). Always pair line coverage with assertion-based validation.
  • Optimizing for the coverage number instead of for safety. When a team’s goal is “reach 90% coverage,” developers sometimes write trivial or meaningless tests to hit the target. Coverage should guide testing priorities, not define them.
  • Ignoring the gap between line and branch coverage. A module at 90% line coverage and 55% branch coverage has many conditional paths that were never tested. If you only track line coverage, these gaps remain invisible.
  • Not accounting for dead code. Unreachable code that can never execute still lowers line coverage percentages. Remove dead code from the codebase rather than writing tests that artificially reach it through reflection or other hacks.
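
The first mistake above is easy to demonstrate with validateEmail from earlier: the test below executes the happy-path lines and earns their line coverage, yet it would still pass if the function returned nonsense.

test("looks thorough, verifies nothing", () => {
  validateEmail("user@example.com"); // executed, but nothing is asserted
});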
