Cyclomatic Complexity
A quantitative measure of independent paths through a function's source code, calculated by counting decision points like if, while, and for statements.
What Is Cyclomatic Complexity?
Cyclomatic complexity is a software metric that quantifies the structural complexity of a function or method by counting the number of linearly independent paths through its source code. Introduced by Thomas J. McCabe in 1976, it provides an objective, numeric measure of how complex a piece of code is, independent of its length or the programming language used.
The metric is calculated by counting decision points — if, else if, while, for, case, catch, &&, ||, and ternary operators — and adding one. A function with no branches has a cyclomatic complexity of 1 (a single straight-line path). Each decision point adds one additional path, increasing the complexity score.
Cyclomatic complexity is one of the most widely used code quality metrics in the industry. It appears in code analysis tools like SonarQube, ESLint, and pylint, and is used during code review to flag functions that may be too complex to understand, test, or maintain reliably. While no single number can capture the full picture of code quality, cyclomatic complexity provides a useful, objective starting point for identifying problematic code.
How It Works
The calculation is straightforward: count the number of decision points in a function and add one.
```python
# Cyclomatic Complexity: 1 (no decision points)
def greet(name):
    return f"Hello, {name}"

# Cyclomatic Complexity: 3 (two decision points: if, elif)
def classify_age(age):
    if age < 18:
        return "minor"
    elif age < 65:
        return "adult"
    else:
        return "senior"

# Cyclomatic Complexity: 6 (high — harder to test and maintain)
def process_order(order):
    if not order.items:                        # +1
        return Error("Empty order")
    if order.total > 10000:                    # +1
        if not order.customer.is_verified:     # +1
            return Error("Verification required")
    for item in order.items:                   # +1
        if item.quantity <= 0:                 # +1
            return Error(f"Invalid quantity: {item.name}")
    return Success(order)
```
The formal mathematical definition uses graph theory. If you model a function as a control flow graph where nodes represent statements and edges represent flow between them:
Cyclomatic Complexity = E - N + 2P
Where:
E = number of edges in the control flow graph
N = number of nodes in the control flow graph
P = number of connected components (usually 1 for a single function)
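As a worked example, the graph formula agrees with the simple counting method. For the classify_age function above, one reasonable control flow graph (node and edge counts depend slightly on modeling conventions) has two decision nodes, three return nodes, and one exit node:

```python
# Hand-counted control flow graph for classify_age above:
#   nodes: 2 decisions + 3 returns + 1 exit          -> N = 6
#   edges: 2 out of each decision, 1 out of each ret -> E = 7
E, N, P = 7, 6, 1
complexity = E - N + 2 * P
print(complexity)  # 3, matching "2 decision points + 1"
```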
In practice, most developers and tools use the simpler counting method: start at 1, add 1 for each if, else if, for, while, case, catch, &&, ||, and ?:.
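The counting method is simple enough to automate. Below is a minimal sketch of a counter built on Python's `ast` module; it handles the common decision points (`if`, `while`, `for`, ternaries, `except` handlers, and `and`/`or` operands) but omits cases a production tool like pylint or radon would cover, such as comprehensions and `match` statements:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: start at 1, add 1 per decision point."""
    complexity = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.While, ast.For,
                             ast.IfExp, ast.ExceptHandler)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' adds one branch per extra operand
            complexity += len(node.values) - 1
    return complexity

SOURCE = """
def classify_age(age):
    if age < 18:
        return "minor"
    elif age < 65:
        return "adult"
    else:
        return "senior"
"""
print(cyclomatic_complexity(SOURCE))  # 3
```

Note that the `elif` contributes via a nested `If` node, so the if/elif chain counts as two decision points, giving 1 + 2 = 3.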
Industry thresholds:
| Complexity | Risk Level | Recommendation |
|---|---|---|
| 1-10 | Low | Simple, easy to test |
| 11-20 | Moderate | Consider refactoring |
| 21-50 | High | Refactoring strongly recommended |
| 51+ | Very High | Untestable, refactor immediately |
Why It Matters
Cyclomatic complexity directly correlates with several important quality characteristics.
Testability. The cyclomatic complexity of a function defines the minimum number of test cases needed for complete branch coverage. A function with complexity 10 requires at least 10 test cases. Functions with complexity above 20 become prohibitively expensive to test thoroughly, meaning they likely contain untested paths that harbor undiscovered bugs.
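To make this concrete, here is a minimal sketch of full branch coverage for the classify_age example above (complexity 3), using three plain assertions, one per independent path:

```python
# classify_age restated from the earlier example (complexity 3)
def classify_age(age):
    if age < 18:
        return "minor"
    elif age < 65:
        return "adult"
    else:
        return "senior"

# Three test cases, one per linearly independent path
assert classify_age(10) == "minor"   # path 1: first branch taken
assert classify_age(30) == "adult"   # path 2: elif branch taken
assert classify_age(70) == "senior"  # path 3: else branch taken
```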
Defect probability. Research by McCabe and others found a strong correlation between cyclomatic complexity and the probability of defects. Functions with complexity above 10 are significantly more likely to contain bugs than simpler functions. This relationship has been validated across multiple studies and programming languages.
Maintenance difficulty. Complex functions are harder to read, harder to understand, and harder to modify without introducing regressions. When a developer encounters a function with complexity 30, they must mentally track 30 possible execution paths to understand all the ways the code can behave — a cognitive burden that frequently leads to mistakes.
Code review effectiveness. Reviewers are more likely to miss bugs in highly complex functions. The cognitive effort required to follow all paths through complex code exceeds what a human reviewer can reliably maintain. Flagging functions that exceed a complexity threshold during code review focuses human attention where it is most needed.
Refactoring priority. Cyclomatic complexity provides an objective basis for prioritizing refactoring work. Instead of debating which code “feels” complex, teams can sort functions by complexity score and address the worst offenders first.
Best Practices
- Set a team-wide complexity threshold. Most teams use 10 or 15 as the maximum allowed cyclomatic complexity for a single function. Configure your linter or static analysis tool to flag violations automatically.
```jsonc
// ESLint configuration for complexity threshold
{
  "rules": {
    "complexity": ["error", { "max": 10 }]
  }
}
```
- Refactor using Extract Method. The most common technique for reducing cyclomatic complexity is extracting conditional branches and loop bodies into separate, well-named functions. Each extracted function carries its own complexity, but the overall structure becomes easier to understand.
- Replace conditionals with polymorphism. When a function contains a long chain of if/else if or switch statements that dispatch on type, consider using the Strategy pattern or polymorphic method dispatch instead. This distributes complexity across classes rather than concentrating it in one function.
- Use guard clauses to reduce nesting. Early returns for error conditions flatten the function's structure and reduce cyclomatic complexity by eliminating nested else blocks.
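To illustrate Extract Method and guard clauses together, here is one hypothetical refactor of the process_order example from earlier. The Order, Item, Customer, Error, and Success classes are minimal stand-ins for the types the original snippet assumes:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Item:
    name: str
    quantity: int

@dataclass
class Customer:
    is_verified: bool

@dataclass
class Order:
    items: List[Item]
    total: float
    customer: Customer

@dataclass
class Error:
    message: str

@dataclass
class Success:
    order: Order

def validate_items(order: Order) -> Optional[str]:
    # Complexity 3: for (+1), if (+1), plus the base path
    for item in order.items:
        if item.quantity <= 0:
            return f"Invalid quantity: {item.name}"
    return None

def validate_verification(order: Order) -> Optional[str]:
    # Complexity 3: if (+1), short-circuit 'and' (+1)
    if order.total > 10000 and not order.customer.is_verified:
        return "Verification required"
    return None

def process_order(order: Order):
    # Complexity 4: guard clause if (+1), 'or' (+1), if (+1),
    # down from 6 when everything lived in one function
    if not order.items:
        return Error("Empty order")
    error = validate_verification(order) or validate_items(order)
    if error:
        return Error(error)
    return Success(order)
```

The total number of branches is unchanged, but no single function exceeds complexity 4, and each helper can be tested in isolation.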
Common Mistakes
- Treating cyclomatic complexity as the only measure of code quality. A function with complexity 5 can still be poorly written — bad naming, unclear intent, or inappropriate coupling are not captured by this metric. Use cyclomatic complexity alongside other metrics like cognitive complexity, code duplication, and test coverage for a complete picture.
- Reducing complexity through obfuscation. Moving conditionals into ternary expressions, using short-circuit evaluation tricks, or hiding branching behind helper functions does not actually reduce complexity — it just hides it. The goal is to simplify the logic, not to game the metric.
- Applying the same threshold to all code. A parser, state machine, or protocol handler may legitimately require higher complexity than business logic. Allow reasonable exceptions for code that is inherently branchy, and focus enforcement on code where complexity is a sign of poor design rather than inherent problem difficulty.