Fagan Inspection
The original formal software inspection process created by Michael Fagan at IBM in 1976, following six phases: planning, overview, preparation, inspection meeting, rework, and follow-up.
What Is a Fagan Inspection?
A Fagan inspection is the pioneering formal software inspection methodology developed by Michael Fagan while working at IBM’s Systems Development Division in Kingston, New York. First published in 1976 in the IBM Systems Journal, Fagan’s paper “Design and Code Inspections to Reduce Errors in Program Development” introduced the idea that code review could be transformed from an informal, inconsistent activity into a structured engineering process with predictable, measurable results.
Fagan’s key insight was that most software defects are introduced during design and coding, yet the industry’s primary defect-detection strategy — testing — only catches problems after they have been built into the system. By applying systematic inspection at the point of creation, teams could intercept defects when they were cheapest to fix. His data from IBM showed that inspections removed 60-90% of defects before the first test was ever run, dramatically reducing rework, shortening schedules, and improving final product quality.
The Fagan inspection is distinguished from other review methods by its strict adherence to six phases, clearly defined participant roles, and emphasis on collecting metrics that drive continuous process improvement. It is not simply a meeting where developers look at code together. It is a controlled engineering process with entry criteria, exit criteria, and quantitative feedback loops.
How It Works
The Fagan inspection process consists of six sequential phases, each serving a distinct purpose.
1. Planning. The moderator determines that the code artifact is ready for inspection by verifying entry criteria. The code must compile cleanly, conform to coding standards, and be accompanied by relevant design documentation. The moderator selects inspectors, schedules the sessions, and distributes materials. A well-planned inspection budgets approximately 150 lines of code per hour of meeting time.
2. Overview. The author presents the artifact to the inspection team, providing background on the feature’s purpose, architectural context, and any design decisions that inform the implementation. For teams already familiar with the codebase, this phase may be abbreviated or skipped.
3. Preparation. Each inspector studies the artifact independently, typically spending one to two hours reviewing the code against checklists, specifications, and their own expertise. Inspectors note potential defects, questions, and observations. Fagan emphasized that this phase is non-negotiable: the quality of the inspection meeting depends entirely on the depth of individual preparation.
4. Inspection meeting. The team assembles with the moderator facilitating. A designated reader — not the author — paraphrases the code segment by segment. This forces the team to engage with the logic rather than passively listening to the author’s explanation. As each segment is read, inspectors raise issues from their preparation notes. The moderator classifies and records each defect by type (logic error, data handling, interface mismatch, standards violation) and severity (major or minor). The meeting is strictly time-boxed, typically to two hours. If the artifact is not fully covered, additional sessions are scheduled rather than rushing through the remainder.
5. Rework. The author resolves all logged defects. Each fix is made against the specific defect record, maintaining traceability between the problem identified and the solution implemented.
6. Follow-up. The moderator reviews the author’s fixes to verify that every defect has been addressed and that no new issues have been introduced. If the rework is extensive (typically more than 5% of the artifact changed), a re-inspection is triggered. Once the moderator is satisfied, the inspection is formally closed and metrics are submitted to the organization’s process database.
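The six phases above can be sketched as a minimal data model. Everything here is illustrative scaffolding inferred from this article's description, not an implementation from Fagan's paper; the class and field names, and the 5% re-inspection threshold, are taken from the text.

```python
from dataclasses import dataclass, field
from enum import Enum

class DefectType(Enum):
    # The four defect types named in the article's inspection-meeting phase.
    LOGIC = "logic error"
    DATA = "data handling"
    INTERFACE = "interface mismatch"
    STANDARDS = "standards violation"

class Severity(Enum):
    MAJOR = "major"
    MINOR = "minor"

@dataclass
class Defect:
    location: str           # e.g. file and line where the issue was raised
    dtype: DefectType
    severity: Severity
    resolved: bool = False  # flipped to True during rework, checked at follow-up

@dataclass
class Inspection:
    artifact: str
    loc: int                # lines of code under inspection
    defects: list[Defect] = field(default_factory=list)

    def meets_entry_criteria(self, compiles: bool, follows_standards: bool,
                             has_design_docs: bool) -> bool:
        # Planning: the moderator verifies entry criteria before scheduling.
        return compiles and follows_standards and has_design_docs

    def log_defect(self, defect: Defect) -> None:
        # Inspection meeting: the moderator classifies and records each issue.
        self.defects.append(defect)

    def rework_complete(self) -> bool:
        # Follow-up: every logged defect must be resolved before closure.
        return all(d.resolved for d in self.defects)

    def needs_reinspection(self, lines_changed: int) -> bool:
        # Re-inspect if rework touched more than ~5% of the artifact.
        return lines_changed / self.loc > 0.05
```

The key property the model captures is traceability: every fix is tied to a specific logged defect, and the inspection cannot close until all defects are resolved.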
Fagan defined four key roles in the process. The moderator leads the inspection, ensures procedures are followed, and is responsible for the quality of the process itself. The author created the artifact and answers factual questions but does not defend their code. The reader paraphrases the code to drive the meeting’s pace. The inspector (one or more) brings domain or technical expertise and identifies defects.
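The four roles can be captured as a small enum with a validity check. This is a sketch of the staffing constraints stated above (all roles filled, and the reader is never the author); the function name and mapping shape are assumptions for illustration.

```python
from enum import Enum

class Role(Enum):
    MODERATOR = "moderator"
    AUTHOR = "author"
    READER = "reader"
    INSPECTOR = "inspector"

def valid_assignment(team: dict[Role, str]) -> bool:
    # Every role must be filled, and the designated reader must not be
    # the author (the role exists specifically to counter author bias).
    return set(team) == set(Role) and team[Role.READER] != team[Role.AUTHOR]
```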
Why It Matters
The Fagan inspection is historically significant because it provided the first empirical evidence that structured peer review is one of the most cost-effective quality assurance practices in software engineering. Fagan’s original IBM data showed a 23% increase in coding productivity when inspections were applied, because the time spent on inspection was more than offset by reduced debugging, testing, and field support.
Subsequent studies across hundreds of organizations confirmed his findings. The Software Engineering Institute at Carnegie Mellon integrated inspections into the Capability Maturity Model (CMM), making them a marker of organizational process maturity. Safety-critical industries — avionics (DO-178C), medical devices (IEC 62304), and automotive software (ISO 26262) — adopted Fagan-style inspections as a standard compliance mechanism.
Even in modern agile environments where pull-request-based reviews have largely replaced meeting-driven inspections, the principles Fagan established remain influential. The concepts of preparation before review, separation of defect identification from defect resolution, role-based participation, and metrics-driven improvement all trace directly back to his 1976 paper.
Best Practices
- Use defect-type checklists. Fagan recommended classifying defects by type to identify systemic patterns. Maintain a checklist of common defect categories (boundary conditions, null handling, concurrency, resource leaks) and train inspectors to check for each one.
- Measure and calibrate. Track inspection rate, defect density, and preparation time for every session. Fagan’s methodology relies on feedback loops: if defect density is low, either the code is excellent or the inspectors are not looking hard enough. Metrics help distinguish between the two.
- Train your moderators. The moderator role is the linchpin of the process. A weak moderator allows the meeting to devolve into design debates or personal criticism. Invest in formal moderator training and certification.
- Respect the rate limit. Fagan’s data showed that inspecting more than 200 lines of code per hour significantly reduces defect detection. Resist pressure to speed up the process. A fast inspection that misses critical defects is worse than no inspection at all.
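The rate and density checks described in these practices can be expressed as a small calibration helper. The 200 LOC/hour ceiling comes from the figures quoted in this article; the minimum-density threshold of 5 defects per KLOC is a placeholder an organization would calibrate from its own process database, not a number from Fagan's data.

```python
def inspection_rate(loc: int, meeting_hours: float) -> float:
    """Lines of code covered per hour of inspection meeting."""
    return loc / meeting_hours

def defect_density(defects_found: int, loc: int) -> float:
    """Defects found per thousand lines of code (KLOC)."""
    return defects_found / (loc / 1000)

def calibration_flags(loc: int, meeting_hours: float, defects_found: int,
                      max_rate: float = 200.0,
                      min_density: float = 5.0) -> list[str]:
    # Flag sessions that moved too fast or found suspiciously few defects.
    flags = []
    if inspection_rate(loc, meeting_hours) > max_rate:
        flags.append("rate too high: detection drops past ~200 LOC/hour")
    if defect_density(defects_found, loc) < min_density:
        flags.append("low density: excellent code, or under-prepared inspectors")
    return flags
```

A low-density flag is deliberately ambiguous, as the text notes: the metric alone cannot tell excellent code from a weak inspection, so flagged sessions warrant a look at preparation-time records.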
Common Mistakes
- Eliminating the preparation phase. Under schedule pressure, teams sometimes skip individual preparation and attempt to review code cold during the meeting. This transforms the inspection into a group reading session and typically reduces defect detection rates by 50% or more. Preparation is the foundation of the entire methodology.
- Letting the author read the code. When the author narrates their own code, they unconsciously smooth over ambiguities, fill in missing context, and guide attention away from weak spots. The designated reader role exists specifically to prevent this bias. Having a third party paraphrase the code exposes assumptions and gaps that the author would otherwise mask.
- Treating inspections as optional under deadline pressure. The data consistently shows that time spent on inspections is returned many times over through reduced testing and debugging effort. Cutting inspections to “save time” typically extends the project schedule by introducing defects that are far more expensive to find and fix downstream.