99 terms covering code review, testing, security, DevOps, and AI in software development.
Testing conducted to determine whether a system meets its business requirements and is acceptable for delivery, typically from the end user's perspective.
AI systems that autonomously plan, execute multi-step tasks, use tools, and make decisions — going beyond prompt-response to complete complex workflows.
AI-powered suggestions that predict and complete code as developers type, offering single-line completions, multi-line blocks, or full functions.
Using artificial intelligence and large language models to automatically analyze code changes and provide review feedback on bugs, security, and quality.
Using an AI coding assistant as a virtual pair programming partner — providing real-time suggestions, answering questions, and helping debug during development.
A positive code review verdict indicating the reviewer has examined the changes and considers them ready to merge, often required from one or more reviewers.
A development approach that expresses test cases in plain language using Given-When-Then format, bridging communication between developers and stakeholders.
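For example, a Given-When-Then scenario maps directly onto an ordinary test; the sketch below uses a made-up ShoppingCart purely to show the structure.

```python
class ShoppingCart:
    """Hypothetical class used only to illustrate Given-When-Then."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_adding_an_item_updates_the_total():
    # Given an empty shopping cart
    cart = ShoppingCart()

    # When the customer adds a book priced at 10
    cart.add("book", 10)

    # Then the cart total is 10
    assert cart.total() == 10
```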
The tendency to spend disproportionate time debating trivial code review issues like naming or formatting while overlooking critical architectural flaws.
A code review comment that identifies a critical issue requiring changes before the pull request can be approved and merged.
A deployment strategy maintaining two identical production environments (blue and green), switching traffic between them for zero-downtime releases.
A code coverage metric that measures whether each branch of every conditional statement (if/else, switch) has been executed during testing.
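As a quick illustration (the function and test are made up), a single test can execute every line yet exercise only one of the two branches created by an if statement:

```python
def apply_discount(price, is_member):
    # The "if" creates two branches: taken and not taken.
    if is_member:
        price = price * 0.9
    return price


def test_member_discount():
    # Runs every line (100% line coverage) but never exercises the
    # branch where is_member is False, so branch coverage is only 50%.
    assert apply_discount(100, True) == 90.0
```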
The practice of automating the process of compiling source code, running tests, packaging artifacts, and preparing software for deployment.
A deployment strategy that routes a small percentage of traffic to a new version before full rollout, allowing teams to detect issues with minimal user impact.
A DORA metric measuring the percentage of deployments that result in a failure requiring remediation — such as a rollback, hotfix, or patch.
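The calculation itself is simple; a rough sketch, assuming deployments and failures have already been counted:

```python
def change_failure_rate(total_deployments, failed_deployments):
    """Percentage of deployments that needed remediation
    (rollback, hotfix, or patch)."""
    if total_deployments == 0:
        return 0.0
    return failed_deployments / total_deployments * 100


# Example: 3 of 40 deployments this month required a rollback or hotfix.
print(change_failure_rate(40, 3))  # 7.5
```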
A set of code changes grouped for review and submission, used primarily in Perforce and Google's internal tools as an alternative to pull requests.
Continuous Integration and Continuous Delivery/Deployment — automatically building, testing, and deploying code changes through a pipeline.
Code that is easy to read, understand, and maintain — following clear naming, single responsibility, minimal complexity, and self-documenting patterns.
A metric measuring how frequently code is rewritten or modified, calculated as lines added, deleted, or changed over a period. High churn signals instability.
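A minimal sketch of the calculation, assuming per-commit line counts are available (the dictionary shape is made up):

```python
def code_churn(commits):
    """Total lines added, deleted, or modified across a set of commits."""
    return sum(c["added"] + c["deleted"] + c["modified"] for c in commits)


# Churn for one module over a sprint:
print(code_churn([
    {"added": 120, "deleted": 45, "modified": 30},
    {"added": 15, "deleted": 90, "modified": 10},
]))  # 310
```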
A measurement of how much source code is executed during testing, typically reported as a percentage of lines, branches, or functions covered.
Identical or very similar code blocks appearing in multiple places in a codebase, increasing maintenance burden and the risk of inconsistent bug fixes.
A developer or team automatically assigned as a required reviewer for specific files or directories, defined in a CODEOWNERS file.
A systematic examination of source code by one or more developers to find bugs, improve quality, and share knowledge before changes are merged.
A structured list of criteria that reviewers verify during code review, covering correctness, security, performance, readability, and test coverage.
A surface-level indicator of a deeper problem in the code — not necessarily a bug, but a pattern that suggests the code may be difficult to maintain or extend.
A semi-formal review meeting where the code author presents their changes to a group of peers, explaining the design decisions and implementation approach.
A configuration file that defines which developers or teams are automatically requested as reviewers when specific files or paths are modified.
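A small example in GitHub's CODEOWNERS syntax (paths and team names are hypothetical); later patterns take precedence, so the catch-all default comes first:

```
# Default owners for anything not matched below.
*                  @example-org/maintainers

# Changes under /api request review from the backend team.
/api/              @example-org/backend-team

# A single file can have an individual owner.
/docs/security.md  @alice
```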
A code metric developed by SonarSource that measures how difficult code is for a human to understand, penalizing nested control flow and non-linear structures.
A practice where code changes are automatically built, tested, and prepared for release to production, requiring only a manual approval step before deployment.
An extension of continuous delivery where every code change that passes all automated tests is automatically deployed to production without manual intervention.
A development practice where developers frequently merge code changes into a shared repository, with each merge automatically triggering builds and tests.
A web security vulnerability that allows attackers to inject client-side scripts into web pages, classified into Stored, Reflected, and DOM-based variants.
Cross-Site Request Forgery — an attack that tricks authenticated users into submitting unintended requests to a web application where they're already logged in.
Common Vulnerabilities and Exposures — a standardized identifier system for publicly known cybersecurity vulnerabilities, maintained by MITRE Corporation.
Common Weakness Enumeration — a community-developed catalog of software and hardware weakness types that can lead to security vulnerabilities.
A quantitative measure of independent paths through a function's source code, calculated by counting decision points like if, while, and for statements.
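A small illustration, using the common "decision points + 1" rule (the function is made up):

```python
def shipping_cost(weight, express, international):
    # Three decision points (the three ifs) give a cyclomatic
    # complexity of 4 under the "decisions + 1" convention.
    cost = 5
    if weight > 10:
        cost += 10
    if express:
        cost *= 2
    if international:
        cost += 15
    return cost
```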
Dynamic Application Security Testing — testing a running application by sending simulated attacks to find vulnerabilities that only manifest during execution.
Source code that is never executed during program runtime — unreachable code, unused variables, uncalled functions, or commented-out code blocks.
A DORA metric measuring how often a team successfully releases to production, indicating organizational velocity and DevOps maturity.
The overall experience developers have while building software, encompassing tools, processes, documentation, and feedback loops that affect productivity.
Four key metrics from the DORA team for measuring software delivery: deployment frequency, lead time, change failure rate, and mean time to recovery.
Testing a complete application flow from the user's perspective, simulating real user interactions across the full technology stack.
The original formal software inspection process created by Michael Fagan at IBM in 1976, following six phases: planning, overview, preparation, inspection, rework, and follow-up.
A technique that lets teams enable or disable features at runtime without deploying new code, enabling gradual rollouts and A/B testing.
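A bare-bones sketch of the pattern; real systems typically read flags from a configuration service rather than an environment variable, and the checkout functions here are placeholders:

```python
import os


def new_checkout_flow(cart):      # hypothetical new implementation
    return f"new checkout for {len(cart)} items"


def legacy_checkout_flow(cart):   # hypothetical existing implementation
    return f"legacy checkout for {len(cart)} items"


def new_checkout_enabled():
    return os.environ.get("FEATURE_NEW_CHECKOUT", "false") == "true"


def checkout(cart):
    # The flag decides the code path at runtime; no redeploy is needed
    # to switch between the two flows.
    if new_checkout_enabled():
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)


print(checkout(["book", "pen"]))
```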
The process of further training a pre-trained language model on a specific dataset or task to improve its performance for a particular domain or use case.
A rigorous, structured code review process with defined roles (moderator, reader, inspector) and documented defect tracking, based on Fagan's methodology.
An automated testing technique that feeds random, unexpected, or malformed data to a program to discover crashes, memory leaks, and security vulnerabilities.
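A toy sketch of the idea (dedicated fuzzers such as AFL or libFuzzer are far more sophisticated); the target function is made up and deliberately buggy:

```python
import random
import string


def parse_flag(text):
    """Toy fuzz target with a hidden bug: it reads text[0] without
    checking for empty input, so "" raises IndexError."""
    if text[0] == "-":
        return text[1:]
    return text


def fuzz(target, iterations=1000):
    # Feed random printable strings to the target and collect inputs
    # that trigger unexpected exceptions.
    failures = []
    for _ in range(iterations):
        data = "".join(random.choices(string.printable, k=random.randint(0, 10)))
        try:
            target(data)
        except Exception as exc:
            failures.append((data, type(exc).__name__))
    return failures


print(fuzz(parse_flag)[:3])  # e.g. [('', 'IndexError'), ...]
```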
An operational model where Git is the single source of truth for code and infrastructure, with automated reconciliation between desired and actual state.
Interactive Application Security Testing — combining static and dynamic analysis by instrumenting an application at runtime to detect vulnerabilities.
Managing and provisioning infrastructure through machine-readable configuration files rather than manual processes, enabling version control.
Testing how multiple components or services work together, verifying data flow and interactions between integrated units.
A neural network with billions of parameters trained on massive text datasets, capable of understanding context, generating code, and complex reasoning.
A DORA metric measuring the time from code commit to successful deployment in production, indicating the efficiency of the delivery pipeline.
The simplest code coverage metric, measuring the percentage of executable lines of code that were run during test execution.
Large Language Model — a deep learning model trained on vast amounts of text data that can understand and generate human language, code, and structured data.
Performance testing that simulates expected user load on a system to measure response times, throughput, and resource use under normal and peak conditions.
A composite metric that combines cyclomatic complexity, lines of code, and Halstead volume into a single score (0-100) indicating how easy code is to maintain.
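Exact formulas differ between tools; one widely cited formulation (the rescaled Visual Studio variant) looks roughly like this:

```python
import math


def maintainability_index(halstead_volume, cyclomatic_complexity, lines_of_code):
    raw = (171
           - 5.2 * math.log(halstead_volume)
           - 0.23 * cyclomatic_complexity
           - 16.2 * math.log(lines_of_code))
    return max(0.0, raw * 100 / 171)  # rescaled to 0-100


print(round(maintainability_index(80, 3, 20)))      # ~58: small, simple code
print(round(maintainability_index(4000, 25, 400)))  # ~15: large, complex code
```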
A DORA metric measuring the average time to restore service after a production incident or failure, indicating an organization's incident response capability.
GitLab's equivalent of a pull request — a proposal to merge code changes that triggers review, CI/CD pipelines, and approval workflows.
A software development approach where the entire team works on the same code at the same time, rotating the driver role while everyone else navigates.
A testing technique that replaces real dependencies (databases, APIs, services) with controlled fake objects to isolate the unit under test.
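A minimal example with Python's unittest.mock; the payment gateway and its charge call are hypothetical:

```python
from unittest.mock import Mock


def charge_order(gateway, order_total):
    # Code under test: depends on an external payment gateway.
    response = gateway.charge(amount=order_total)
    return response["status"] == "ok"


def test_charge_order_succeeds():
    # The mock stands in for the real gateway, so no network call is made.
    gateway = Mock()
    gateway.charge.return_value = {"status": "ok"}

    assert charge_order(gateway, 42) is True
    gateway.charge.assert_called_once_with(amount=42)
```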
A technique that evaluates test quality by introducing small code changes (mutants) and checking whether existing tests catch them.
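A hand-rolled illustration of the idea (tools such as mutmut or PIT generate and run the mutants automatically):

```python
def is_adult(age):
    return age >= 18  # a typical mutant would change ">=" to ">"


def test_is_adult():
    assert is_adult(21) is True  # would NOT catch the ">=" -> ">" mutant
    assert is_adult(18) is True  # catches it: the mutant returns False here
```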
A minor, non-blocking code review comment about style preferences, naming conventions, or formatting that does not affect functionality or correctness.
An informal code review where the author walks a colleague through their changes in person or via screen share, without using formal review tooling.
A regularly updated list of the ten most critical web application security risks, published by the Open Web Application Security Project (OWASP).
A practice where two developers work at one workstation — one writes code (driver) while the other reviews in real time (navigator).
A category of testing focused on measuring and validating an application's speed, responsiveness, stability, and resource usage under various conditions.
An automated sequence of stages — build, test, analyze, deploy — that code changes flow through from commit to production in a CI/CD system.
The volume of code changes in a pull request, measured in lines added, modified, or deleted — a key factor in review quality and speed.
The practice of designing and optimizing input prompts to get more accurate, relevant, and useful outputs from large language models.
A request to merge code changes from one branch into another, providing a structured mechanism for code review, discussion, and approval.
Retrieval-Augmented Generation — an AI architecture that enhances LLM responses by retrieving relevant documents from a knowledge base before generating output.
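A toy sketch of the loop: retrieve relevant text, prepend it to the prompt, then generate. The keyword retriever and the generate() placeholder stand in for a real vector search and model call:

```python
DOCS = [
    "Our API rate limit is 100 requests per minute per key.",
    "Refunds are processed within 5 business days.",
]


def retrieve(question, docs, k=1):
    # Crude keyword-overlap retrieval; real systems use embeddings.
    words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]


def generate(prompt):
    # Placeholder for an LLM API call.
    return f"[model answer grounded in: {prompt[:60]}...]"


def answer(question):
    context = "\n".join(retrieve(question, DOCS))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)


print(answer("What is the API rate limit?"))
```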
Runtime Application Self-Protection — a security technology embedded within an application that detects and blocks attacks in real time during execution.
Restructuring existing code without changing its external behavior to improve readability, maintainability, or performance while reducing technical debt.
Re-running existing tests after code changes to ensure new modifications haven't broken previously working functionality.
A code review action where the reviewer formally blocks a pull request from merging until the author addresses specific issues raised in the review.
An AI technique combining information retrieval with text generation, allowing LLMs to access up-to-date or domain-specific knowledge beyond training data.
Feedback left by a reviewer on specific lines or sections of code during a pull request review, ranging from blocking issues to optional suggestions.
The elapsed time from when a pull request is opened to when it receives its final approval, measuring the speed of the code review process.
The thoroughness of a code review, measured by reviewer engagement — from surface-level formatting checks to deep analysis of logic, security, and design.
The total elapsed time from when a code review is requested to when all feedback is addressed and the change is approved, a key metric for development velocity.
A deployment strategy that gradually replaces instances of the old version with the new version, one at a time or in batches, to minimize downtime.
A superficial code review where the reviewer approves changes without meaningful examination, providing a false sense of quality assurance.
Static Application Security Testing — analyzing source code without executing it to find vulnerabilities like injection flaws and buffer overflows.
Software Composition Analysis — identifying and managing open-source components in a codebase, checking for known vulnerabilities and license issues.
Automated detection of accidentally committed credentials, API keys, tokens, and other sensitive data in source code repositories and commit history.
A quick set of basic tests that verify the most critical functions of an application work after a build or deployment, acting as a sanity check.
A testing approach that captures a snapshot of a component's output and compares it to a stored reference snapshot to detect unintended changes.
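A bare-bones version of the mechanism (frameworks such as Jest handle the snapshot bookkeeping; the component and file name here are made up):

```python
import json
from pathlib import Path

SNAPSHOT = Path("profile_snapshot.json")


def render_profile(user):
    # "Component" under test: produces structured output worth snapshotting.
    return {"title": f"{user['name']} ({user['role']})",
            "badges": sorted(user["badges"])}


def test_render_profile_matches_snapshot():
    output = render_profile({"name": "Ada", "role": "admin",
                             "badges": ["core", "security"]})
    if not SNAPSHOT.exists():
        SNAPSHOT.write_text(json.dumps(output, indent=2))  # record reference
    assert output == json.loads(SNAPSHOT.read_text())      # detect drift
```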
Identifying all open-source and third-party components in a software project to manage security risks, license compliance, and update requirements.
A code injection attack where malicious SQL is inserted into application queries through user input, enabling unauthorized data access or modification.
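A self-contained illustration of the vulnerable pattern and the parameterized fix, using an in-memory SQLite database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 0), ("bob", 1)])

user_input = "nobody' OR '1'='1"

# Vulnerable: input spliced into the SQL string, so the OR clause
# matches every row regardless of the name supplied.
rows = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query treats the input as data, not SQL.
safe_rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(rows)       # [('alice',), ('bob',)] - injection leaked every user
print(safe_rows)  # [] - no user is literally named "nobody' OR '1'='1"
```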
A workflow where large changes are split into a chain of dependent pull requests, each building on the previous one, enabling incremental review.
Protecting software development processes from attacks targeting build systems, package registries, CI/CD pipelines, or third-party components.
The accumulated cost of shortcuts and deferred maintenance in a codebase that will require future effort to fix, analogous to financial debt with interest.
A metric measuring the percentage of source code that is exercised by automated tests, indicating how thoroughly code is tested.
Tests that produce inconsistent results — sometimes passing, sometimes failing — without code changes, often caused by timing issues or shared state.
A testing strategy recommending many unit tests (base), fewer integration tests (middle), and even fewer end-to-end tests (top) for balanced coverage.
A development practice where tests are written before the implementation code, following a red-green-refactor cycle to drive design and ensure coverage.
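The cycle in miniature (the slugify function is made up): write a failing test, make it pass with the simplest code, then refactor under the test's protection.

```python
# Red: the test is written first and fails while slugify does not exist.
def test_slugify_replaces_spaces_and_lowercases():
    assert slugify("Hello World") == "hello-world"


# Green: the simplest implementation that makes the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

# Refactor: with the test as a safety net, clean up the implementation
# (e.g. collapse repeated spaces) without changing observable behavior.
```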
The duration between when a pull request is created and when the first reviewer leaves a comment or approval, indicating team responsiveness.
Testing individual functions or methods in isolation to verify they produce the correct output for given inputs, typically automated and fast to run.
Cross-Site Scripting — a vulnerability where attackers inject malicious scripts into web pages viewed by other users, enabling session hijacking or data theft.