Continuous Delivery
A practice where code changes are automatically built, tested, and prepared for release to production, requiring only a manual approval step before deployment.
What Is Continuous Delivery?
Continuous Delivery (CD) is a software engineering practice where every code change that passes automated testing is automatically prepared for release to production. The distinguishing characteristic of continuous delivery is that while the process up to deployment is fully automated, the final release to production requires an explicit human decision — a manual approval or button click. This gives teams the confidence of automation with the control of a deliberate release decision.
Continuous delivery was formalized by Jez Humble and David Farley in their 2010 book Continuous Delivery: Reliable Software Releases through Build, Test, and Deploy Automation. The practice builds directly on continuous integration: where CI ensures that code compiles and tests pass, continuous delivery extends the pipeline to ensure that the resulting artifact is deployable at any moment.
The key insight behind continuous delivery is that releasing software should be a business decision, not a technical one. When the pipeline ensures that every build is production-ready, the question shifts from “can we deploy?” to “should we deploy?” Product managers, not engineers, can decide when features reach users — because the technical risk has already been managed by automation.
How It Works
A continuous delivery pipeline extends the CI pipeline with additional stages that validate the artifact in progressively more production-like environments:
```yaml
# Example GitLab CI/CD pipeline with continuous delivery
stages:
  - build
  - test
  - staging
  - production

build:
  stage: build
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA

test:
  stage: test
  script:
    - npm ci
    - npm run test:unit
    - npm run test:integration
    - npm run test:e2e

deploy_staging:
  stage: staging
  script:
    - kubectl set image deployment/myapp myapp=registry.example.com/myapp:$CI_COMMIT_SHA
    - ./scripts/run-smoke-tests.sh
  environment:
    name: staging

deploy_production:
  stage: production
  script:
    - kubectl set image deployment/myapp myapp=registry.example.com/myapp:$CI_COMMIT_SHA
  environment:
    name: production
  when: manual  # <-- This is the key: manual trigger
```
The pipeline typically flows through these stages:
- Build — Compile the application and package it as a deployable artifact (Docker image, JAR file, binary).
- Automated testing — Run unit, integration, and end-to-end tests against the artifact.
- Deploy to staging — Deploy the artifact to a staging environment that mirrors production.
- Acceptance testing — Run smoke tests, performance tests, and potentially manual exploratory testing in staging.
- Deploy to production — With a manual trigger, deploy the validated artifact to production.
The critical requirement is that the same artifact flows through every environment. You never rebuild for production — you promote the exact binary that was tested in staging. This ensures that what you tested is what you deploy.
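As a sketch, promotion can be expressed as a pipeline job that re-tags the already-tested image instead of rebuilding it. The job name, registry, and tag scheme below are illustrative assumptions, not part of a real GitLab configuration:

```yaml
# Hypothetical "promote" job: the image validated in staging is pulled and
# re-tagged for production, so the tested bytes are exactly the deployed bytes.
promote_production:
  stage: production
  script:
    - docker pull registry.example.com/myapp:$CI_COMMIT_SHA
    - docker tag registry.example.com/myapp:$CI_COMMIT_SHA registry.example.com/myapp:production
    - docker push registry.example.com/myapp:production
  when: manual
```

Because only tags change, the image digest stays identical across environments, which is easy to verify when auditing what is actually running.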
Why It Matters
Continuous delivery transforms deployment from a high-risk, high-stress event into a routine, low-risk operation. When every build is production-ready and the pipeline has validated it through multiple environments, the act of deploying becomes trivially simple. This has several important consequences.
First, it dramatically reduces the risk of each release. Because changes are deployed in small increments — often just a single feature or fix — the blast radius of any problem is limited. If something goes wrong, the cause is easy to identify because the diff is small. Rolling back is straightforward because the previous artifact is still available.
Second, continuous delivery enables rapid response to urgent needs. When a critical security vulnerability is disclosed, a team practicing continuous delivery can push a fix to production in minutes rather than days. The pipeline is already in place, the automation is already trusted, and the deployment process is already practiced.
Third, it aligns engineering and business goals. Product teams can ship features on their own schedule — launching on a specific date, coordinating with marketing campaigns, or rolling out to specific customer segments — without being constrained by engineering release cycles.
Best Practices
- Invest in environment parity. Staging environments should be as close to production as possible in terms of configuration, data volume, and network topology. Differences between environments are the primary source of “works in staging, fails in production” problems.
- Automate everything except the final approval. Every step between a code commit and a production-ready artifact should run without human intervention. Manual steps in the middle of the pipeline create bottlenecks and introduce inconsistency.
- Version your artifacts immutably. Tag every build artifact with the commit SHA or a unique version number. Never overwrite an artifact — if you need to roll back, you should be able to deploy any previous version instantly.
- Include non-functional testing. Beyond correctness tests, run performance benchmarks, security scans, and accessibility checks in the pipeline. These automated checks prevent regressions that functional tests miss.
- Practice deploying frequently. A pipeline that is only used once a month will not be trusted when you need to deploy urgently. Deploy to production at least weekly — ideally daily — to keep the process exercised and the team confident.
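Non-functional checks like these can be sketched as extra jobs in the pipeline's test stage. The tool choices here (Trivy for image scanning, k6 for load testing) and the script path are illustrative assumptions, not requirements of continuous delivery:

```yaml
# Hypothetical jobs adding security and performance gates to the test stage.
security_scan:
  stage: test
  script:
    # Fail the pipeline if the image has high/critical vulnerabilities.
    - trivy image --exit-code 1 --severity HIGH,CRITICAL registry.example.com/myapp:$CI_COMMIT_SHA

performance_test:
  stage: test
  script:
    # Run a load-test scenario; the script path is a placeholder.
    - k6 run tests/load/checkout.js
```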
Common Mistakes
- Confusing continuous delivery with continuous deployment. These are related but distinct practices. Continuous delivery means every change is deployable; continuous deployment means every change is deployed. The difference is the manual approval gate. Teams should choose the model that fits their risk tolerance and regulatory requirements.
- Rebuilding artifacts for each environment. If you build separately for staging and production, you are not testing what you deploy. Build once, then promote the same artifact through every environment. Environment-specific configuration should be injected at deploy time, not baked into the artifact.
- Neglecting the rollback process. Continuous delivery is only safe if rollbacks are fast and reliable. Test your rollback procedure regularly — do not wait for an actual incident to discover that your rollback script has a bug or that your database migration is irreversible.
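A rollback can be rehearsed as just another manual deployment of an older immutable artifact. This sketch assumes the Kubernetes setup from the example pipeline, with `PREV_SHA` supplied as a pipeline variable (a hypothetical name):

```yaml
# Hypothetical rollback job: redeploys a previously pushed artifact by SHA.
rollback_production:
  stage: production
  script:
    - kubectl set image deployment/myapp myapp=registry.example.com/myapp:$PREV_SHA
    # Block until the rollout completes (or fail after the timeout).
    - kubectl rollout status deployment/myapp --timeout=120s
  environment:
    name: production
  when: manual
```

Running this job in a drill, not just during incidents, keeps the rollback path as trusted as the deploy path.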