DevOps & CI/CD

Pipeline

An automated sequence of stages — build, test, analyze, deploy — that code changes flow through from commit to production in a CI/CD system.

What Is a Pipeline?

A pipeline in the context of software development is an automated workflow that processes code changes through a defined sequence of stages — typically building, testing, analyzing, and deploying — from the moment a developer commits code to the point it reaches production. Each stage in the pipeline performs a specific task and acts as a quality gate: if a stage fails, the pipeline stops, preventing defective code from progressing further.

The concept borrows from manufacturing, where an assembly line moves a product through discrete stations, each adding value or performing quality checks. In software, the “product” is a code change, and the “stations” are automated steps like compilation, unit testing, security scanning, and deployment. Just as a manufacturing defect caught early on the assembly line is cheaper to fix than one caught after the product ships, a bug caught in the test stage of a pipeline is far cheaper to fix than one discovered in production.

Pipelines are the backbone of CI/CD systems. Every major CI/CD platform — GitHub Actions, GitLab CI/CD, Jenkins, CircleCI, Azure DevOps — organizes automation around the pipeline concept. While the configuration syntax varies between platforms, the underlying model is consistent: define stages, define the jobs within each stage, specify dependencies between them, and let the system execute them automatically.
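To illustrate how the same model carries across platforms, here is a minimal sketch of an equivalent two-job workflow in GitHub Actions syntax (job names and scripts are illustrative, not taken from any particular project):

```yaml
# .github/workflows/ci.yml — a minimal sketch, not a production workflow
name: ci
on: [push]            # trigger: run on every push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
  test:
    needs: build      # dependency: runs only after build succeeds
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
```

The keywords differ (jobs and `needs` here, stages there), but the underlying model is the same: named units of work with declared dependencies, executed automatically on a trigger.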

How It Works

A pipeline is typically defined in a configuration file stored alongside the application source code. This “pipeline as code” approach means the pipeline definition is version-controlled, reviewable, and reproducible — the same advantages that version control brings to application code.

Here is a representative pipeline definition:

# .gitlab-ci.yml
stages:
  - build
  - test
  - security
  - deploy

build:
  stage: build
  script:
    - npm ci
    - npm run build            # produces dist/, captured as an artifact below
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  artifacts:
    paths:
      - dist/

unit-tests:
  stage: test
  script:
    - npm ci
    - npm run test:unit -- --coverage
  coverage: '/Statements\s+:\s+(\d+\.?\d*)%/'

integration-tests:
  stage: test
  script:
    - npm ci
    - npm run test:integration
  services:
    - postgres:15              # database container started alongside the job

security-scan:
  stage: security
  script:
    - trivy image $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - npm audit --omit=dev

deploy-production:
  stage: deploy
  script:
    - kubectl set image deployment/app app=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

Pipelines execute according to a dependency graph. Stages run sequentially by default — all jobs in the “build” stage must complete before “test” jobs begin. Within a stage, jobs can run in parallel, which is why unit-tests and integration-tests above would execute simultaneously, reducing total pipeline duration.
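Strict stage ordering can also be relaxed. In GitLab, for example, the `needs` keyword lets a job start as soon as the specific jobs it depends on finish, turning the pipeline into an explicit directed acyclic graph. A sketch, reusing the job names from the example above:

```yaml
# security-scan starts as soon as build finishes,
# without waiting for the whole test stage to complete
security-scan:
  stage: security
  needs: ["build"]
  script:
    - trivy image $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
```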

Key pipeline concepts include:

  • Triggers — Events that start the pipeline (code push, pull request, schedule, API call).
  • Stages — Logical groupings of jobs that run in sequence.
  • Jobs — Individual units of work within a stage.
  • Artifacts — Files produced by one job and consumed by later jobs (compiled binaries, test reports).
  • Caching — Reusing data between pipeline runs to speed up execution (dependency caches, build caches).
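The artifact and cache concepts look like this in GitLab syntax (paths and report names are illustrative):

```yaml
unit-tests:
  stage: test
  cache:
    key:
      files:
        - package-lock.json   # cache is invalidated when the lockfile changes
    paths:
      - node_modules/         # reused between pipeline runs to skip re-downloading
  script:
    - npm ci
    - npm run test:unit
  artifacts:
    when: always              # keep reports even when tests fail
    reports:
      junit: junit.xml        # test report consumed by later jobs and the UI
```

The distinction matters: artifacts flow forward within a single pipeline, while caches persist across pipeline runs and are purely an optimization.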

Why It Matters

Pipelines provide the automation fabric that makes modern software delivery possible. Without pipelines, teams must rely on manual processes for building, testing, and deploying — processes that are slow, error-prone, and inconsistent. A developer might forget to run a linter. A deployment might use the wrong configuration file. A critical test might be skipped because someone was in a hurry. Pipelines eliminate this class of human error by executing the same steps, in the same order, every time.

Pipelines also create a shared definition of “done.” When the pipeline passes, the team has objective evidence that the code compiles, tests pass, security scans are clean, and the artifact is deployable. This shared standard replaces subjective judgments about readiness and gives teams confidence to deploy frequently.

From a compliance and audit perspective, pipelines produce an immutable record of every step executed, every test result, and every approval. This audit trail is invaluable for teams operating in regulated industries where demonstrating that controls were followed is a legal requirement.

Best Practices

  • Keep pipelines fast. Pipeline duration directly impacts developer productivity. Target under 10 minutes for the core build-and-test cycle. Use parallelization to run independent jobs simultaneously, caching to avoid redundant work, and incremental builds to skip unchanged components.

  • Fail fast. Order pipeline stages so that the quickest, most likely-to-fail checks run first. Linting and unit tests should precede lengthy integration tests and deployment steps. This ensures developers get feedback as quickly as possible.
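One way to apply this ordering in GitLab syntax, assuming the project defines a lint script (the stage names are illustrative):

```yaml
stages:
  - lint          # seconds: fails fast on style and static-analysis issues
  - test          # minutes: unit tests
  - integration   # longest: full-stack tests run only if everything else passed

lint:
  stage: lint
  script:
    - npm ci
    - npm run lint
```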

  • Store pipeline definitions in version control. Pipeline configuration should live in the same repository as the application code. This ensures that changes to the pipeline are reviewed, tested, and versioned alongside the code they support.

  • Use pipeline templates for consistency. In organizations with many repositories, define shared pipeline templates that standardize build, test, and deployment processes. This reduces duplication and ensures that every project meets the same quality bar.
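In GitLab, for instance, shared templates can be pulled in with `include` (the repository and file paths below are hypothetical):

```yaml
# .gitlab-ci.yml in an application repository
include:
  - project: platform/ci-templates   # hypothetical shared-templates repository
    ref: main
    file: /templates/node-service.yml

# jobs defined in the template can be customized locally
unit-tests:
  variables:
    NODE_ENV: test
```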

  • Monitor pipeline health. Track pipeline success rates, average duration, and queue wait times. Degradation in these metrics signals growing technical debt in the pipeline itself and should be addressed proactively.

Common Mistakes

  • Building monolithic pipelines. A single pipeline that handles everything — build, test, deploy, notify, generate docs — becomes fragile and slow. Decompose pipelines into focused stages and use pipeline triggers to coordinate between them when needed.
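As a sketch of that decomposition, GitLab supports parent-child pipelines, where a job delegates a slice of the work to a separately defined pipeline (the child file path is hypothetical):

```yaml
# parent pipeline delegates documentation work to a child pipeline
generate-docs:
  stage: build
  trigger:
    include: docs/.gitlab-ci.yml   # hypothetical child pipeline definition
```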

  • Ignoring pipeline maintenance. Pipelines are code, and like all code, they accumulate technical debt. Outdated dependencies, deprecated actions, and accumulated workarounds slow down pipelines and introduce flakiness. Schedule regular pipeline maintenance the same way you schedule application refactoring.

  • Hardcoding environment-specific values. Pipelines that contain hardcoded URLs, credentials, or environment names are fragile and insecure. Use environment variables, secrets management systems, and parameterized configurations to keep pipelines portable and secure.
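A sketch of the parameterized approach in GitLab syntax; the variable names are illustrative, and secrets such as KUBE_TOKEN would be defined as masked CI/CD variables in the project settings, never in the file itself:

```yaml
deploy:
  stage: deploy
  environment: production
  script:
    # DEPLOY_SERVER and KUBE_TOKEN are injected from the CI/CD variable store
    - kubectl --token="$KUBE_TOKEN" --server="$DEPLOY_SERVER" set image deployment/app app=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
```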
