Smoke Testing

A quick set of basic tests that verify the most critical functions of an application work after a build or deployment, acting as a sanity check.

What Is Smoke Testing?

Smoke testing is a quick, high-level set of tests that verifies whether the most critical functions of an application work after a new build or deployment. The goal is not to test everything — it is to determine whether the build is stable enough to warrant further, more detailed testing. If the smoke tests fail, the build is rejected immediately, saving the team from wasting time on deeper testing of a fundamentally broken system.

The term originates from hardware engineering, where a new circuit board would be powered on to see if it literally started smoking. If it did not, the basic electrical connections were sound, and more detailed testing could proceed. In software, the analogy holds: smoke tests check whether the application starts up, the critical pages load, the API responds, and the core user flows complete without crashing.

Smoke testing is sometimes called “build verification testing” or “sanity testing,” though purists distinguish between these terms. A smoke test is typically a predefined set of critical path tests that runs automatically after every deployment. A sanity test is often a more targeted, ad-hoc check performed after a specific fix or change. In practice, most teams use the terms interchangeably.

How It Works

Smoke tests target the highest-impact, most critical functionality of the application. They should be fast (under five minutes), stable (never flaky), and broad (covering all major subsystems at a surface level). A smoke test suite for a typical web application might include:

  • The application starts and the home page loads.
  • User authentication works (login and logout).
  • The primary data flow completes (create, read, update, delete).
  • Payment processing is reachable (not necessarily charging a real card).
  • External service integrations respond (database, cache, message queue).

Here is a smoke test suite using Playwright in JavaScript:

// smoke.test.js
const { test, expect } = require("@playwright/test");

test.describe("Smoke Tests", () => {
  test("home page loads successfully", async ({ page }) => {
    const response = await page.goto("https://staging.example.com");
    expect(response.status()).toBe(200);
    await expect(page.locator("h1")).toBeVisible();
  });

  test("user can log in", async ({ page }) => {
    await page.goto("https://staging.example.com/login");
    await page.fill('[data-testid="email"]', "smoke@example.com");
    await page.fill('[data-testid="password"]', "test-password");
    await page.click('[data-testid="login-button"]');
    await expect(page.locator('[data-testid="dashboard"]')).toBeVisible();
  });

  test("API health check responds", async ({ request }) => {
    const response = await request.get(
      "https://staging.example.com/api/health"
    );
    expect(response.status()).toBe(200);
    const body = await response.json();
    expect(body.status).toBe("healthy");
    expect(body.database).toBe("connected");
  });

  test("critical data flow works", async ({ request }) => {
    const response = await request.post(
      "https://staging.example.com/api/items",
      {
        data: { name: "Smoke Test Item", category: "test" },
        headers: { Authorization: "Bearer smoke-test-token" },
      }
    );
    expect(response.status()).toBe(201);
  });
});
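
The /api/health endpoint exercised in the third test above must be provided by the application itself. The sketch below shows one possible shape, using Flask as an assumed framework; the response fields mirror the test's assertions, and the connectivity check is a hypothetical placeholder for whatever the real application would query:

# health.py -- an illustrative health endpoint for the smoke tests above
from flask import Flask, jsonify

app = Flask(__name__)

def database_is_connected():
    # Hypothetical check: a real application would run a trivial query
    # (e.g., SELECT 1) against its database connection pool.
    return True

@app.route("/api/health")
def health():
    db_ok = database_is_connected()
    payload = {
        "status": "healthy" if db_ok else "unhealthy",
        "database": "connected" if db_ok else "disconnected",
    }
    # A 503 tells both the smoke test and load balancers the build is unusable.
    status = 200 if db_ok else 503
    return jsonify(payload), status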

An API-level smoke test suite in Python using pytest and requests:

# test_smoke.py
import requests

BASE_URL = "https://staging.example.com"
TIMEOUT = 10  # seconds; a hung request should fail fast, not stall the pipeline

class TestSmoke:
    def test_health_endpoint(self):
        response = requests.get(f"{BASE_URL}/api/health", timeout=TIMEOUT)
        assert response.status_code == 200
        data = response.json()
        assert data["status"] == "healthy"

    def test_authentication_flow(self):
        response = requests.post(f"{BASE_URL}/api/auth/login", json={
            "email": "smoke@example.com",
            "password": "test-password",
        }, timeout=TIMEOUT)
        assert response.status_code == 200
        assert "token" in response.json()

    def test_database_connectivity(self):
        response = requests.get(f"{BASE_URL}/api/items?limit=1", timeout=TIMEOUT)
        assert response.status_code == 200

    def test_static_assets_served(self):
        response = requests.get(f"{BASE_URL}/assets/app.js", timeout=TIMEOUT)
        assert response.status_code == 200
Smoke tests are typically wired into the deployment pipeline and run automatically after each deployment:

# deployment pipeline
steps:
  - name: Deploy to staging
    run: deploy --env staging
  - name: Run smoke tests
    run: npx playwright test smoke.test.js
  - name: Deploy to production
    run: deploy --env production
    if: success()

If smoke tests fail after deployment to staging, the pipeline halts and production is never touched.

Why It Matters

Smoke testing serves as an automated gatekeeper that prevents broken builds from progressing through the deployment pipeline. Without smoke tests, a build with a misconfigured database connection string, a missing environment variable, or a broken dependency could be deployed to production and cause an outage.

The speed of smoke testing is its primary advantage. A full regression suite might take 30 minutes or longer. Smoke tests complete in under five minutes, providing near-instant feedback on whether a deployment is fundamentally sound. This rapid feedback loop allows teams to deploy frequently — multiple times per day — with confidence that critical functionality has been verified.

Smoke tests are especially valuable in continuous deployment pipelines where every merge to main triggers an automatic deployment. In these environments, there is no manual QA gate. Smoke tests act as the automated equivalent, catching catastrophic failures before they reach users.

Smoke testing also reduces mean time to recovery (MTTR) during incidents. When a deployment causes smoke tests to fail, the team knows immediately which deployment introduced the problem. They can roll back within minutes rather than spending hours diagnosing production errors reported by users.
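
The same pipeline pattern can extend to automatic rollback. A sketch in the style of the pipeline snippet above (the step names are illustrative, and deploy --rollback stands in for whatever rollback mechanism the platform actually provides):

# production smoke tests with automatic rollback
steps:
  - name: Deploy to production
    run: deploy --env production
  - name: Run production smoke tests
    run: npx playwright test smoke.test.js
  - name: Roll back on failure
    run: deploy --env production --rollback
    if: failure()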

Best Practices

  • Keep smoke tests fast. The entire suite should complete in under five minutes. If smoke tests take too long, developers will be tempted to skip them or deploy without waiting for results.
  • Test breadth, not depth. Smoke tests should touch every major subsystem (authentication, database, API, frontend) with a single, simple check. Deep validation of each subsystem is the job of integration and regression tests.
  • Make smoke tests environment-aware. Smoke tests should work against any environment (development, staging, production) by reading the base URL and credentials from environment variables or configuration files (see the configuration sketch after this list).
  • Never let smoke tests be flaky. A flaky smoke test that sometimes fails for no reason will be ignored, defeating the entire purpose. Smoke tests must be the most reliable tests in the suite. Remove or fix any smoke test that produces intermittent failures.
  • Include infrastructure checks. Beyond application functionality, verify that databases are reachable, caches are responding, and external service connections are active. Many production incidents are caused by infrastructure misconfiguration, not code bugs.
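
As a sketch of the environment-aware practice above, the hardcoded BASE_URL in the pytest suite can be lifted into a fixture that reads from the environment (the variable names SMOKE_BASE_URL and SMOKE_TOKEN are illustrative, not a standard):

# conftest.py -- environment-aware configuration for the pytest suite above
import os

import pytest

@pytest.fixture(scope="session")
def smoke_config():
    # Defaults target staging; export the variables to point anywhere else.
    return {
        "base_url": os.environ.get("SMOKE_BASE_URL", "https://staging.example.com"),
        "token": os.environ.get("SMOKE_TOKEN", ""),
    }

A test then takes smoke_config as a parameter instead of hardcoding the URL, so the same suite can run against staging in the pipeline and against production after a release.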

Common Mistakes

  • Making smoke tests too comprehensive. Smoke tests that try to cover every feature become slow and fragile. They cease to be smoke tests and become a poorly maintained regression suite. Keep the scope narrow: only the most critical user paths.
  • Not running smoke tests after every deployment. Some teams run smoke tests only before releases, missing the deployments where configuration changes, environment updates, or infrastructure migrations break the application.
  • Using production data in smoke tests. Smoke tests that create, modify, or delete real production data can cause data corruption or leak test artifacts to real users. Use dedicated test accounts, feature flags, or a staging environment with synthetic data (see the cleanup sketch after this list).
  • Ignoring smoke test failures. When smoke tests fail and the team deploys anyway because “it is probably a flaky test,” they have negated the entire value of smoke testing. Investigate every failure, and fix or remove tests that are genuinely unreliable.
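
One way to avoid leaking test artifacts, per the production-data bullet above, is to delete whatever a smoke test creates, even when an assertion fails. A sketch building on the data-flow test from earlier; it assumes the items API returns an id and supports DELETE, which the original examples do not confirm:

# test_smoke_cleanup.py -- a self-cleaning variant of the data-flow check
import requests

BASE_URL = "https://staging.example.com"
TIMEOUT = 10
HEADERS = {"Authorization": "Bearer smoke-test-token"}

def test_critical_data_flow_cleans_up():
    # Create a clearly labeled test item.
    created = requests.post(
        f"{BASE_URL}/api/items",
        json={"name": "Smoke Test Item", "category": "test"},
        headers=HEADERS,
        timeout=TIMEOUT,
    )
    assert created.status_code == 201
    item_id = created.json()["id"]  # assumes the API echoes back an id
    try:
        # Verify the item can be read back.
        fetched = requests.get(
            f"{BASE_URL}/api/items/{item_id}", headers=HEADERS, timeout=TIMEOUT
        )
        assert fetched.status_code == 200
    finally:
        # Delete the item even if an assertion failed, so nothing leaks.
        requests.delete(
            f"{BASE_URL}/api/items/{item_id}", headers=HEADERS, timeout=TIMEOUT
        )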
