Integration Testing
Testing how multiple components or services work together, verifying data flow and interactions between integrated units.
What Is Integration Testing?
Integration testing verifies that multiple components, modules, or services work correctly when combined. While unit tests validate individual functions in isolation, integration tests check the boundaries between those units — the API calls, database queries, message queues, and inter-service communication that connect a system’s parts into a working whole.
Consider a web application with a REST controller, a service layer, and a database repository. Each of these components might pass its unit tests individually, but integration testing reveals whether they actually work together: Does the controller correctly parse the request body and pass it to the service? Does the service call the repository with the right parameters? Does the repository’s SQL query return the expected data? These interaction points are where most production bugs live.
Integration testing sits in the middle of the test pyramid, between the large base of unit tests and the small top layer of end-to-end tests. Integration tests are slower than unit tests because they involve real dependencies — actual databases, file systems, or HTTP connections — but they are faster and more targeted than full end-to-end tests that exercise the entire application stack.
How It Works
Integration tests typically start up real (or containerized) versions of external dependencies and exercise the actual interactions between components. Unlike unit tests, they do not mock the boundaries — that is the whole point.
Here is an example using Jest to test a Node.js API endpoint that reads from a database:
```javascript
// users.integration.test.js
const request = require("supertest");
const app = require("../app");
const db = require("../db");

beforeAll(async () => {
  await db.migrate.latest();
});

beforeEach(async () => {
  await db("users").del();
  await db("users").insert([
    { id: 1, name: "Alice", email: "alice@example.com" },
    { id: 2, name: "Bob", email: "bob@example.com" },
  ]);
});

afterAll(async () => {
  await db.destroy();
});

describe("GET /api/users", () => {
  test("returns all users from the database", async () => {
    const response = await request(app).get("/api/users");
    expect(response.status).toBe(200);
    expect(response.body).toHaveLength(2);
    expect(response.body[0].name).toBe("Alice");
  });

  test("returns 404 for non-existent user", async () => {
    const response = await request(app).get("/api/users/999");
    expect(response.status).toBe(404);
  });
});
```
And a Python example using pytest with a test database:
```python
# test_user_service.py
import pytest
from app import create_app
from app.db import db, User

@pytest.fixture
def client():
    app = create_app(testing=True)
    with app.test_client() as client:
        with app.app_context():
            db.create_all()
            db.session.add(User(name="Alice", email="alice@test.com"))
            db.session.commit()
        yield client
        with app.app_context():
            db.drop_all()

def test_get_users_returns_seeded_data(client):
    response = client.get("/api/users")
    assert response.status_code == 200
    assert len(response.json) == 1
    assert response.json[0]["name"] == "Alice"

def test_create_user_persists_to_database(client):
    response = client.post("/api/users", json={
        "name": "Bob", "email": "bob@test.com"
    })
    assert response.status_code == 201
    response = client.get("/api/users")
    assert len(response.json) == 2
```
Modern integration testing often uses Docker containers to spin up dependencies. Tools like Testcontainers allow tests to programmatically start a PostgreSQL, Redis, or Kafka instance, run the tests against it, and tear it down afterward.
Why It Matters
Unit tests verify that individual components do the right thing. Integration tests verify that they do the right thing together. This distinction matters because a large share of production bugs occur at the boundaries between components, not within individual functions.
A service might serialize a date as an ISO string while the database expects a Unix timestamp. A controller might return a JSON array while the frontend expects a paginated response envelope. A message consumer might handle single messages correctly but fail when it receives a batch. None of these bugs would be caught by unit tests alone.
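The date-serialization mismatch above can be sketched in a few lines. Both functions below are hypothetical stand-ins for a producer and a consumer; each would pass unit tests written against its own assumptions, but combining them fails at the boundary:

```python
import datetime

# Hypothetical producer: serializes a timestamp as an ISO-8601 string.
def serialize_event(ts: datetime.datetime) -> dict:
    return {"created_at": ts.isoformat()}

# Hypothetical consumer: expects a Unix timestamp (a number, not a string).
def age_in_seconds(event: dict, now: datetime.datetime) -> float:
    return now.timestamp() - event["created_at"]

ts = datetime.datetime(2024, 1, 1, tzinfo=datetime.timezone.utc)
event = serialize_event(ts)

# Unit tests for each function pass; the combination raises at runtime:
try:
    age_in_seconds(event, datetime.datetime.now(datetime.timezone.utc))
except TypeError as exc:
    print(f"integration failure: {exc}")
```

An integration test that calls the producer and consumer together is the cheapest way to surface this class of mismatch before deployment.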
Integration tests also validate configuration, connection strings, schema compatibility, and deployment artifacts. They are the first line of defense against the class of bugs that only appear when real components talk to each other. Teams that skip integration testing often find themselves debugging production issues that could have been caught before deployment.
Best Practices
- Use a dedicated test database. Never run integration tests against production or shared development databases. Use Docker containers or in-memory databases to create isolated, disposable test environments.
- Clean up between tests. Each test should start from a known state. Use transactions that roll back after each test, or truncate tables in a `beforeEach` hook, to prevent test data from leaking between cases.
- Keep integration tests focused. Test one integration boundary at a time. A test that exercises controller-to-database is an integration test. A test that exercises controller-to-database-to-cache-to-queue-to-email is an end-to-end test wearing a disguise.
- Run integration tests in CI. Integration tests are slower than unit tests but should still run on every pull request. Use parallel test runners and containerized dependencies to keep the feedback loop under five minutes.
- Separate integration tests from unit tests. Use different test directories or naming conventions (e.g., `*.integration.test.js`) so that developers can run fast unit tests locally and let CI handle the slower integration suite.
Common Mistakes
- Mocking the very thing you are testing. If you mock the database in a test that is supposed to verify database interactions, you are writing a unit test, not an integration test. Integration tests must use real dependencies to provide value.
- Not managing test data carefully. Tests that depend on pre-existing data in a shared database will fail unpredictably when that data changes. Always seed your own data and clean it up afterward.
- Making integration tests too broad. Tests that exercise the entire application stack from HTTP request to email delivery are slow, flaky, and hard to debug. Keep integration tests scoped to a specific interaction boundary.
- Ignoring test environment parity. Running integration tests against SQLite when production uses PostgreSQL creates a false sense of security. Differences in SQL dialects, constraint handling, and transaction behavior will bite you. Test against the same database engine you deploy to.
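The last point is easy to demonstrate. SQLite uses dynamic "type affinity" and will happily store a string in an INTEGER column, while PostgreSQL rejects the same INSERT with a type error, so a test suite running on SQLite never sees the bug:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, quantity INTEGER)")

# Succeeds on SQLite; PostgreSQL would raise an invalid-input error here.
conn.execute("INSERT INTO orders (quantity) VALUES ('not-a-number')")

value = conn.execute("SELECT quantity FROM orders").fetchone()[0]
print(repr(value))  # the string survived in an INTEGER column
```

This is one concrete reason to run integration tests against the same database engine you deploy to, ideally via a containerized instance.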