Mocking
A testing technique that replaces real dependencies (databases, APIs, services) with controlled fake objects to isolate the unit under test.
What Is Mocking?
Mocking is a testing technique where real dependencies — databases, external APIs, file systems, email services, payment gateways — are replaced with controlled, programmable substitutes that simulate the behavior of the real objects. This allows developers to test a unit of code in isolation, without requiring the actual dependency to be available, configured, or responsive.
The term “mock” is often used broadly to describe any test double, but technically there are several types. A mock is a pre-programmed object that verifies it was called with the expected arguments. A stub returns predefined data without verifying how it was called. A spy wraps the real implementation and records calls for later inspection. A fake is a working but simplified implementation (like an in-memory database instead of PostgreSQL). In practice, most developers and frameworks use “mock” as a catch-all term, and tools like Jest, unittest.mock, and Sinon provide all of these capabilities under a single mocking API.
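The four kinds of test double can be seen side by side with Python's unittest.mock. This is a minimal sketch; names like real_tax and FakeUserStore are illustrative, not from any real codebase:

```python
from unittest.mock import MagicMock, call

# Stub: returns canned data, with no verification of how it was called.
stub_db = MagicMock()
stub_db.get_user.return_value = {"id": 1, "name": "Alice"}
assert stub_db.get_user(1)["name"] == "Alice"

# Mock: verifies it was called with the expected arguments.
mock_mailer = MagicMock()
mock_mailer.send("alice@example.com", subject="Welcome")
mock_mailer.send.assert_called_once_with("alice@example.com", subject="Welcome")

# Spy: runs the real implementation but records calls for later inspection.
def real_tax(amount):
    return round(amount * 0.2, 2)

spy_tax = MagicMock(side_effect=real_tax)
assert spy_tax(100) == 20.0             # real logic still executes
assert spy_tax.call_args == call(100)   # the call was recorded

# Fake: a working but simplified implementation.
class FakeUserStore:
    """In-memory stand-in for a real database table."""
    def __init__(self):
        self._rows = {}
    def save(self, user_id, data):
        self._rows[user_id] = data
    def load(self, user_id):
        return self._rows[user_id]

fake = FakeUserStore()
fake.save(1, {"name": "Alice"})
assert fake.load(1) == {"name": "Alice"}
```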
Mocking is fundamental to unit testing. Without mocks, a test for a function that sends emails would actually send emails. A test for a function that charges credit cards would attempt real charges. A test for a function that queries a database would need a running, seeded database instance. Mocks eliminate these dependencies, making tests fast, deterministic, and isolated.
How It Works
Mocking replaces a dependency with a controlled substitute before the test runs, then restores the original after the test completes. The substitute can be configured to return specific values, throw errors, or track calls.
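That replace-then-restore lifecycle is exactly what patching utilities automate. A minimal Python sketch with unittest.mock.patch as a context manager (random.randint is patched here purely for illustration):

```python
import random
from unittest.mock import patch

original = random.randint

# Inside the block, random.randint is replaced with a controlled substitute.
with patch("random.randint", return_value=7) as fake_randint:
    assert random.randint(1, 100) == 7            # always the canned value
    fake_randint.assert_called_once_with(1, 100)  # calls are tracked

# When the block exits, the original function is restored automatically.
assert random.randint is original
```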
Here is a practical example in JavaScript using Jest. Consider a service that fetches user data from an API:
```javascript
// userService.js
const axios = require("axios");

async function getUserProfile(userId) {
  const response = await axios.get(`/api/users/${userId}`);
  const user = response.data;
  return {
    displayName: `${user.firstName} ${user.lastName}`,
    email: user.email,
    memberSince: new Date(user.createdAt).getFullYear(),
  };
}

module.exports = { getUserProfile };
```
Testing this without mocks would require a running API server. With mocks:
```javascript
// userService.test.js
const axios = require("axios");
const { getUserProfile } = require("./userService");

jest.mock("axios");

describe("getUserProfile", () => {
  test("transforms API response into display format", async () => {
    axios.get.mockResolvedValue({
      data: {
        firstName: "Alice",
        lastName: "Johnson",
        email: "alice@example.com",
        createdAt: "2022-03-15T10:00:00Z",
      },
    });

    const profile = await getUserProfile(123);

    expect(profile).toEqual({
      displayName: "Alice Johnson",
      email: "alice@example.com",
      memberSince: 2022,
    });
    expect(axios.get).toHaveBeenCalledWith("/api/users/123");
  });

  test("propagates API errors", async () => {
    axios.get.mockRejectedValue(new Error("Network Error"));
    await expect(getUserProfile(123)).rejects.toThrow("Network Error");
  });
});
```
The same approach in Python using unittest.mock:
```python
# test_user_service.py
import pytest
from unittest.mock import patch, MagicMock

from user_service import get_user_profile

@patch("user_service.requests.get")
def test_transforms_api_response(mock_get):
    mock_get.return_value = MagicMock(
        status_code=200,
        json=lambda: {
            "firstName": "Alice",
            "lastName": "Johnson",
            "email": "alice@example.com",
            "createdAt": "2022-03-15T10:00:00Z",
        },
    )
    profile = get_user_profile(123)
    assert profile["displayName"] == "Alice Johnson"
    assert profile["email"] == "alice@example.com"
    mock_get.assert_called_once_with("/api/users/123")

@patch("user_service.requests.get")
def test_handles_api_failure(mock_get):
    mock_get.side_effect = ConnectionError("Network Error")
    with pytest.raises(ConnectionError):
        get_user_profile(123)
```
Mocks can also simulate specific behaviors like timeouts, partial failures, and varying response times. The tests below assume a variant of getUserProfile that enforces a request timeout and retries transient failures (the basic version shown earlier does neither):
```javascript
test("handles slow API responses gracefully", async () => {
  axios.get.mockImplementation(
    () =>
      new Promise((resolve) =>
        setTimeout(() => resolve({ data: { firstName: "Alice" } }), 5000)
      )
  );
  await expect(getUserProfile(123)).rejects.toThrow("Request timeout");
});

test("retries on transient failures", async () => {
  axios.get
    .mockRejectedValueOnce(new Error("503 Service Unavailable"))
    .mockRejectedValueOnce(new Error("503 Service Unavailable"))
    .mockResolvedValueOnce({ data: { firstName: "Alice", lastName: "J" } });

  const profile = await getUserProfile(123);

  expect(profile.displayName).toBe("Alice J");
  expect(axios.get).toHaveBeenCalledTimes(3);
});
```
Why It Matters
Mocking enables fast, reliable unit tests by removing dependencies on infrastructure and external services. Without mocks, every test that interacts with a database needs a running database. Every test that calls an API needs a responsive server. Every test that sends email needs an SMTP connection. These dependencies make tests slow (network and I/O latency), flaky (the external service might be down), and expensive (infrastructure costs for test environments).
With mocks, unit tests run in milliseconds, require no infrastructure, and produce the same result every time. This speed and reliability enable developers to run tests continuously — after every save, in pre-commit hooks, and in CI — without waiting for external systems.
Mocking also enables testing of error conditions that are difficult or impossible to reproduce with real systems. How does your code behave when the database returns a connection timeout? When the API returns a 500 error? When the file system is full? Mocks let you simulate these scenarios with a single line of configuration, ensuring that error handling logic is tested as thoroughly as happy-path logic.
Testing edge cases in third-party services is another key use case. Stripe’s payment API might return any of 50 error codes, each requiring different handling. Testing all 50 error codes against the real API would be slow, unreliable, and potentially expensive. Mocking the Stripe client lets you test each error code in milliseconds.
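For instance, a thin wrapper around a payment client can be driven through many simulated error codes in one fast loop. This sketch uses a hypothetical ChargeError and charge method, not Stripe's real SDK:

```python
from unittest.mock import MagicMock

class ChargeError(Exception):
    """Hypothetical error carrying a provider error code."""
    def __init__(self, code):
        super().__init__(code)
        self.code = code

def charge_with_handling(client, amount):
    """Maps provider error codes to application-level outcomes."""
    try:
        client.charge(amount)
        return "ok"
    except ChargeError as e:
        if e.code == "card_declined":
            return "ask_for_new_card"
        if e.code == "rate_limited":
            return "retry_later"
        return "fail"

# Each simulated error code is exercised in microseconds, with no network.
for code, expected in [
    ("card_declined", "ask_for_new_card"),
    ("rate_limited", "retry_later"),
    ("expired_card", "fail"),
]:
    client = MagicMock()
    client.charge.side_effect = ChargeError(code)
    assert charge_with_handling(client, 100) == expected
```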
Best Practices
- Mock at the boundary, not everywhere. Mock external dependencies (HTTP clients, database drivers, third-party SDKs) at the edge of your system. Do not mock internal functions or classes — that couples your tests to implementation details.
- Verify mock interactions when relevant. Use expect(mock).toHaveBeenCalledWith(expectedArgs) when the test's purpose is to verify that a dependency was called correctly. Skip interaction verification when the test is about the return value, not the call itself.
- Reset mocks between tests. Use beforeEach(() => jest.clearAllMocks()) or equivalent to prevent mock state from leaking between tests. Leaked mock state is a common source of test flakiness and ordering dependencies.
- Prefer dependency injection over module mocking. When possible, pass dependencies as function parameters or constructor arguments rather than mocking module-level imports. This makes mocks explicit and avoids framework-specific mocking magic.
- Keep mock data realistic. Use data that resembles real API responses, not placeholder values like "test" or 123. Realistic mock data catches format and parsing issues that synthetic data would miss.
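The dependency-injection advice can be sketched in Python: instead of patching a module-level import, the function accepts its HTTP client as a parameter, so the test passes a substitute explicitly. The http parameter and this variant of get_user_profile are illustrative:

```python
from unittest.mock import MagicMock

def get_user_profile(user_id, http):
    """The HTTP client is injected, so no module patching is needed."""
    response = http.get(f"/api/users/{user_id}")
    user = response.json()
    return {"displayName": f"{user['firstName']} {user['lastName']}"}

# The test supplies the substitute directly -- explicit and framework-free.
fake_http = MagicMock()
fake_http.get.return_value.json.return_value = {
    "firstName": "Alice",
    "lastName": "Johnson",
}
profile = get_user_profile(123, http=fake_http)
assert profile == {"displayName": "Alice Johnson"}
fake_http.get.assert_called_once_with("/api/users/123")
```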
Common Mistakes
- Over-mocking. When a test mocks every dependency including internal functions, it is testing the mock setup, not the code. If changing the implementation (while preserving behavior) breaks the test, you are mocking too much.
- Mocking what you do not own without a contract. If you mock a third-party API and the real API changes its response format, your tests will still pass while production breaks. Use contract tests or periodically run integration tests against the real API to keep mocks accurate.
- Not testing the real integration. Mocks are substitutes for dependencies in unit tests. They do not replace the need for integration tests that verify real interactions. A mock tells you that your code handles the expected response correctly; an integration test tells you that the real service actually returns that response.
- Creating brittle mock expectations. Tests that assert on the exact number of calls, the exact order of arguments, and every property of every argument are fragile. When the implementation changes to add a new header or reorder parameters, these tests break without any behavioral change. Assert on what matters and ignore what does not.