Testcontainers vs Docker Compose for Integration Testing
Integration tests need real infrastructure — a real Postgres, a real Redis, a real Kafka. The question isn't whether to use Docker, it's how to manage the Docker lifecycle in your tests.
Two approaches dominate: Docker Compose for external service management, and Testcontainers for programmatic container lifecycle inside your tests. Both work, both have trade-offs, and many teams use both for different scenarios.
How They Work
Docker Compose defines services in a YAML file. You docker compose up before tests, run tests, then docker compose down. Services are external to your test process — they start once and stay running for the entire test run.
Testcontainers starts containers programmatically from within test code. Each test class or test suite starts its own containers, which are automatically stopped and removed when tests finish. Containers are part of the test process lifecycle.
```shell
# Docker Compose approach
# 1. docker compose up -d postgres
# 2. Run tests that connect to postgres://localhost:5432/testdb
# 3. docker compose down
```

```python
# Testcontainers approach
from testcontainers.postgres import PostgresContainer

def test_user_repository():
    with PostgresContainer("postgres:15") as postgres:
        # Container starts here, gets a random port
        connection_url = postgres.get_connection_url()
        repo = UserRepository(connection_url)
        user = repo.create({"name": "Alice", "email": "alice@example.com"})
        assert user.id is not None
        found = repo.find_by_email("alice@example.com")
        assert found.name == "Alice"
    # Container stops and is removed here
```
Key Comparison
| Aspect | Testcontainers | Docker Compose |
|---|---|---|
| Container lifecycle | Per-test or per-suite | External to tests |
| Port management | Dynamic (random ports) | Static (configured) |
| Setup required | Library install only | Compose file + script |
| Test isolation | High (fresh container per suite) | Lower (shared state) |
| Speed | Slower (container startup per run) | Faster (starts once) |
| CI complexity | Lower | Higher (manage startup/teardown) |
| Parallelism | Natural (each worker gets its own) | Manual coordination required |
| Resource usage | Higher (multiple containers) | Lower (shared containers) |
| Reuse across tests | Opt-in (reuse flag) | Always |
When Docker Compose Wins
Fast Local Development
If you're running tests dozens of times per hour during development, Docker Compose's "start once, run many" model is faster:
```yaml
# docker-compose.test.yml
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: testdb
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "test"]
      interval: 1s
      timeout: 3s
      retries: 10
  redis:
    image: redis:7
    ports:
      - "6379:6379"
```

```shell
# Start once in the morning
docker compose -f docker-compose.test.yml up -d

# Run tests fast (no container startup overhead)
pytest tests/integration/ -v

# Tear down at end of day
docker compose -f docker-compose.test.yml down
```
Container startup for Postgres can take 2-5 seconds. If your test suite runs 100 times per day during active development, those seconds add up.
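Test code then points at the static port directly. A minimal sketch of a shared connection-string helper, assuming an optional `TEST_DATABASE_URL` override and the credentials from the compose file above:

```python
import os

def compose_database_url() -> str:
    """Connection string for the compose-managed Postgres.

    Defaults match docker-compose.test.yml; TEST_DATABASE_URL lets
    CI or teammates with a different setup override them.
    """
    return os.environ.get(
        "TEST_DATABASE_URL",
        "postgresql://test:test@localhost:5432/testdb",
    )
```

Every test run reuses the same already-running database, which is what makes the inner loop fast.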
Complex Multi-Service Dependencies
When your tests require a specific topology — service A talks to service B which writes to service C — Docker Compose's networking makes this straightforward:
```yaml
services:
  kafka:
    image: confluentinc/cp-kafka:7.5.0
    depends_on: [zookeeper]
  zookeeper:
    image: confluentinc/cp-zookeeper:7.5.0
  schema-registry:
    image: confluentinc/cp-schema-registry:7.5.0
    depends_on: [kafka]
  app:
    build: .
    depends_on: [kafka, schema-registry]
    environment:
      KAFKA_BOOTSTRAP_SERVERS: kafka:9092
```
Testcontainers can handle this, but networks and container dependencies require more code.
Shared State Required
Some integration tests intentionally share state — testing that messages published by test A are consumed by test B. With Docker Compose, all tests share the same broker. With Testcontainers per-suite isolation, you'd need to coordinate explicitly.
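Even with a shared broker, most teams still namespace topics or keys per run so yesterday's leftovers don't leak into today's assertions. A sketch, with hypothetical helper names:

```python
import uuid

# One namespace per test session: tests that want to share state use
# the same RUN_ID; a new run gets a fresh namespace automatically.
RUN_ID = uuid.uuid4().hex[:8]

def namespaced(name: str, run_id: str = RUN_ID) -> str:
    """Prefix a topic or key with the current run's namespace."""
    return f"test-{run_id}-{name}"
```

Tests A and B both publish and consume under `namespaced("orders")`, sharing state within a run without colliding with previous runs.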
When Testcontainers Wins
CI/CD Without Pre-Setup
Docker Compose requires your CI pipeline to manage the container lifecycle around test execution. Testcontainers is self-contained:
```yaml
# CI with Testcontainers — nothing extra needed
jobs:
  test:
    runs-on: ubuntu-latest  # Docker is pre-installed
    steps:
      - uses: actions/checkout@v4
      - run: pip install pytest testcontainers
      - run: pytest tests/integration/
      # Testcontainers handles all container lifecycle
```

```yaml
# CI with Docker Compose — need to manage startup/teardown
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker compose -f docker-compose.test.yml up -d
      - run: docker compose -f docker-compose.test.yml exec app sh -c 'until pg_isready; do sleep 1; done'
      - run: pytest tests/integration/
      - if: always()
        run: docker compose -f docker-compose.test.yml down
```
Testcontainers eliminates the health-check polling and the always-cleanup teardown step.
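If you do stay on the compose route, the readiness polling can at least live in the test suite instead of the pipeline. A sketch of a generic TCP wait (the host, port, and timeout values are illustrative):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP port accepts connections, or give up after timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False
```

A session-scoped pytest fixture can call `wait_for_port("localhost", 5432)` once before the first test touches the database.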
Parallel Test Execution
When running tests in parallel (multiple pytest workers, multiple CI jobs), Testcontainers containers are isolated by design:
```python
# Each pytest-xdist worker gets its own container — no port conflicts
# worker 1: postgres on random port 54321
# worker 2: postgres on random port 54876
# worker 3: postgres on random port 55012
@pytest.fixture(scope="session")
def postgres_container():
    with PostgresContainer("postgres:15") as container:
        yield container
```
With Docker Compose and static ports, parallel workers fight over port 5432. You either serialize tests or add per-worker port offsets, which is messy.
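For comparison, the compose-side workaround is typically an offset derived from the pytest-xdist worker id (pytest-xdist exports it as `PYTEST_XDIST_WORKER`; the base port is an assumption):

```python
import os
import re

def worker_port(base_port: int = 5432) -> int:
    """Map xdist worker ids (gw0, gw1, ...) onto distinct host ports.

    The main process, which has no worker id, keeps the base port.
    """
    worker = os.environ.get("PYTEST_XDIST_WORKER", "")
    match = re.fullmatch(r"gw(\d+)", worker)
    if match:
        return base_port + int(match.group(1))
    return base_port
```

You then need a matching compose file that publishes one Postgres per offset, which is exactly the coordination Testcontainers avoids.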
Test Isolation and Reproducibility
Each Testcontainers test suite gets a fresh database. No cleanup between tests, no leaked state from previous runs:
```python
# Every test class gets a clean Postgres
class TestUserRepository:
    @pytest.fixture(autouse=True)
    def setup_db(self):
        with PostgresContainer("postgres:15") as container:
            self.db = create_engine(container.get_connection_url())
            Base.metadata.create_all(self.db)
            yield
        # Clean state — container is removed, no cleanup SQL needed

class TestOrderRepository:
    @pytest.fixture(autouse=True)
    def setup_db(self):
        with PostgresContainer("postgres:15") as container:
            # Completely separate Postgres, no interference from TestUserRepository
            ...
```
With Docker Compose, you need TRUNCATE scripts or transaction rollbacks to reset state between test classes.
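The compose-side reset is usually an autouse fixture that truncates every table after each test. A sketch of building the reset statement (the table names and fixture wiring are illustrative):

```python
def truncate_statement(tables: list[str]) -> str:
    """One TRUNCATE that clears rows, restarts sequences, and follows FKs."""
    return f"TRUNCATE TABLE {', '.join(tables)} RESTART IDENTITY CASCADE;"

# An autouse fixture would then run it between tests, e.g.:
# @pytest.fixture(autouse=True)
# def reset_db(engine):
#     yield
#     with engine.begin() as conn:
#         conn.execute(text(truncate_statement(["users", "orders"])))
```

The list of tables has to be kept in sync with the schema by hand, which is one more thing the fresh-container approach sidesteps.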
Testcontainers Examples by Language
Python
```python
from testcontainers.postgres import PostgresContainer
from testcontainers.redis import RedisContainer
from testcontainers.kafka import KafkaContainer

# Postgres
with PostgresContainer("postgres:15") as postgres:
    engine = create_engine(postgres.get_connection_url())

# Redis
with RedisContainer("redis:7") as redis:
    client = Redis(host=redis.get_container_host_ip(),
                   port=redis.get_exposed_port(6379))

# Kafka
with KafkaContainer("confluentinc/cp-kafka:7.5.0") as kafka:
    bootstrap_servers = kafka.get_bootstrap_server()
```
Java (Spring Boot)
```java
@Testcontainers
@SpringBootTest
class OrderServiceIntegrationTest {

    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15")
            .withDatabaseName("testdb")
            .withUsername("test")
            .withPassword("test");

    @DynamicPropertySource
    static void configureProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    @Autowired
    private OrderService orderService;

    @Test
    void createOrderPersistsToDatabase() {
        Order order = orderService.create(new CreateOrderRequest(1L, 2));
        assertThat(order.getId()).isNotNull();
    }
}
```
Node.js
```typescript
import { PostgreSqlContainer, StartedPostgreSqlContainer } from "@testcontainers/postgresql";

describe("UserRepository", () => {
  let postgresContainer: StartedPostgreSqlContainer;

  beforeAll(async () => {
    postgresContainer = await new PostgreSqlContainer("postgres:15")
      .withDatabase("testdb")
      .start();
  }, 30000); // 30 second timeout for container start

  afterAll(async () => {
    await postgresContainer.stop();
  });

  it("creates and retrieves users", async () => {
    const pool = new Pool({ connectionString: postgresContainer.getConnectionUri() });
    const repo = new UserRepository(pool);
    const user = await repo.create({ name: "Alice", email: "alice@test.com" });
    expect(user.id).toBeDefined();
  });
});
```
Performance Optimization: Container Reuse
Testcontainers supports container reuse across test runs to approach Docker Compose-level speed:

```python
# Enable reuse (the container persists between runs); with_reuse() is an
# assumption here — the exact opt-in call varies by language and version
with PostgresContainer("postgres:15").with_reuse() as container:
    ...
```

Or via environment variable:

```shell
TESTCONTAINERS_REUSE_ENABLE=true pytest tests/integration/
```

With reuse enabled, Testcontainers only starts a new container if none with a matching configuration exists. Subsequent runs skip the startup cost.
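A common refinement is to make reuse opt-in locally but always off in CI, where fresh containers are the point. A sketch of gating the flag (the `CI` and `TESTCONTAINERS_REUSE_ENABLE` variable names are assumptions matching the example above):

```python
import os

def reuse_enabled() -> bool:
    """Reuse containers on developer machines, never on CI runners."""
    if os.environ.get("CI"):
        return False
    return os.environ.get("TESTCONTAINERS_REUSE_ENABLE", "").lower() == "true"
```

Test fixtures can consult this helper when constructing containers, so developers keep the fast inner loop while CI stays fully isolated.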
Recommendation
Use Testcontainers as your default. It's simpler to set up in CI, naturally handles parallel execution, and produces more isolated and reproducible tests.
Use Docker Compose when:
- Local development speed is critical and your test suite is run frequently
- You have a complex multi-container topology that's hard to express in code
- Some tests require shared state across test classes
- Your team's CI environment doesn't support Docker-in-Docker (DinD)
Many mature codebases use both: Docker Compose for a persistent local dev environment and Testcontainers for CI and cross-platform compatibility.