Container Testing in CI/CD: GitHub Actions + Docker Best Practices
Container-based integration tests are slower in CI than you want them to be — mostly because of image pulls, layer rebuilds, and serial test execution. Fix that with layer caching, GitHub Actions service containers, and parallel job splitting. Here's exactly how.
Key Takeaways
GitHub Actions service containers are the fastest way to add PostgreSQL or Redis to CI. No Testcontainers, no Docker Compose — just declare the service and GitHub wires up the connection.
Layer caching cuts build times by 60-80%. The docker/build-push-action with GitHub Actions cache makes repeated builds fast.
Testcontainers works on GitHub-hosted runners without any extra setup. Docker is pre-installed on ubuntu-latest — just run your tests.
Parallel matrix jobs are the right tool for running multiple test suites simultaneously. Don't make sequential test stages wait for each other.
GitHub Actions Service Containers: The Simplest Approach
For simple integration tests that need PostgreSQL or Redis, GitHub Actions service containers are the fastest path:
```yaml
name: Integration Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15-alpine
        env:
          POSTGRES_DB: testdb
          POSTGRES_USER: testuser
          POSTGRES_PASSWORD: testpass
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 5s
          --health-timeout 5s
          --health-retries 10
      redis:
        image: redis:7-alpine
        ports:
          - 6379:6379
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 5s
          --health-timeout 5s
          --health-retries 10
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - name: Run migrations
        run: npx prisma migrate deploy
        env:
          DATABASE_URL: postgresql://testuser:testpass@localhost:5432/testdb
      - name: Run tests
        run: npm test
        env:
          DATABASE_URL: postgresql://testuser:testpass@localhost:5432/testdb
          REDIS_URL: redis://localhost:6379
```

Service containers start before your job steps and are accessible at localhost on the mapped ports. The health checks in `options` ensure both services are accepting connections before your tests run.
No Docker Compose, no Testcontainers startup overhead — the containers are already running when your test step starts.
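The `options` health checks usually cover readiness, but if a service image lacks a health command you can poll its port yourself before running tests. A minimal bash sketch — `wait_for_port` is a hypothetical helper (it relies on bash's `/dev/tcp` feature), not part of GitHub Actions:

```shell
# Poll a TCP port until it accepts connections or retries are exhausted.
# Returns 0 once the port is open, 1 on timeout.
wait_for_port() {
  local host=$1 port=$2 retries=${3:-30}
  local i=0
  while [ "$i" -lt "$retries" ]; do
    # bash opens /dev/tcp/HOST/PORT as a TCP connection; failure means closed
    if (echo > "/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Hypothetical usage as a workflow step before the test step:
#   wait_for_port localhost 5432 30 || exit 1
```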
Layer Caching for Fast Builds
Without caching, docker build pulls base images and reinstalls dependencies on every CI run. With caching, only changed layers rebuild:
```yaml
name: Build and Test
on: [push, pull_request]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build image with cache
        uses: docker/build-push-action@v5
        with:
          context: .
          target: test
          tags: myapp:test
          load: true
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Run tests in container
        run: |
          docker run --rm \
            -e DATABASE_URL=${{ env.DATABASE_URL }} \
            myapp:test \
            npm test
```

The `type=gha` cache stores layer data in GitHub Actions cache storage. The first run is slow (cold cache); subsequent runs skip unchanged layers. For a typical Node.js app, this cuts build time from 3-4 minutes to 30-60 seconds.
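Caching only pays off if the Dockerfile is ordered so expensive layers change rarely. A minimal sketch of a cache-friendly layout — the `test` stage name matches the `target: test` in the workflow above, but the file layout and base image are assumptions about your project:

```dockerfile
# Dependency layers first: cached until package*.json changes
FROM node:20-alpine AS base
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

# Source changes only invalidate layers from here down
FROM base AS test
COPY . .
CMD ["npm", "test"]
```

With this ordering, editing application code reuses the cached `npm ci` layer; only a lockfile change forces a dependency reinstall.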
Testcontainers in CI: Zero Configuration
If you're already using Testcontainers locally, CI requires zero extra configuration:
```yaml
name: Testcontainers Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest  # Docker pre-installed
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          java-version: '21'
          distribution: 'temurin'
      - name: Run tests
        run: ./mvnw verify
        # Testcontainers starts containers automatically — nothing else needed
```

ubuntu-latest includes Docker out of the box. Testcontainers connects to the Docker daemon, starts your containers, runs the tests, and cleans up. The output in CI looks identical to a local run.
For self-hosted runners, verify Docker is installed and the runner user has Docker socket access:
```shell
# On the runner host
sudo usermod -aG docker runner
```

Parallel Test Execution with Matrix
Long-running test suites should split across parallel jobs:
```yaml
name: Parallel Test Suite
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        suite: [unit, integration, e2e]
      fail-fast: false  # Don't cancel other suites if one fails
    services:
      postgres:
        image: postgres:15-alpine
        env:
          POSTGRES_DB: testdb
          POSTGRES_USER: testuser
          POSTGRES_PASSWORD: testpass
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 5s --health-timeout 5s --health-retries 10
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - name: Run ${{ matrix.suite }} tests
        run: npm run test:${{ matrix.suite }}
        env:
          DATABASE_URL: postgresql://testuser:testpass@localhost:5432/testdb
```

All three suites start simultaneously. Total CI time equals the longest suite, not the sum.
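A matrix can also shard a single slow suite across jobs instead of splitting by suite name. One sketch, assuming each matrix job receives a shard index: pick every Nth test file, so the shards are disjoint and cover everything. The helper name `shard_files` and the `*.test.js` layout are illustrative, not from the workflow above:

```shell
# Select shard $1 of $2 from a newline-separated file list on stdin.
# File i goes to shard (i mod total), so shards partition the list.
shard_files() {
  index=$1
  total=$2
  i=0
  while IFS= read -r f; do
    if [ $((i % total)) -eq "$index" ]; then
      printf '%s\n' "$f"
    fi
    i=$((i + 1))
  done
}

# Example: shard 0 of 2 over four test files
printf '%s\n' a.test.js b.test.js c.test.js d.test.js | shard_files 0 2
# → a.test.js
#   c.test.js
```

In a workflow this could feed the runner, e.g. `npm test -- $(ls tests/*.test.js | shard_files ${{ matrix.shard }} 2)`, with `shard: [0, 1]` in the matrix.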
Docker Compose in CI
When you need full multi-service environments that service containers can't handle:
```yaml
name: Docker Compose Integration Tests
on: [push, pull_request]

jobs:
  compose-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run integration tests
        run: |
          docker compose -f docker-compose.test.yml \
            up --build --abort-on-container-exit \
            --exit-code-from tests
      - name: Collect logs on failure
        if: failure()
        run: docker compose -f docker-compose.test.yml logs
      - name: Teardown
        if: always()
        run: docker compose -f docker-compose.test.yml down -v
```

The `--exit-code-from tests` flag makes the step fail if your test container exits non-zero. The `if: always()` teardown runs even on failure.
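The workflow assumes a `docker-compose.test.yml` with a service named `tests` (the name `--exit-code-from` refers to). A minimal sketch — the images, commands, and environment values are assumptions to adapt to your app:

```yaml
# docker-compose.test.yml (sketch)
services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: testdb
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass
    healthcheck:
      test: ["CMD", "pg_isready"]
      interval: 5s
      retries: 10
  tests:
    build: .
    command: npm test
    environment:
      # Inside the Compose network, the hostname is the service name
      DATABASE_URL: postgresql://testuser:testpass@postgres:5432/testdb
    depends_on:
      postgres:
        condition: service_healthy
```

The `condition: service_healthy` gate plays the same role as the `options` health checks in service containers: tests don't start until PostgreSQL is ready.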
Caching Docker Compose Images
Docker Compose pulls images fresh on each CI run by default. Cache them:
```yaml
- name: Cache Docker layers
  uses: actions/cache@v4
  with:
    path: /tmp/.docker-cache
    key: docker-${{ runner.os }}-${{ hashFiles('docker-compose.test.yml') }}
    restore-keys: |
      docker-${{ runner.os }}-
- name: Load cached images
  run: |
    if [ -d /tmp/.docker-cache ]; then
      ls /tmp/.docker-cache/*.tar 2>/dev/null | xargs -I{} docker load -i {}
    fi
- name: Run tests
  run: docker compose -f docker-compose.test.yml up --build --abort-on-container-exit
- name: Save images to cache
  if: always()
  run: |
    mkdir -p /tmp/.docker-cache
    docker save postgres:15-alpine redis:7-alpine -o /tmp/.docker-cache/deps.tar
```

Collecting Test Results
Export JUnit XML from your test containers for GitHub to display:
```yaml
- name: Run tests
  run: |
    docker run --rm \
      -v "$(pwd)/test-results:/app/test-results" \
      myapp:test \
      npm test -- --reporter=junit --output-file=/app/test-results/results.xml
- name: Publish test results
  uses: mikepenz/action-junit-report@v4
  if: always()
  with:
    report_paths: 'test-results/*.xml'
    check_name: 'Test Results'
    fail_on_failure: true
```

GitHub displays pass/fail counts directly in the PR — no need to dig through logs.
Security: Don't Expose Secrets in Docker Builds
A common CI mistake: baking secrets into Docker images via build args:
```dockerfile
# ❌ Secret ends up in image layer history
ARG API_KEY
RUN curl -H "Authorization: $API_KEY" https://api.example.com/setup
```

Use build secrets for sensitive values instead:
```yaml
# GitHub Actions with Docker Buildx secrets
- name: Build with secret
  uses: docker/build-push-action@v5
  with:
    secrets: |
      api_key=${{ secrets.API_KEY }}
```

```dockerfile
# ✅ Secret available during build, not in the image
RUN --mount=type=secret,id=api_key \
    API_KEY=$(cat /run/secrets/api_key) \
    curl -H "Authorization: $API_KEY" https://api.example.com/setup
```

Build secrets are mounted only for the duration of the RUN command and are never stored in an image layer.
Complete Production-Ready Pipeline
Combining everything:
```yaml
name: CI Pipeline
on:
  push:
    branches: [main]
  pull_request:

jobs:
  lint-dockerfile:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker run --rm -i hadolint/hadolint < Dockerfile

  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: '20', cache: 'npm' }
      - run: npm ci && npm run test:unit

  integration-tests:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15-alpine
        env: { POSTGRES_DB: testdb, POSTGRES_USER: testuser, POSTGRES_PASSWORD: testpass }
        ports: ['5432:5432']
        options: --health-cmd pg_isready --health-interval 5s --health-retries 10
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: '20', cache: 'npm' }
      - run: npm ci
      - run: npm run test:integration
        env:
          DATABASE_URL: postgresql://testuser:testpass@localhost:5432/testdb

  build-and-scan:
    runs-on: ubuntu-latest
    needs: [unit-tests, integration-tests]
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v5
        with:
          context: .
          tags: myapp:${{ github.sha }}
          load: true
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          severity: CRITICAL,HIGH
          exit-code: '1'
```

Beyond Container Tests
Your CI pipeline confirms the container builds, passes integration tests, and has no critical vulnerabilities. The next question: does the deployed application actually work for users?
HelpMeTest monitors your production application with automated browser tests — verifying login flows, form submissions, and critical user workflows 24/7. When something breaks in production, you get an alert before your users notice. Free plan includes up to 10 tests with 5-minute monitoring intervals.