Smoke Testing: What It Is, Examples, and How It Differs From Sanity Testing
You just deployed. Before your QA team runs four hours of regression tests, there's one question worth asking: is the build even worth testing? Smoke tests answer that in five minutes — and save everyone from discovering the login page is broken two hours into a full test run.
Key Takeaways
Smoke tests answer one question: is the build fundamentally broken? Login, core navigation, API health, critical data loading — if these fail, stop and fix before running anything else.
Smoke tests should complete in under 5 minutes. Speed is the point — a smoke suite that takes 30 minutes defeats its purpose as an early-warning system.
Run smoke tests before every deployment and after every major merge. They're the cheapest insurance against wasting your team's time on a build that shouldn't have been deployed.
Smoke testing and sanity testing are different things. Smoke tests validate the build; sanity tests verify specific fixes after a change. Both matter, and conflating them leads to insufficient coverage at both levels.
Smoke testing is a preliminary testing technique that verifies the basic, critical functionality of a software build before deeper testing begins. The name comes from hardware testing — engineers would power on a circuit board and check if it smoked. If it did, there was no point testing further.
In software, smoke tests answer: is the build even worth testing? If the login page is broken, the payment flow doesn't load, or the API returns 500 on every request, there's no point running your full regression suite. Smoke tests catch these fundamental failures fast.
This guide covers what smoke testing is, how it differs from sanity testing, what to include in a smoke test suite, examples, and how to integrate smoke tests into CI/CD.
What Is Smoke Testing?
Smoke testing is a rapid, surface-level check that the most fundamental functions of a build are working. It's the first test run after a new build or deployment, designed to catch critical failures before investing time in comprehensive testing.
Key characteristics of smoke tests:
- Fast: completes in under 5 minutes, 10 at the absolute most
- Broad but shallow: covers many features at a high level, not edge cases
- Binary: either the smoke test passes (proceed to deeper testing) or fails (investigate the build)
- Stable: a smoke failure should always mean the build is genuinely broken; flaky smoke tests erode trust in the gate
A smoke test doesn't verify that everything works correctly — it verifies that nothing is catastrophically broken.
What Smoke Testing Is NOT
- A replacement for regression testing (too shallow)
- A comprehensive functional test suite
- A performance or security test
- A test of edge cases or complex user flows
The Smoke Test Gate
Smoke testing creates a quality gate in your testing pipeline:
```
Build → Deploy to staging → Smoke tests run
                              ↓ FAIL: investigate build
                              ↓ PASS: run full regression suite
```
This saves time. If smoke fails (the API is returning 503, the database connection is broken, the login form is missing), there's no value in running 2,000 regression tests. Fix the fundamental problem first.
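The gate itself can be a few lines of glue. A minimal Python sketch of the pass/fail decision (the suite commands are placeholders for whatever runs your smoke and regression tests):

```python
# Minimal gate sketch: run the smoke suite first, and run the
# regression suite only if smoke passes. Commands are placeholders.
import subprocess


def run_suite(cmd: list[str]) -> int:
    """Run a test-suite command and return its exit code."""
    return subprocess.run(cmd).returncode


def gate(smoke_cmd: list[str], regression_cmd: list[str]) -> bool:
    """Return True only if smoke passed and regression then passed too."""
    if run_suite(smoke_cmd) != 0:
        print("Smoke failed: reject the build, skip regression.")
        return False
    return run_suite(regression_cmd) == 0


# Usage (the script's exit code blocks the rest of the pipeline):
#   ok = gate(["pytest", "-m", "smoke"], ["pytest", "tests/regression"])
#   sys.exit(0 if ok else 1)
```

In practice a CI system's job dependencies (shown later in this guide) express the same logic, but the principle is identical: regression never starts unless smoke exits clean.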
Smoke Testing vs Sanity Testing
Smoke testing and sanity testing are frequently confused. They're related but serve different purposes.
| Dimension | Smoke Testing | Sanity Testing |
|---|---|---|
| Purpose | Verify the build is stable enough to test | Verify a specific bug fix or new feature works |
| Scope | Broad — many features, high level | Narrow — specific area of functionality |
| Depth | Shallow — basic checks only | Deep — detailed verification |
| When run | After every new build/deployment | After a specific bug fix or change |
| Who runs it | QA team or automated CI | QA team (often manual) |
| Failure outcome | Reject the build, don't proceed | Investigate the specific feature |
| Duration | 5-10 minutes | 15-30 minutes |
Smoke Testing Example
After deploying a new build, run smoke tests that verify:
- The home page loads (HTTP 200)
- Login works with valid credentials
- The main dashboard renders
- The API health check responds
- A basic product search returns results
Sanity Testing Example
After a bug fix for "checkout fails when discount code contains lowercase letters," run sanity tests that verify:
- Lowercase discount codes work: `summer20` → applies 20% discount ✓
- Uppercase discount codes still work: `SUMMER20` → applies 20% discount ✓
- Mixed case codes work: `Summer20` → applies 20% discount ✓
- Invalid codes still fail: `NOTREAL` → shows error message ✓
Smoke testing = broad check that everything generally works. Sanity testing = deep check that a specific thing works correctly.
What to Include in Smoke Tests
Selection Criteria
Good smoke test candidates are:
- Critical to the user experience: if this breaks, users can't use the product
- High dependency: other features depend on this working (authentication, navigation, API gateway)
- Prone to catastrophic failures: things that break silently or take down everything else
- Easily automated: simple, deterministic behavior with clear pass/fail criteria
Typical Smoke Test Checklist
Infrastructure:
- Application starts and responds to health check endpoint
- Database connection is healthy
- External service dependencies (Redis, message queue) are reachable
- Required environment variables are present
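The environment-variable check above can run in a few lines before anything else does. A minimal Python sketch, where the variable names are illustrative assumptions, not a prescribed list:

```python
# Preflight sketch: report required environment variables that are
# absent or empty before any other smoke check runs.
import os

REQUIRED_VARS = ["DATABASE_URL", "REDIS_URL", "API_BASE_URL"]  # illustrative


def missing_env_vars(required=REQUIRED_VARS, env=os.environ):
    """Return the names of required variables that are absent or empty."""
    return [name for name in required if not env.get(name)]


# Example with a fake environment: REDIS_URL is empty, so it is reported.
fake_env = {"DATABASE_URL": "postgres://db/app", "REDIS_URL": ""}
print(missing_env_vars(["DATABASE_URL", "REDIS_URL"], fake_env))  # ['REDIS_URL']
```

Failing fast here turns a confusing cascade of downstream errors into one unambiguous message about what the environment is missing.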
Authentication:
- Login page loads
- Valid credentials authenticate successfully
- Invalid credentials are rejected
- Session/token is created and persists
Core navigation:
- Home/dashboard loads after login
- Main navigation links resolve (no 404s)
- Critical pages render without JavaScript errors
Primary features (one happy-path check per feature):
- Product/content listing returns data
- Search returns results
- Create/submit primary entity (order, ticket, document)
- View/read primary entity
API (for backend services):
- Health endpoint returns 200
- Authentication endpoint works
- Primary data endpoint returns expected shape
- Error handling: 404 for missing resource, 401 for missing auth
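"Returns expected shape" deserves more than a status-code check, since a 200 can still carry an empty or malformed payload. One way to sketch the shape check as a pure helper (the field names here are illustrative assumptions):

```python
# Sketch of an "expected shape" check for a primary data endpoint:
# a non-empty list of objects carrying the fields smoke cares about.
def has_expected_shape(payload, required_fields=("id", "name")):
    """True if payload is a non-empty list of dicts with the required fields."""
    return (
        isinstance(payload, list)
        and len(payload) > 0
        and all(
            isinstance(item, dict) and all(f in item for f in required_fields)
            for item in payload
        )
    )


print(has_expected_shape([{"id": 1, "name": "Widget"}]))  # True
print(has_expected_shape([]))                             # False: empty list
print(has_expected_shape({"error": "oops"}))              # False: not a list
```

Keeping the check as a pure function also makes it trivial to unit test, independent of any live API.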
Smoke Test Examples
API Smoke Tests (k6)
```javascript
// smoke-test.js
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 1,          // Single user
  duration: '30s', // 30 second check
  thresholds: {
    checks: ['rate==1.0'], // All checks must pass
  },
};

const BASE_URL = __ENV.BASE_URL || 'http://localhost:3000';

export default function () {
  // Health check
  const healthRes = http.get(`${BASE_URL}/health`);
  check(healthRes, {
    'health check passes': (r) => r.status === 200,
    'health check shows healthy': (r) => JSON.parse(r.body).status === 'healthy',
  });

  // Authentication
  const loginRes = http.post(`${BASE_URL}/api/auth/login`, JSON.stringify({
    email: 'smoke@test.example.com',
    password: 'smoke-test-password',
  }), { headers: { 'Content-Type': 'application/json' } });
  check(loginRes, {
    'login succeeds': (r) => r.status === 200,
    'token returned': (r) => JSON.parse(r.body).token !== undefined,
  });

  const token = JSON.parse(loginRes.body).token;
  const authHeaders = { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' };

  // Primary data endpoint
  const productsRes = http.get(`${BASE_URL}/api/products`, { headers: authHeaders });
  check(productsRes, {
    'products endpoint responds': (r) => r.status === 200,
    'products returns array': (r) => Array.isArray(JSON.parse(r.body)),
  });
}
```
E2E Smoke Tests (Playwright)
```typescript
// tests/smoke/smoke.spec.ts
import { test, expect } from '@playwright/test';

test.describe('Smoke Tests', () => {
  test('home page loads', async ({ page }) => {
    await page.goto('/');
    await expect(page).toHaveTitle(/MyApp/);
    await expect(page.locator('nav')).toBeVisible();
  });

  test('login flow works', async ({ page }) => {
    await page.goto('/login');
    await page.fill('[name="email"]', process.env.SMOKE_TEST_EMAIL!);
    await page.fill('[name="password"]', process.env.SMOKE_TEST_PASSWORD!);
    await page.click('[type="submit"]');
    await expect(page).toHaveURL('/dashboard');
  });

  test('dashboard loads after login', async ({ page }) => {
    // Assumes auth fixture handles login
    await page.goto('/dashboard');
    await expect(page.locator('[data-testid="dashboard-header"]')).toBeVisible();
    await expect(page.locator('[data-testid="main-content"]')).toBeVisible();
  });

  test('critical API endpoint responds', async ({ request }) => {
    const response = await request.get('/api/health');
    expect(response.status()).toBe(200);
    const body = await response.json();
    expect(body.status).toBe('ok');
  });
});
```
Minimal pytest Smoke Tests
```python
# tests/smoke/test_smoke.py
import pytest
import httpx

BASE_URL = "http://localhost:8000"


def get_auth_token() -> str:
    """Log in with the dedicated smoke-test account and return its token."""
    response = httpx.post(f"{BASE_URL}/api/auth/login", json={
        "email": "smoke@test.example.com",
        "password": "smoke-test-password",
    })
    response.raise_for_status()
    return response.json()["token"]


@pytest.mark.smoke
def test_health_check():
    response = httpx.get(f"{BASE_URL}/health")
    assert response.status_code == 200
    assert response.json()["status"] == "healthy"


@pytest.mark.smoke
def test_authentication():
    response = httpx.post(f"{BASE_URL}/api/auth/login", json={
        "email": "smoke@test.example.com",
        "password": "smoke-test-password"
    })
    assert response.status_code == 200
    assert "token" in response.json()


@pytest.mark.smoke
def test_primary_endpoint_accessible():
    # Get token
    token = get_auth_token()
    response = httpx.get(
        f"{BASE_URL}/api/products",
        headers={"Authorization": f"Bearer {token}"}
    )
    assert response.status_code == 200
    assert isinstance(response.json(), list)
```
Smoke Testing in CI/CD
Smoke tests should run at multiple points in your deployment pipeline:
```
PR opened    → Unit + integration tests
PR merged    → Build + smoke tests on staging
Smoke passes → Full regression + E2E tests
All pass     → Deploy to production
Deployed     → Smoke tests on production
```
GitHub Actions Integration
```yaml
# .github/workflows/deploy.yml
name: Deploy Pipeline

on:
  push:
    branches: [main]

jobs:
  smoke-tests:
    runs-on: ubuntu-latest
    needs: [build-and-deploy-staging]
    steps:
      - uses: actions/checkout@v4
      - name: Install k6
        run: |
          sudo gpg -k
          sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg \
            --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
          echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" \
            | sudo tee /etc/apt/sources.list.d/k6.list
          sudo apt-get update && sudo apt-get install k6
      - name: Run smoke tests
        run: k6 run tests/smoke/smoke.js
        env:
          BASE_URL: ${{ secrets.STAGING_URL }}
          K6_SMOKE_EMAIL: ${{ secrets.SMOKE_TEST_EMAIL }}
          K6_SMOKE_PASSWORD: ${{ secrets.SMOKE_TEST_PASSWORD }}

  regression-tests:
    needs: [smoke-tests] # Only runs if smoke passes
    ...

  deploy-production:
    needs: [regression-tests]
    ...

  production-smoke:
    needs: [deploy-production]
    steps:
      - name: Production smoke test
        run: k6 run tests/smoke/smoke.js
        env:
          BASE_URL: https://myapp.com
```
Post-Deployment Smoke Testing
Always run smoke tests after deploying to production. Not the full suite — just the critical checks that verify the deployment succeeded:
```yaml
post-deploy-smoke:
  name: Production Smoke Test
  runs-on: ubuntu-latest
  steps:
    - name: Verify production health
      run: |
        curl -f https://myapp.com/health || exit 1
        echo "Health check passed"
    - name: Run minimal smoke suite
      run: k6 run --vus 1 --duration 30s tests/smoke/production-smoke.js
      env:
        BASE_URL: https://myapp.com
```
Manual vs Automated Smoke Testing
When to Automate Smoke Tests (Almost Always)
Automated smoke tests run in seconds or minutes, consistently, on every deployment. They're deterministic and don't get tired or skip steps. For any team deploying more than once a week, automated smoke tests are essential.
Benefits of automation:
- Runs on every deployment without human intervention
- Consistent — no steps skipped under deadline pressure
- Fast — a well-written automated smoke test runs in 2-5 minutes
- Blocks deployments when critical functionality breaks
When Manual Smoke Testing Makes Sense
Manual smoke testing is appropriate for:
- First deployment of a new product (no automation infrastructure yet)
- Features that are genuinely hard to automate (specific hardware interactions, visual inspection)
- Exploratory pre-release checks to verify UX beyond just functionality
Even when manual smoke testing is necessary, it should be guided by a written checklist — not improvised. Unstructured manual smoke testing is inconsistent and incomplete.
FAQ
What is smoke testing?
Smoke testing is a type of software testing that performs a quick, preliminary check of the most critical functions of a build to determine if it's stable enough for further testing. It verifies that the basic, essential features work — login, page loads, API responses — without testing edge cases or complex flows. A failed smoke test means the build should be investigated before proceeding.
What is the purpose of smoke testing?
The purpose of smoke testing is to quickly identify fundamental failures in a build before investing time in comprehensive testing. If smoke tests fail, there's no value in running the full regression suite. Smoke tests also provide a deployment gate — verifying that a deployment to staging or production didn't break critical functionality.
What is the difference between smoke testing and sanity testing?
Smoke testing is broad and shallow — it covers many features at a high level to verify the overall build is stable. Sanity testing is narrow and deep — it focuses on a specific feature or bug fix to verify it works correctly. Smoke testing gates the entire test process; sanity testing validates a specific change.
What should I include in a smoke test?
Include tests for the most critical user-facing functionality: health checks, authentication (login/logout), main navigation, primary data endpoints, and the core workflow unique to your application. Smoke tests should cover 10-20 critical paths and complete in under 5 minutes. Don't test edge cases or optional features — those belong in regression tests.
How often should smoke tests run?
Smoke tests should run after every deployment — automatically. For web applications, this means after every deployment to staging and after every deployment to production. Many teams also run smoke tests on pull requests to catch build-breaking changes before they merge to main.
Should smoke tests run in production?
Yes — running smoke tests in production after deployment is one of the most valuable testing practices. Production smoke tests verify the deployment succeeded and critical functionality is working with real infrastructure (production database, production CDN, production third-party integrations). Use dedicated test accounts, not real user data, and clean up any test artifacts after the smoke tests complete.
Conclusion
Smoke testing is the most efficient quality gate you can add to your deployment pipeline. A 5-minute automated smoke test that runs on every deployment prevents catastrophic failures from reaching users — catching the build-breaking bugs that render everything else irrelevant.
Keep your smoke suite small and stable. Include only the critical paths. Automate everything. Run it on every deployment, including production. If smoke fails, fix it before anything else.
HelpMeTest makes continuous smoke testing straightforward — schedule automated tests to run after every deployment and get immediate alerts when critical functionality breaks, with health monitoring between deployments to catch infrastructure issues before users do.
Next steps:
- Regression Testing Guide — build comprehensive coverage beyond smoke
- E2E Testing Guide — automate full user flows
- CI/CD Pipeline Testing Guide — structure your complete testing pipeline