Allure Report: Beautiful Test Reports for Any Framework
Allure Report transforms test results from dry pass/fail summaries into rich, interactive HTML reports with steps, attachments, trends, and failure categorization. It integrates with virtually every testing framework and makes test results understandable for developers, QA leads, and non-technical stakeholders alike.
What Makes Allure Reports Useful
Plain test runner output (17 passed, 3 failed) tells you almost nothing about what broke or why. Allure provides:
- Step-by-step breakdown of what each test did
- Screenshots, logs, and videos attached to failed tests
- Severity and priority labels for triage
- Trend charts showing pass rate over time
- Categories that group failures by type (product bugs vs. test issues)
- Test history — how many times a test has passed or failed
- Environment information — which browser, OS, and app version was tested
The reports are static HTML — no server required, easy to publish to any static host or CI artifact.
Architecture
Allure works in two phases:
- Collection: Your tests write raw JSON results to an allure-results/ directory during the test run. Each framework has an adapter (listener/plugin) that hooks into the runner and writes these files.
- Report generation: The allure CLI reads allure-results/ and generates a complete HTML report in allure-report/.
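To make the collection phase concrete, here is a sketch of the kind of result file an adapter writes. The field names (uuid, name, status, start, stop, labels) follow Allure's JSON result schema, but treat the exact set as an assumption — real adapters emit considerably more detail per test.

```python
import json
import time
import uuid
from pathlib import Path

# Hypothetical illustration of what a framework adapter writes to
# allure-results/ for a single passing test. Field names are based on
# Allure's JSON result schema; real adapters add more fields.
results_dir = Path("allure-results")
results_dir.mkdir(exist_ok=True)

test_uuid = str(uuid.uuid4())
now_ms = int(time.time() * 1000)
result = {
    "uuid": test_uuid,
    "name": "user can complete checkout",
    "status": "passed",  # passed | failed | broken | skipped
    "start": now_ms,
    "stop": now_ms + 1200,
    "labels": [{"name": "severity", "value": "critical"}],
}

# The CLI picks up any file matching *-result.json in the results directory
out_file = results_dir / f"{test_uuid}-result.json"
out_file.write_text(json.dumps(result, indent=2))
```

Running `allure generate` over a directory of such files is all the report generator needs — there is no database or server involved.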
Tests run → allure-results/*.json → allure generate → allure-report/index.html

Installation
Allure CLI
The report generator is a Java-based CLI tool:
# macOS with Homebrew
brew install allure
# Linux/Windows via npm
npm install --global allure-commandline
# Verify
allure --version

You need Java 8+ installed.
Framework Adapters
Install the adapter for your framework:
# Jest
npm install --save-dev jest-allure2-reporter
# Mocha
npm install --save-dev allure-mocha
# Playwright
npm install --save-dev allure-playwright
# pytest (Python)
pip install allure-pytest
# JUnit 5 (Java/Maven)
# Add to pom.xml:
# <dependency>
#   <groupId>io.qameta.allure</groupId>
#   <artifactId>allure-junit5</artifactId>
#   <version>2.25.0</version>
# </dependency>

Setup by Framework
Jest
// jest.config.js
module.exports = {
  reporters: [
    'default',
    ['jest-allure2-reporter', {
      resultsDir: 'allure-results',
      testCase: {
        labels: {
          suite: ({ displayName }) => displayName,
        },
        attachments: {
          includeSourceCode: false,
        },
      },
    }]
  ],
};

Run tests:
npx jest
allure generate allure-results --clean -o allure-report
allure open allure-report

Mocha
// .mocharc.js
module.exports = {
  reporter: 'allure-mocha',
  'reporter-option': ['resultsDir=allure-results'],
};

Playwright
// playwright.config.js
export default {
  reporter: [
    ['allure-playwright', {
      detail: true,
      outputFolder: 'allure-results',
      suiteTitle: false,
    }]
  ],
};

Playwright's allure adapter automatically attaches screenshots and traces on failure.
pytest
# pytest.ini
[pytest]
addopts = --alluredir=allure-results --clean-alluredir

Run and generate:
pytest tests/
allure serve allure-results  # Opens report in browser immediately
# OR
allure generate allure-results --clean -o allure-report

JUnit 5 with Maven
<!-- pom.xml -->
<dependencies>
  <dependency>
    <groupId>io.qameta.allure</groupId>
    <artifactId>allure-junit5</artifactId>
    <version>2.25.0</version>
    <scope>test</scope>
  </dependency>
</dependencies>
<build>
  <plugins>
    <plugin>
      <groupId>io.qameta.allure</groupId>
      <artifactId>allure-maven</artifactId>
      <version>2.12.0</version>
    </plugin>
  </plugins>
</build>

mvn test
mvn allure:report
# Report at target/site/allure-maven-plugin/index.html

Enriching Reports with Metadata
The basic adapter gives you pass/fail with timing. Add metadata to make reports genuinely useful:
JavaScript (jest-allure2-reporter / allure-js-commons)
import { allure } from 'allure-js-commons';

test('user can complete checkout', async () => {
  allure.severity('critical');
  allure.label('feature', 'Checkout');
  allure.label('story', 'Payment Processing');
  allure.description('Verifies the full checkout flow from cart to confirmation.');

  await allure.step('Add item to cart', async () => {
    await page.click('[data-testid="add-to-cart"]');
    await expect(page.locator('.cart-count')).toHaveText('1');
  });

  await allure.step('Proceed to checkout', async () => {
    await page.click('[data-testid="checkout-btn"]');
    await page.waitForURL('**/checkout');
  });

  await allure.step('Complete payment', async () => {
    await page.fill('[name="card"]', '4242424242424242');
    await page.fill('[name="expiry"]', '12/28');
    await page.fill('[name="cvc"]', '123');
    await page.click('[type="submit"]');
    await page.waitForURL('**/confirmation');
  });

  // Attach screenshot
  const screenshot = await page.screenshot();
  allure.attachment('Confirmation page', screenshot, 'image/png');
});

Python (allure-pytest)
import allure

@allure.severity(allure.severity_level.CRITICAL)
@allure.feature('Authentication')
@allure.story('Login with valid credentials')
def test_login():
    with allure.step('Open login page'):
        driver.get('https://example.com/login')

    with allure.step('Enter credentials'):
        driver.find_element(By.ID, 'username').send_keys('user@example.com')
        driver.find_element(By.ID, 'password').send_keys('secret')
        driver.find_element(By.ID, 'submit').click()

    with allure.step('Verify redirect to dashboard'):
        assert '/dashboard' in driver.current_url

    # Attach screenshot
    allure.attach(
        driver.get_screenshot_as_png(),
        name='Dashboard',
        attachment_type=allure.attachment_type.PNG
    )

Failure Categories
Categories let you distinguish between real bugs and test infrastructure issues:
Create categories.json in your allure-results/ directory:
[
  {
    "name": "Product bugs",
    "matchedStatuses": ["failed"],
    "messageRegex": ".*AssertionError.*"
  },
  {
    "name": "Test defects",
    "matchedStatuses": ["broken"],
    "messageRegex": ".*NoSuchElementException.*|.*TimeoutError.*"
  },
  {
    "name": "Ignored tests",
    "matchedStatuses": ["skipped"],
    "traceRegex": ".*"
  }
]

This separates test failures (the product is broken) from broken tests (selector changed, timing issue) in the Allure dashboard.
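Because a typo in a regex silently breaks categorization, it can pay to generate categories.json from code and validate the patterns up front. A minimal sketch (the category definitions mirror the ones above; the compile-check is our own addition, not an Allure feature):

```python
import json
import re
from pathlib import Path

# Same categories as the hand-written example, expressed as data
categories = [
    {"name": "Product bugs", "matchedStatuses": ["failed"],
     "messageRegex": ".*AssertionError.*"},
    {"name": "Test defects", "matchedStatuses": ["broken"],
     "messageRegex": ".*NoSuchElementException.*|.*TimeoutError.*"},
    {"name": "Ignored tests", "matchedStatuses": ["skipped"],
     "traceRegex": ".*"},
]

# Fail fast on an invalid regex before Allure ever sees the file
for cat in categories:
    for key in ("messageRegex", "traceRegex"):
        if key in cat:
            re.compile(cat[key])

results_dir = Path("allure-results")
results_dir.mkdir(exist_ok=True)
(results_dir / "categories.json").write_text(json.dumps(categories, indent=2))
```

Run this as part of test setup (or a pre-generate CI step) so the file lands in allure-results/ before `allure generate`.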
CI/CD Integration
GitHub Actions
name: Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - name: Run tests
        run: npx playwright test
      - name: Generate Allure report
        if: always()
        uses: simple-elf/allure-report-action@master
        with:
          allure_results: allure-results
          gh_pages: gh-pages
          allure_report: allure-report
          allure_history: allure-history
          keep_reports: 20
      - name: Deploy to GitHub Pages
        if: always()
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_branch: gh-pages
          publish_dir: allure-history

This publishes reports to GitHub Pages with trend history maintained across runs.
Jenkins
pipeline {
  agent any
  stages {
    stage('Test') {
      steps {
        sh 'npm ci && npx playwright test'
      }
    }
  }
  post {
    always {
      allure([
        includeProperties: false,
        jdk: '',
        results: [[path: 'allure-results']]
      ])
    }
  }
}

Jenkins has a native Allure plugin that displays trend charts directly in the build dashboard.
Allure TestOps: The SaaS Version
Allure TestOps (paid, cloud-hosted) extends the open-source report with:
- Test case management
- Persistent history without re-generating reports
- Team collaboration on test results
- Integration with Jira for bug tracking
- Test plan creation and execution tracking
For teams using Allure heavily and wanting persistence beyond CI artifacts, TestOps is worth evaluating. The open-source version is more than sufficient for most teams.
Serving Reports
# Generate static report
allure generate allure-results --clean -o allure-report
# Serve locally (opens browser)
allure open allure-report
# OR
allure serve allure-results  # Generates and opens in one step

# Serve with Python (for sharing on network)
cd allure-report && python3 -m http.server 8080

Common Pitfalls
Empty report. allure-results/ must contain result files from your test run. If the adapter isn't installed or configured, the directory stays empty or missing. Verify your reporter config produces files there.
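A quick preflight check in CI catches the empty-results case before you waste a report-generation step. A sketch (the helper name and default path are our own; adjust to your layout):

```python
from pathlib import Path

def check_results(results_dir: str = "allure-results") -> int:
    """Count Allure result files in results_dir; 0 means the adapter
    never ran or wrote nowhere. Allure result files match *-result.json."""
    path = Path(results_dir)
    if not path.is_dir():
        return 0
    return len(list(path.glob("*-result.json")))
```

Call it after the test run and fail the job when it returns 0, so a misconfigured reporter shows up as a clear error rather than an empty report.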
Java version mismatch. Allure CLI requires Java 8+. On macOS, check with java -version. The Homebrew install pulls the right version automatically.
Report not updating in CI. CI typically doesn't persist allure-results/ between runs unless you configure artifact storage. Each run starts fresh — trend history requires explicitly storing and restoring the history directory.
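The history mechanism is simple enough to script yourself: before regenerating, copy the history/ folder from the previous report into the new results directory, and `allure generate` will extend the trend charts from it. A sketch of that restore step, assuming the conventional default paths:

```python
import shutil
from pathlib import Path

def restore_history(report_dir: str = "allure-report",
                    results_dir: str = "allure-results") -> bool:
    """Copy history/ from the previous report into allure-results so
    the next `allure generate` carries trend data forward."""
    src = Path(report_dir) / "history"
    dst = Path(results_dir) / "history"
    if not src.is_dir():
        return False  # first run: no history to restore yet
    if dst.exists():
        shutil.rmtree(dst)
    shutil.copytree(src, dst)
    return True
```

In CI this means persisting allure-report/history as an artifact (or on a branch, as the GitHub Pages action above does) and restoring it before each generate step.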
Steps without assertions. Steps in Allure describe what happened, but the actual test assertion must still exist. A test with beautiful steps but no expect() or assert is theater, not testing.
Beyond Reports: Production Test Monitoring
Allure shows you what happened in your test suite. But what about production behavior between releases?
HelpMeTest continuously runs tests against your live application and provides its own session replay and failure logs for every broken test run. Where Allure captures your CI test history, HelpMeTest captures what's actually failing in production right now — giving your team early warning before users file support tickets.
Summary
Allure Report setup:
- Install the Allure CLI (brew install allure or npm i -g allure-commandline)
- Install the adapter for your framework
- Run tests → allure-results/ populated
- allure generate allure-results --clean → HTML report
- Add steps, labels, and attachments to enrich failure detail
- In CI, publish the report as an artifact or to GitHub Pages
The investment in setting up Allure pays off quickly when debugging failed CI runs — instead of reading stack traces, you see a visual step-by-step breakdown of exactly what the test did before it failed.