Test Reporting in CI: Aggregating JUnit XML Across Jest, pytest, Go test, and Rust
JUnit XML is the lingua franca of test reporting. Originally the output format of the Java JUnit framework, it has since been adopted by test frameworks in virtually every language. GitHub Actions, GitLab CI, Jenkins, CircleCI, and most other CI platforms can parse and display JUnit XML, either natively or through well-supported plugins. Understanding how to generate and aggregate it across a polyglot codebase is a fundamental CI skill.
The JUnit XML Format
JUnit XML is simple:
<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
  <testsuite name="LoginTests" tests="3" failures="1" errors="0" skipped="1" time="1.234">
    <testcase name="successful login" classname="auth.LoginTests" time="0.456">
      <!-- No child element = passed -->
    </testcase>
    <testcase name="login fails with wrong password" classname="auth.LoginTests" time="0.234">
      <!-- Failure with message and stack trace -->
      <failure message="AssertionError: Expected 'dashboard' but got 'login'">
AssertionError: Expected 'dashboard' but got 'login'
    at login_fails_with_wrong_password (tests/login.test.js:23:5)
      </failure>
    </testcase>
    <testcase name="login page loads" classname="auth.LoginTests" time="0.544">
      <skipped message="Skipped in staging environment"/>
    </testcase>
  </testsuite>
</testsuites>

Key attributes:
tests: total test count
failures: assertion failures
errors: test errors (exceptions, crashes)
skipped: skipped tests
time: duration in seconds
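A quick way to internalize these semantics is to cross-check the suite-level counts against the testcase children; a minimal sketch using Python's standard-library ElementTree (the sample suite here is hypothetical):

```python
import xml.etree.ElementTree as ET

SUITE = """\
<testsuite name="LoginTests" tests="3" failures="1" errors="0" skipped="1" time="1.234">
  <testcase name="ok" classname="auth.LoginTests" time="0.4"/>
  <testcase name="bad password" classname="auth.LoginTests" time="0.3">
    <failure message="AssertionError"/>
  </testcase>
  <testcase name="staging only" classname="auth.LoginTests" time="0.5">
    <skipped message="Skipped in staging"/>
  </testcase>
</testsuite>"""

suite = ET.fromstring(SUITE)
cases = suite.findall("testcase")

# The suite-level counting attributes should agree with the testcase children
assert int(suite.get("tests")) == len(cases)
assert int(suite.get("failures")) == sum(1 for c in cases if c.find("failure") is not None)
assert int(suite.get("errors")) == sum(1 for c in cases if c.find("error") is not None)
assert int(suite.get("skipped")) == sum(1 for c in cases if c.find("skipped") is not None)
print("suite attributes consistent")
```

Well-behaved reporters keep these in sync, but hand-merged files can drift, so a check like this is useful before publishing merged reports.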
Generating JUnit XML by Framework
Jest (JavaScript)
npm install --save-dev jest-junit

// jest.config.js
module.exports = {
  reporters: [
    'default',
    ['jest-junit', {
      outputDirectory: 'reports',
      outputName: 'jest-junit.xml',
      classNameTemplate: '{classname}',
      titleTemplate: '{title}',
      ancestorSeparator: ' > ',
      usePathForSuiteName: true,
    }]
  ]
};

# Or via CLI
JEST_JUNIT_OUTPUT_DIR=reports npx jest --reporters=jest-junit
# Verify output
cat reports/jest-junit.xml

pytest (Python)
pytest has built-in JUnit XML support:
# Generate JUnit XML
pytest tests/ --junit-xml=reports/pytest-junit.xml
# With verbose output
pytest tests/ --junit-xml=reports/pytest-junit.xml -v

# With a prefix on classnames (extra metadata)
pytest tests/ \
  --junit-xml=reports/pytest-junit.xml \
  --junit-prefix=myapp

# pytest.ini
[pytest]
junit_logging = all
junit_log_passing_tests = true
junit_duration_report = total

Custom properties per test:
# No conftest.py hook is required: record_property is a built-in pytest
# fixture, and each recorded value is written to the XML as a <property> element.
# test_example.py
def test_api_response(record_property):
    response = make_api_call()
    record_property("status_code", response.status_code)
    # total_seconds() covers the full duration; .microseconds alone is only a component
    record_property("response_time_ms", int(response.elapsed.total_seconds() * 1000))
    assert response.status_code == 200

Go
Go's testing package has no built-in JUnit XML output; go-junit-report converts the verbose output of go test:
# Install
go install github.com/jstemmer/go-junit-report/v2@latest
# Run tests and convert output
go test ./... -v -count=1 2>&1 | go-junit-report -set-exit-code > reports/go-junit.xml

# With a test timeout
go test ./... -v -count=1 -timeout 300s 2>&1 | go-junit-report > reports/go-junit.xml

For gotestsum (recommended for clean output):
go install gotest.tools/gotestsum@latest
# Generate JUnit XML directly
gotestsum --junitfile reports/go-junit.xml -- ./...
# With short test-case classnames
gotestsum --junitfile reports/go-junit.xml --junitfile-testcase-classname=short -- ./...

Rust
# Install cargo-junit
cargo install cargo-junit
# Or use cargo-nextest (recommended)
cargo install cargo-nextest

# cargo-nextest generates JUnit XML natively
cargo nextest run --profile ci

# Default output location
ls target/nextest/ci/junit.xml

Custom nextest config:
# .config/nextest.toml
[profile.ci]
failure-output = "immediate-final"
success-output = "never"
status-level = "fail"
[profile.ci.junit]
path = "reports/nextest-junit.xml"

Ruby (RSpec)
# Gemfile
gem 'rspec_junit_formatter'

# .rspec
--format RspecJunitFormatter
--out reports/rspec-junit.xml
--format progress
# Or CLI
rspec --format RspecJunitFormatter --out reports/rspec-junit.xml

.NET (xUnit, NUnit, MSTest)
# xUnit with dotnet test (requires the JUnitXml.TestLogger NuGet package)
dotnet test --logger "junit;LogFilePath=reports/xunit-junit.xml"
# NUnit
dotnet test --logger "nunit;LogFileName=reports/nunit-result.xml"

# MSTest
dotnet test --logger "trx;LogFileName=reports/mstest.trx"
# Convert TRX to JUnit XML if needed

PHP (PHPUnit)
<!-- phpunit.xml -->
<phpunit>
  <logging>
    <junit outputFile="reports/phpunit-junit.xml"/>
  </logging>
</phpunit>

phpunit --log-junit reports/phpunit-junit.xml

Aggregating Multiple Reports
When multiple test suites generate separate XML files, merge them before presenting to CI:
Simple Merge (Cat XML)
JUnit XML supports multiple <testsuite> elements under <testsuites>. You can concatenate them:
#!/usr/bin/env python3
"""Merge multiple JUnit XML files into one"""
import xml.etree.ElementTree as ET
import glob
import sys

output_file = sys.argv[1]
input_pattern = sys.argv[2]

merged = ET.Element('testsuites')
totals = {'tests': 0, 'failures': 0, 'errors': 0, 'skipped': 0, 'time': 0.0}

for xml_file in glob.glob(input_pattern):
    tree = ET.parse(xml_file)
    root = tree.getroot()
    # Handle both <testsuites> and <testsuite> roots
    if root.tag == 'testsuites':
        suites = root.findall('testsuite')
    else:
        suites = [root]
    for suite in suites:
        merged.append(suite)
        totals['tests'] += int(suite.get('tests', 0))
        totals['failures'] += int(suite.get('failures', 0))
        totals['errors'] += int(suite.get('errors', 0))
        totals['skipped'] += int(suite.get('skipped', 0))
        totals['time'] += float(suite.get('time', 0))

# Set totals on the root element
for key, value in totals.items():
    merged.set(key, str(value))

ET.ElementTree(merged).write(output_file, xml_declaration=True, encoding='utf-8')
print(f"Merged {len(list(merged))} test suites → {output_file}")
print(f"Total: {totals['tests']} tests, {totals['failures']} failures, {totals['errors']} errors")

python3 merge-junit.py reports/merged.xml "reports/*.xml"

junit-xml-merge (npm)
npm install -g junit-xml-merge
# Merge all XML files in reports/
junit-xml-merge --output reports/merged.xml reports/*.xml

GitHub Actions Integration
GitHub Actions has no native JUnit XML display, but marketplace actions parse the files and publish results inline on the workflow run and pull request:
name: Test
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # JavaScript
      - name: Run Jest
        run: npm test
        env:
          JEST_JUNIT_OUTPUT_DIR: reports

      # Python
      - name: Run pytest
        run: pytest tests/ --junit-xml=reports/pytest-junit.xml

      # Go
      - name: Run Go tests
        run: |
          go install gotest.tools/gotestsum@latest
          gotestsum --junitfile reports/go-junit.xml -- ./...

      # Publish test results
      - name: Publish Test Results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: always()
        with:
          files: reports/**/*.xml
          check_name: Test Results
          comment_mode: always

      # Also upload as artifact
      - name: Upload test reports
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-reports
          path: reports/

The publish-unit-test-result-action adds a check to the PR showing test counts, failure details, and trend data.
Alternative: dorny/test-reporter
- name: Test Reporter
  uses: dorny/test-reporter@v1
  if: always()
  with:
    name: Tests
    path: reports/**/*.xml
    reporter: java-junit
    fail-on-error: true

GitLab CI Integration
GitLab CI has native JUnit XML parsing:
test:
  script:
    - pytest tests/ --junit-xml=reports/junit.xml
    - npm test -- --reporters=jest-junit
    - gotestsum --junitfile reports/go-junit.xml -- ./...
  artifacts:
    when: always
    reports:
      junit:
        - reports/junit.xml
        - reports/jest-junit.xml
        - reports/go-junit.xml
    paths:
      - reports/
    expire_in: 30 days

GitLab displays test results in the merge request widget with failure details and history.
Jenkins Integration
// Jenkinsfile
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'pytest tests/ --junit-xml=reports/pytest-junit.xml'
                sh 'npm test'
                sh 'gotestsum --junitfile reports/go-junit.xml -- ./...'
            }
            post {
                always {
                    junit 'reports/**/*.xml'
                    archiveArtifacts artifacts: 'reports/**', allowEmptyArchive: true
                }
            }
        }
    }
}

Polyglot CI Pipeline Example
Real services often have multiple languages. This example handles a Node.js frontend, Python backend, and Go service:
name: Polyglot Test Suite
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.22'

      # Install all dependencies (JUnit XML output is built into pytest)
      - run: npm ci
      - run: pip install -r requirements.txt pytest
      - run: go install gotest.tools/gotestsum@latest

      # Run all test suites
      - name: Run frontend tests (Jest)
        run: npm test
        env:
          JEST_JUNIT_OUTPUT_DIR: test-results
          JEST_JUNIT_OUTPUT_NAME: frontend.xml

      - name: Run backend tests (pytest)
        run: pytest backend/tests/ --junit-xml=test-results/backend.xml

      - name: Run service tests (Go)
        run: gotestsum --junitfile test-results/service.xml -- ./service/...

      # Publish combined results
      - name: Publish all test results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: always()
        with:
          files: test-results/**/*.xml
          check_name: All Tests
          comment_mode: always
          compare_to_earlier_commit: true

Analyzing JUnit XML Programmatically
#!/usr/bin/env python3
"""Parse and summarize JUnit XML results"""
import xml.etree.ElementTree as ET
import sys
from pathlib import Path
from collections import defaultdict

def parse_junit(file_path):
    tree = ET.parse(file_path)
    root = tree.getroot()
    suites = root.findall('.//testsuite')
    results = {
        'total': 0, 'passed': 0, 'failed': 0,
        'errors': 0, 'skipped': 0, 'time': 0.0,
        'failures': []
    }
    for suite in suites:
        results['total'] += int(suite.get('tests', 0))
        results['failed'] += int(suite.get('failures', 0))
        results['errors'] += int(suite.get('errors', 0))
        results['skipped'] += int(suite.get('skipped', 0))
        results['time'] += float(suite.get('time', 0))
        for case in suite.findall('testcase'):
            failure = case.find('failure')
            error = case.find('error')
            if failure is not None or error is not None:
                # Don't use `failure or error` here: an Element with no
                # children is falsy, so the wrong node could be picked
                node = failure if failure is not None else error
                results['failures'].append({
                    'name': case.get('name'),
                    'classname': case.get('classname'),
                    'message': node.get('message', ''),
                    'duration': float(case.get('time', 0))
                })
    results['passed'] = results['total'] - results['failed'] - results['errors'] - results['skipped']
    return results

if __name__ == '__main__':
    all_results = defaultdict(int)
    for xml_file in Path('test-results').glob('*.xml'):
        results = parse_junit(xml_file)
        suite_name = xml_file.stem
        print(f"\n{suite_name}:")
        print(f"  Tests: {results['total']} | Pass: {results['passed']} | "
              f"Fail: {results['failed']} | Skip: {results['skipped']} | "
              f"Time: {results['time']:.1f}s")
        for key in ('total', 'passed', 'failed', 'errors', 'skipped'):
            all_results[key] += results[key]
        if results['failures']:
            print("  Failures:")
            for f in results['failures'][:3]:  # Show first 3
                print(f"    ✗ {f['classname']}.{f['name']}")
                print(f"      {f['message'][:100]}")
    print(f"\nTotal: {all_results['total']} tests | "
          f"{all_results['passed']} passed | "
          f"{all_results['failed']} failed | "
          f"{all_results['skipped']} skipped")
    sys.exit(1 if all_results['failed'] + all_results['errors'] > 0 else 0)

JUnit XML's ubiquity makes it the best choice for CI test reporting in polyglot environments. Generate it from every test framework, aggregate it in CI, and let your CI platform handle the display. The overhead is minimal and the visibility gains are significant.