ReportPortal: AI-Powered Test Analytics for Large QA Teams
As test automation grows from dozens to thousands of test cases, understanding why tests are failing becomes harder than running them. ReportPortal addresses this directly: it's an open-source test reporting and analytics platform built for teams that run large-scale automation.
Unlike simple JUnit report viewers, ReportPortal applies machine learning to failure logs, groups similar failures, and helps teams distinguish real bugs from environment noise — at scale.
What Is ReportPortal?
ReportPortal is an open-source test observability platform, originally developed by EPAM Systems and now maintained by the ReportPortal Foundation. It aggregates test results from any automation framework, stores historical data, and applies AI-based log analysis to categorize failures.
The core insight behind ReportPortal: when you're running 5,000 tests across multiple teams, projects, and environments, you need a dedicated system for understanding the data — not just displaying it.
Key capabilities:
- Centralized test result aggregation across projects and pipelines
- AI Defect Triage — automatically groups failures by similar log patterns
- Launch-based history — compare test results across builds over time
- Flakiness detection — identify tests that fail intermittently
- Integration with all major test frameworks via agents/adapters
- REST API for custom reporting and dashboards (see the sketch after this list)
- Self-hosted or SaaS deployment options
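As a taste of that REST API, here is a minimal sketch that lists recent launches for a project. The host, project name, and token are placeholders; the endpoint and bearer-token auth follow ReportPortal's documented API, but verify the exact field names against your instance.

```python
import requests

# Placeholders -- substitute your instance, project, and API token.
RP_ENDPOINT = "http://your-reportportal-host:8080"
RP_PROJECT = "your_project_name"
HEADERS = {"Authorization": "Bearer your-token-here"}

# List the ten most recent launches for the project.
resp = requests.get(
    f"{RP_ENDPOINT}/api/v1/{RP_PROJECT}/launch",
    params={"page.size": 10},
    headers=HEADERS,
)
resp.raise_for_status()
for launch in resp.json()["content"]:
    print(launch["number"], launch["name"], launch["status"])
```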
Architecture Overview
ReportPortal consists of several services:
- UI service — React frontend
- API service — REST endpoints for receiving and querying results
- Analyzer service — Python ML service for log analysis and defect grouping
- Elasticsearch — stores and indexes failure logs for ML analysis
- PostgreSQL — stores test metadata, launches, and defects
- RabbitMQ — message queue for async event processing
- MinIO — object storage for screenshots and attachments
For production deployments, ReportPortal provides Kubernetes Helm charts and Docker Compose configurations.
Installation
Docker Compose (Quickstart)
```bash
curl -LO https://raw.githubusercontent.com/reportportal/reportportal/master/docker-compose.yml
docker-compose -p reportportal up -d
```
Access the UI at http://localhost:8080. Default credentials: superadmin / erebus.
Kubernetes (Production)
```bash
helm repo add reportportal https://reportportal.github.io/kubernetes
helm repo update
helm install my-release reportportal/reportportal \
  --set postgresql.auth.password=yourpassword \
  --set global.imageRegistry=docker.io
```
For production, configure persistent volumes for PostgreSQL, Elasticsearch, and MinIO.
Connecting Your Test Framework
ReportPortal provides agents (client libraries) for all major frameworks:
| Framework | Language | Agent |
|---|---|---|
| JUnit 5 | Java | agent-java-junit5 |
| TestNG | Java | agent-java-testng |
| pytest | Python | pytest-reportportal |
| Robot Framework | Python | robotframework-reportportal |
| Cucumber | Java/JS | agent-java-cucumber |
| Jest | JavaScript | agent-js-jest |
| Playwright | JavaScript | agent-js-playwright |
| Cypress | JavaScript | agent-js-cypress |
Python / pytest Setup
```bash
pip install pytest-reportportal
```
In pytest.ini or setup.cfg:
```ini
[pytest]
rp_uuid = your-token-here
rp_endpoint = http://your-reportportal-host:8080
rp_project = your_project_name
rp_launch = My Launch Name
rp_launch_attributes = build:1.0 env:staging
```
Run tests normally:
```bash
pytest --reportportal tests/
```
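One habit that pays off later: write tests that log their context. The pytest agent integrates with Python's logging module, so log lines travel with a failure into ReportPortal and give the Analyzer stable patterns to group on. A minimal sketch (the test itself is hypothetical):

```python
import logging

logger = logging.getLogger(__name__)

def test_checkout_applies_discount():
    # Log the inputs: if the assertion fails, this line arrives in
    # ReportPortal alongside the failure and helps triage.
    total, discount = 100.0, 0.10
    logger.info("Applying discount %.0f%% to cart total %.2f", discount * 100, total)
    assert total * (1 - discount) == 90.0
```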
Robot Framework Setup
```bash
pip install robotframework-reportportal
```
Add to your Robot Framework run command:
```bash
robot \
  --listener robotframework_reportportal.listener \
  --variable RP_UUID:your-token \
  --variable RP_ENDPOINT:http://your-host:8080 \
  --variable RP_PROJECT:your_project \
  --variable RP_LAUNCH:suite_name \
  tests/
```
Sending Results from CI
Most teams send results during CI runs. Example with GitHub Actions:
```yaml
- name: Run tests and report to ReportPortal
  env:
    RP_UUID: ${{ secrets.REPORTPORTAL_TOKEN }}
    RP_ENDPOINT: ${{ secrets.REPORTPORTAL_URL }}
    RP_PROJECT: my-project
  run: pytest --reportportal tests/
```
Understanding Launches
In ReportPortal, a Launch is the top-level entity — it represents one test run (one CI pipeline execution, one scheduled test run, etc.). Each launch contains:
- Test suites → test cases → logs and attachments
- Overall statistics: passed, failed, skipped
- Defect type breakdown: product bug, automation issue, system issue, no defect
Launches are comparable over time: you can chart the failure rate across the last N launches, which is useful for tracking whether a flaky period is improving or getting worse.
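To pull that history programmatically, you can fetch recent launches from the API and compute a failure rate per launch. A sketch, assuming each launch payload carries the statistics.executions counters (verify against your instance's API docs):

```python
import requests

RP_ENDPOINT = "http://your-reportportal-host:8080"
RP_PROJECT = "your_project_name"
HEADERS = {"Authorization": "Bearer your-token-here"}

resp = requests.get(
    f"{RP_ENDPOINT}/api/v1/{RP_PROJECT}/launch",
    params={"page.size": 20},
    headers=HEADERS,
)
resp.raise_for_status()

# Failure rate per launch, as returned by the API.
for launch in resp.json()["content"]:
    executions = launch.get("statistics", {}).get("executions", {})
    total = int(executions.get("total", 0))
    failed = int(executions.get("failed", 0))
    rate = failed / total if total else 0.0
    print(f"launch #{launch['number']}: {failed}/{total} failed ({rate:.0%})")
```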
AI Defect Triage
This is ReportPortal's differentiating feature. When a test fails, ReportPortal's Analyzer service:
- Extracts the failure log and stack trace
- Compares it against previously categorized failures using ML
- Suggests a defect category if the failure pattern matches past issues
- Groups failures with the same root cause across the entire launch
Defect categories:
- Product Bug (PB) — real application defect
- Automation Bug (AB) — test script is broken or has a bad selector
- System Issue (SI) — environment, network, database, or infrastructure problem
- No Defect (ND) — expected failure or known skip
- To Investigate (TI) — needs triage (default for new failures)
Once you categorize failures in a few launches, the Analyzer starts suggesting categories for new failures automatically. Over time, triage time drops significantly — from hours per launch to minutes.
Manual Triage Workflow
- Open a launch
- Filter by "To Investigate"
- Click a failed test → view logs
- Select defect type and link to Jira issue (optional)
- Click "Mark as Defect"
- The Analyzer learns from this assignment
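The same assignment can be scripted for bulk triage. A sketch using the item-update endpoint from the ReportPortal REST API; the item ID and Jira key are hypothetical, and pb001 is the locator for the built-in Product Bug type (check Project Settings → Defect Types for your instance's locators):

```python
import requests

RP_ENDPOINT = "http://your-reportportal-host:8080"
RP_PROJECT = "your_project_name"
HEADERS = {"Authorization": "Bearer your-token-here"}

# Mark one failed test item as a Product Bug, with a triage comment.
payload = {
    "issues": [
        {
            "testItemId": 12345,  # hypothetical item ID from the launch view
            "issue": {
                "issueType": "pb001",
                "comment": "Checkout returns 500 on empty cart, see PROJ-123",
            },
        }
    ]
}
resp = requests.put(f"{RP_ENDPOINT}/api/v1/{RP_PROJECT}/item",
                    json=payload, headers=HEADERS)
resp.raise_for_status()
```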
Pattern Analysis
ReportPortal's Pattern Analysis feature lets you define regex or string patterns that match specific failure messages. When a test log matches a pattern, ReportPortal automatically assigns that failure to a predefined defect type.
Example pattern:
- Name: ConnectionTimeout
- Pattern: .*Connection timed out.*
- Defect type: System Issue
This rule will auto-categorize any failure containing "Connection timed out" — no manual triage needed.
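Because a bad regex silently mis-categorizes failures, it is worth sanity-checking patterns against real log lines before adding them in the UI. A quick local sketch (the sample logs are made up):

```python
import re

# Candidate pattern rules: name -> (regex, target defect type).
patterns = {
    "ConnectionTimeout": (r".*Connection timed out.*", "System Issue"),
}

sample_logs = [
    "java.net.SocketTimeoutException: Connection timed out after 30000 ms",
    "AssertionError: expected status 200 but got 500",
]

for line in sample_logs:
    for name, (regex, defect_type) in patterns.items():
        if re.search(regex, line):
            print(f"{name} matched -> would auto-assign: {defect_type}")
```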
Flakiness Detection
ReportPortal tracks test results across launches and surfaces tests that flip between pass and fail frequently. The flakiness score is calculated from the last N launches for each test.
In the Test Items view, filter by Flaky tests to see which tests are unstable. Sort by failure rate over the last 10 or 30 launches to find the worst offenders.
This is especially valuable for distinguishing between:
- A test that has been consistently failing (new regression)
- A test that fails ~30% of the time (flaky test — timing or environment-sensitive)
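To make the distinction concrete, here is an illustrative flip-count score over a test's recent pass/fail history. This is not ReportPortal's internal formula, just the idea: a consistently failing test scores 0, an alternating one scores high.

```python
def flakiness_score(history: list[bool]) -> float:
    """Fraction of consecutive runs where the outcome flipped.

    `history` is pass/fail per launch, oldest first.
    """
    if len(history) < 2:
        return 0.0
    flips = sum(a != b for a, b in zip(history, history[1:]))
    return flips / (len(history) - 1)

print(flakiness_score([False] * 10))                     # 0.0 -- consistent regression
print(flakiness_score([True, False] * 5))                # 1.0 -- maximally flaky
print(flakiness_score([True, True, False, True, True]))  # 0.5 -- occasional flake
```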
Dashboards and Widgets
ReportPortal's dashboard system lets you build custom views from pre-built widgets:
- Launch statistics trend — pass/fail rate across launches over time
- Overall statistics — donut chart of test status breakdown
- Test cases growth — total test count trend
- Most failed test cases — sorted by failure frequency
- Flaky test cases — ranked by instability score
- Investigated percentage — ratio of triaged vs. un-triaged failures
- Unique bugs — defects linked per launch
Dashboards are shareable by URL and can be embedded in your internal tooling.
Integrations
Jira Integration
ReportPortal can create Jira issues directly from a failed test:
- Configure Jira credentials in ReportPortal → Project Settings → Integrations
- Open a failed test item
- Click "Link Defect" → "Post Bug to BTS"
- ReportPortal pre-fills the Jira issue with test name, logs, and a link back to the report
Slack Notifications
ReportPortal supports Slack notifications via configured rules:
- Alert on launch finish (always or on failure only)
- Alert when failure rate exceeds a threshold
- Include links to the specific launch
Configure at: Project Settings → Notifications → Add Rule → Slack
ReportPortal vs. Simple Report Viewers
| Feature | ReportPortal | Allure Report | Simple JUnit HTML |
|---|---|---|---|
| Historical trends | ✓ | Limited | ✗ |
| AI defect grouping | ✓ | ✗ | ✗ |
| Multi-project aggregation | ✓ | ✗ | ✗ |
| Flakiness scoring | ✓ | ✗ | ✗ |
| Jira integration | ✓ | Limited | ✗ |
| Self-hosted | ✓ | ✓ | ✓ |
| Setup complexity | High | Low | None |
ReportPortal is overkill for small test suites. It pays for itself operationally when you have 500+ automated tests running daily across multiple teams.
When to Use ReportPortal
Good fit:
- 500+ automated tests across multiple projects or teams
- Multiple CI pipelines contributing to the same test results repository
- QA team spends significant time triaging failures each sprint
- Need to demonstrate test coverage to stakeholders with dashboards
Not a good fit:
- Small teams with fewer than 100 tests
- Teams that don't have ops capacity to maintain the infrastructure
- Projects where simple HTML reports or Allure Report cover the need
ReportPortal and HelpMeTest
HelpMeTest solves a different layer of the problem: it runs tests automatically on a schedule, handles test infrastructure, and monitors your application 24/7 — without requiring you to maintain a separate reporting stack.
For teams scaling past HelpMeTest's built-in reporting, ReportPortal can receive test results via its JUnit agent alongside HelpMeTest's own result storage. The two aren't mutually exclusive — HelpMeTest runs the tests, ReportPortal aggregates the historical analytics.
Summary
ReportPortal is a serious investment: it requires infrastructure, setup time, and ongoing maintenance. But for QA organizations running large-scale automation, the AI-powered defect triage and cross-project analytics are capabilities that simple report viewers can't match.
Start with the Docker Compose quickstart, connect one project, and run 10 launches. If the triage automation and historical trends immediately surface insights you were missing, it's worth the operational overhead. If your test suite is still growing, revisit when you hit the 500-test threshold.