ReportPortal: Self-Hosted Test Reporting for Large Teams
At a certain scale, spreadsheets and CI artifact tabs break down as test reporting solutions. When dozens of QA engineers run thousands of tests daily across multiple projects, you need a dedicated test reporting platform. ReportPortal is the open-source answer to that problem.
What Is ReportPortal?
ReportPortal is a self-hosted test reporting and analytics platform. It aggregates test results from multiple test frameworks and CI runs into a unified dashboard, with AI-powered defect categorization, trend analysis, and team collaboration features.
Key differentiators:
- Self-hosted: Your test data stays on your infrastructure
- AI analysis: Machine learning for automatic defect categorization and pattern detection
- Framework-agnostic: Agents for JUnit, pytest, TestNG, Robot Framework, Cucumber, and more
- Historical trends: Track test stability and failure patterns over time
- Team features: Defect assignment, comments, flaky test detection
Architecture
ReportPortal consists of several services:
- API Service: REST API, test result storage
- UI: React-based dashboard
- Analyzer: AI/ML service for defect analysis
- Message Broker: RabbitMQ for async processing
- Database: PostgreSQL for structured data
- Object Storage: MinIO (or S3) for binary artifacts (screenshots, logs)
Deployment with Docker Compose
The recommended way to run ReportPortal locally or on a small server:
# docker-compose.yaml
version: '3.8'
services:
gateway:
image: traefik:v2.11
ports:
- "8080:8080"
- "8081:8081"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./traefik.yaml:/etc/traefik/traefik.yaml:ro
restart: always
uat:
image: reportportal/service-authorization:5.11.0
environment:
- RP_DB_HOST=postgres
- RP_DB_USER=rpuser
- RP_DB_PASS=rppass
- RP_DB_NAME=reportportal
depends_on:
- postgres
api:
image: reportportal/service-api:5.11.0
environment:
- RP_DB_HOST=postgres
- RP_DB_USER=rpuser
- RP_DB_PASS=rppass
- RP_DB_NAME=reportportal
- DATASTORE_TYPE=filesystem
volumes:
- ./data/storage:/data/storage
depends_on:
- postgres
- rabbitmq
ui:
image: reportportal/service-ui:5.11.0
environment:
- RP_SERVER_PORT=8080
index:
image: reportportal/service-index:5.0.10
environment:
- RP_SERVER_PORT=8080
analyzer:
image: reportportal/service-auto-analyzer:5.11.0
volumes:
- ./data/elasticsearch:/usr/share/elasticsearch/data
environment:
- LOGGING_LEVEL=info
- AMQP_URL=amqp://rabbitmq
postgres:
image: postgres:16
environment:
POSTGRES_USER: rpuser
POSTGRES_PASSWORD: rppass
POSTGRES_DB: reportportal
volumes:
- ./data/postgres:/var/lib/postgresql/data
rabbitmq:
image: rabbitmq:3.13-management
environment:
RABBITMQ_DEFAULT_USER: rabbitmq
      RABBITMQ_DEFAULT_PASS: rabbitmq

# Start ReportPortal
docker compose up -d
# Access at http://localhost:8080
# Default credentials: superadmin / erebus

Kubernetes Deployment
For production, use the official Helm chart:
# Add the ReportPortal Helm repo
helm repo add reportportal https://reportportal.io/kubernetes
helm repo update
# Install with custom values
helm install reportportal reportportal/reportportal \
--namespace reportportal \
--create-namespace \
  -f values.yaml

# values.yaml
ingress:
enabled: true
hosts:
- reportportal.example.com
postgresql:
auth:
postgresPassword: "changeme"
database: "reportportal"
rabbitmq:
auth:
username: "rabbitmq"
password: "changeme"
serviceapi:
resources:
requests:
memory: "1Gi"
limits:
memory: "2Gi"Initial Setup
- Log in with superadmin / erebus
- Create your first project: Administration → Projects → Add Project
- Create a regular user: Administration → Users → Add User
- Generate an API token: User → Profile → API Keys (a quick way to sanity-check the key is sketched below)
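The API key is what every agent and script uses to authenticate against the REST API. A quick way to confirm the key and project name work is a direct API call. Below is a minimal sketch using Python's requests library; it assumes the local endpoint from the Docker Compose setup and the launch-listing endpoint of ReportPortal 5.x, so check your instance's API documentation if your paths differ.
# verify_api_key.py -- sanity-check the new API key
import requests

ENDPOINT = "http://localhost:8080"   # your ReportPortal URL
PROJECT = "my_project"               # project created in step 2
API_KEY = "your-api-token-here"      # key generated in step 4

# List the five most recent launches; an HTTP 200 means the key and project are valid.
resp = requests.get(
    f"{ENDPOINT}/api/v1/{PROJECT}/launch",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"page.size": 5, "page.sort": "startTime,desc"},
    timeout=10,
)
resp.raise_for_status()
for launch in resp.json().get("content", []):
    print(launch["id"], launch["name"], launch.get("status"))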
Integrating Test Frameworks
Python (pytest)
pip install pytest-reportportal

# pytest.ini (or pyproject.toml)
[pytest]
rp_endpoint = http://localhost:8080
rp_uuid = your-api-token-here
rp_launch = My Test Suite
rp_project = my_project
rp_launch_description = Test run from CI
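The agent also forwards log records that pytest captures, so ordinary logging calls show up as log entries nested under the corresponding test item. A minimal illustrative test (the test name and messages are made up):
# tests/test_login.py -- log lines appear under this test's item in ReportPortal
import logging

logger = logging.getLogger(__name__)

def test_login_succeeds():
    logger.info("Opening login page")            # becomes a log entry on the test item
    logger.info("Submitting valid credentials")
    assert 2 + 2 == 4                            # placeholder assertion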
# Run with ReportPortal reporting
pytest tests/ \
--reportportal \
  -o "rp_endpoint=http://localhost:8080" \
  -o "rp_uuid=my-api-token" \
  -o "rp_project=my_project" \
  -o "rp_launch=Smoke Tests"

Java (JUnit 5 / TestNG)
<!-- Maven: pom.xml -->
<dependency>
<groupId>com.epam.reportportal</groupId>
<artifactId>agent-java-junit5</artifactId>
<version>5.2.5</version>
<scope>test</scope>
</dependency>

# src/test/resources/reportportal.properties
rp.endpoint=http://localhost:8080
rp.uuid=your-api-token
rp.project=my_project
rp.launch=My Launch
rp.launch.doc=Test launch from CI

// Add to JUnit 5 tests
@ExtendWith(ReportPortalExtension.class)
public class MyTest {
@Test
void myTest() {
// Test code
}
}

JavaScript (Mocha/Jest/Playwright)
# Mocha
npm install @reportportal/agent-js-mocha
# Jest
npm install @reportportal/agent-js-jest

# Playwright
npm install @reportportal/agent-js-playwright

// jest.config.js
module.exports = {
reporters: [
'default',
['@reportportal/agent-js-jest', {
apiKey: 'your-api-token',
endpoint: 'http://localhost:8080/api/v1',
project: 'my_project',
launch: 'My Launch',
description: 'Tests from PR #123',
}]
]
};

Robot Framework
pip install robotframework-reportportal

# Run Robot Framework tests with RP reporting
robot \
--listener reportportal_listener \
--variable RP_ENDPOINT:http://localhost:8080 \
--variable RP_UUID:your-api-token \
--variable RP_PROJECT:my_project \
--variable RP_LAUNCH:Regression \
  tests/

Cucumber
<!-- pom.xml -->
<dependency>
<groupId>com.epam.reportportal</groupId>
<artifactId>agent-java-cucumber6</artifactId>
<version>5.2.1</version>
<scope>test</scope>
</dependency>

@CucumberOptions(plugin = {
"com.epam.reportportal.cucumber.ScenarioReporter"
})
public class TestRunner {}

Understanding the Dashboard
Launches
A launch is a single test execution: each time you run tests, a new launch is created (a sketch of building this hierarchy by hand follows the list below). Launches contain:
- Test suites → Test cases → Steps
- Logs and attachments
- Statistics (passed/failed/skipped)
- Duration and trends
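The agents create this hierarchy automatically, but building one by hand with the reportportal-client Python library makes the structure concrete. This is a rough sketch against reportportal-client 5.x; the authentication argument is api_key in recent releases (older versions call it token), so adapt it to the version you have installed.
# manual_launch.py -- create a launch -> suite -> step hierarchy by hand
from reportportal_client import RPClient
from reportportal_client.helpers import timestamp

client = RPClient(
    endpoint="http://localhost:8080",
    project="my_project",
    api_key="your-api-token",          # `token=` in older client versions
)
client.start_launch(name="Manual Launch", start_time=timestamp())

# One suite containing one step
suite_id = client.start_test_item(
    name="Checkout suite", start_time=timestamp(), item_type="SUITE")
step_id = client.start_test_item(
    name="Add item to cart", start_time=timestamp(),
    item_type="STEP", parent_item_id=suite_id)

client.log(time=timestamp(), message="Cart updated", level="INFO", item_id=step_id)

client.finish_test_item(step_id, end_time=timestamp(), status="PASSED")
client.finish_test_item(suite_id, end_time=timestamp(), status="PASSED")
client.finish_launch(end_time=timestamp())
client.terminate()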
Defect Types
ReportPortal categorizes failures into:
- Product Bug (PB): Genuine software defect
- Automation Bug (AB): Test code issue, not product
- System Issue (SI): Environment/infrastructure problem
- To Investigate (TI): Not yet categorized
The AI analyzer suggests defect types automatically based on failure patterns learned from past investigations.
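Defect types can also be set programmatically, for example from a triage script that already knows the root cause. The sketch below assumes the bulk issue-update endpoint (PUT /api/v1/<project>/item) and the default issue-type locators (pb001, ab001, si001, ti001); the item id and comment are made up, and the payload shape can vary between versions, so verify it against your instance's API documentation.
# mark_product_bug.py -- flag a failed test item as a Product Bug via the REST API
import requests

ENDPOINT = "http://localhost:8080"
PROJECT = "my_project"
API_KEY = "your-api-token"
ITEM_ID = 12345                      # numeric id of the failed test item (made up)

payload = {
    "issues": [
        {
            "testItemId": ITEM_ID,
            "issue": {
                "issueType": "pb001",          # pb/ab/si/ti locators map to PB/AB/SI/TI
                "comment": "Example triage comment",
            },
        }
    ]
}
resp = requests.put(
    f"{ENDPOINT}/api/v1/{PROJECT}/item",
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())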
Filters and Widgets
Create dashboards with widgets:
- Overall Statistics: Pass/fail pie chart per launch
- Launch Statistics: Trends over time
- Flaky Tests: Tests with inconsistent pass/fail patterns
- Most Failed Tests: Identify the most problematic tests
- Test Cases Growth: Track test suite evolution
- Unique Bugs: Discover recurring failures
Dashboard creation:
1. Launches → Add Filter (filter by project, launch name, tags)
2. Dashboard → Add New Dashboard
3. Add Widget → Choose type → Configure

CI/CD Integration
GitHub Actions
- name: Run tests with ReportPortal
env:
RP_ENDPOINT: ${{ secrets.RP_ENDPOINT }}
RP_UUID: ${{ secrets.RP_TOKEN }}
RP_PROJECT: my_project
RP_LAUNCH: "PR-${{ github.event.number }}"
RP_LAUNCH_DESCRIPTION: "Branch: ${{ github.head_ref }}"
RP_ATTRIBUTES: "build:${{ github.sha }};branch:${{ github.head_ref }}"
  run: pytest tests/ --reportportal

Tagging Launches
Tags help filter and group launches:
# pytest.ini
rp_attributes = component:auth;type:regression;env:staging

// jest.config.js
{
attributes: [
{ key: 'component', value: 'checkout' },
{ key: 'type', value: 'smoke' },
{ key: 'env', value: process.env.ENVIRONMENT }
]
}

AI Defect Analysis
The Auto-Analyzer service detects similar failures across test runs:
- Manual investigation: Investigate a failure and mark it as AB/PB/SI
- Training: The analyzer learns from your categorizations
- Auto-analysis: Future similar failures are automatically categorized
- Pattern matching: Based on error messages, stack traces, test names
To enable:
Project Settings → Auto-Analysis → Enable

Configure thresholds:
- Minimum should match: How many tokens must match (70% recommended)
- Minimum count in log: Minimum occurrences to consider a pattern
Notifications
Set up email/Slack notifications for test runs:
Project Settings → Notifications → Add Rule
Rule configuration:
- Launch name: "Regression*"
- Attributes: "env:production"
- Recipients: qa-team@example.com
- Notify on: Failed tests, Failed launches

Slack integration via webhooks:
Project Settings → Integrations → Slack
Webhook URL: https://hooks.slack.com/...
Channel: #qa-alerts

Retention and Storage
ReportPortal accumulates data fast. Configure retention:
Administration → Server Settings → Cleanup Settings
Keep launches for: 90 days
Keep logs for: 60 days
Keep screenshots for: 30 days

Performance at Scale
For large installations (thousands of test results per day), tune:
# docker-compose.yaml adjustments
  api:
    environment:
      - RP_ENVIRONMENT_VARIABLE_EXECUTOR_POOL_SIZE=20
      - RP_ENVIRONMENT_VARIABLE_ENVIRONMENT_TYPE=kubernetes
    deploy:
      resources:
        limits:
          memory: 4G
        reservations:
          memory: 2G
  postgres:
    # the official postgres image has no POSTGRES_MAX_CONNECTIONS variable;
    # raise the connection limit with a server flag instead
    command: postgres -c max_connections=200

ReportPortal is a significant infrastructure investment, but for large QA teams it pays off in reduced investigation time, better visibility into test stability, and historical trend data that would be impractical to maintain by hand.