OpenXR Testing Strategies for Cross-Platform VR
OpenXR's promise is write-once, run-anywhere VR — but runtimes differ in subtle ways that break apps silently. This guide covers the OpenXR Conformance Test Suite, mock runtime testing, action binding validation, cross-platform extension testing, and automated frame timing checks to catch runtime-specific regressions before users do.
OpenXR standardized the native VR/XR API layer, but conformance is not uniformity. Meta's runtime on Quest, Valve's OpenXR runtime on SteamVR, and Windows Mixed Reality each interpret the spec differently in edge cases — session state transitions, action set activation timing, swapchain teardown order. A test strategy that only runs on one runtime gives you false confidence.
This guide assumes you are writing a native OpenXR application in C++ or using a framework (Unreal, Godot, custom engine) that exposes the raw OpenXR API. The same principles apply if you are using the OpenXR C++ wrapper (openxr.hpp) or a higher-level binding.
The OpenXR Conformance Test Suite
The OpenXR CTS is the authoritative test suite maintained by Khronos. It validates runtime conformance, not application logic — but it is essential for two reasons:
- If you are writing a custom runtime or runtime layer, you must pass the CTS.
- Running the CTS against all your target runtimes tells you which runtime behaviors you can actually rely on.
Clone and build:
```bash
git clone https://github.com/KhronosGroup/OpenXR-CTS.git
cd OpenXR-CTS
cmake -B build -DCMAKE_BUILD_TYPE=Release \
    -DBUILD_CONFORMANCE_CLI=ON \
    -DBUILD_CONFORMANCE_TESTS=ON
cmake --build build --config Release --parallel

# Run against the active runtime
./build/src/conformance/conformance_cli/conformance_cli \
    --reporter console \
    --test-all \
    2>&1 | tee cts-results-$(date +%Y%m%d).txt
```

Run this against each target runtime and diff the outputs. Any test that passes on Meta but fails on SteamVR is a runtime divergence you need to code around.
For CI, you can run the CTS against Monado, an open-source runtime that can run headlessly on Linux:
```bash
# Install Monado
apt-get install -y monado

# Set the active runtime
export XR_RUNTIME_JSON=/usr/share/openxr/1/openxr_monado.json

# Run a subset of CTS tests headlessly
./conformance_cli --reporter console \
    --test "[composition]" \
    --test "[actions]"
```

Mock Runtime Testing for Application Logic
For testing your application's OpenXR calls without hardware, use a mock runtime that records calls and returns scripted responses. The Khronos openxr-simple-layertest project and the mock_icd from the Vulkan ecosystem offer patterns, but for XR application testing the cleanest approach is a thin API shim via the OpenXR API layer mechanism.
Create an XrMockRuntime class that wraps function pointers:
```cpp
// test/mock_runtime.h
#pragma once

#include <openxr/openxr.h>

#include <vector>

struct FrameState {
    XrTime predictedDisplayTime;
    XrDuration predictedDisplayPeriod;
    XrBool32 shouldRender;
};

class XrMockRuntime {
public:
    // Scripted responses
    std::vector<FrameState> frameQueue;
    std::vector<XrEventDataBuffer> eventQueue;
    std::vector<XrActionStateBoolean> booleanActionStates;

    // Call counts for verification
    int waitFrameCallCount = 0;
    int beginFrameCallCount = 0;
    int endFrameCallCount = 0;
    int syncActionsCallCount = 0;

    XrResult WaitFrame(XrSession session,
                       const XrFrameWaitInfo* frameWaitInfo,
                       XrFrameState* frameState)
    {
        ++waitFrameCallCount;
        if (frameQueue.empty()) return XR_ERROR_RUNTIME_FAILURE;
        const FrameState& f = frameQueue.front();
        frameState->predictedDisplayTime = f.predictedDisplayTime;
        frameState->predictedDisplayPeriod = f.predictedDisplayPeriod;
        frameState->shouldRender = f.shouldRender;
        frameQueue.erase(frameQueue.begin());
        return XR_SUCCESS;
    }

    XrResult PollEvent(XrInstance instance, XrEventDataBuffer* eventData)
    {
        if (eventQueue.empty()) return XR_EVENT_UNAVAILABLE;
        *eventData = eventQueue.front();
        eventQueue.erase(eventQueue.begin());
        return XR_SUCCESS;
    }
};
```

Inject the mock via a compile-time seam or a dependency-injection pattern in your application's XR abstraction layer.
Testing Action Bindings Across Controllers
Controller layouts differ significantly. The Meta Touch Pro controller adds a pressure-sensitive thumbrest; the Valve Index controller exposes a squeeze force axis; Windows MR motion controllers pair a touchpad with the thumbstick. Action binding tests verify that your interaction profile paths resolve correctly on each.
```cpp
// test/action_binding_tests.cpp
#include <gtest/gtest.h>

#include "mock_runtime.h"
#include "xr_app.h"

class ActionBindingTest : public ::testing::Test {
protected:
    XrMockRuntime mock;
    XrApp* app;

    void SetUp() override { app = new XrApp(&mock); }
    void TearDown() override { delete app; }
};

TEST_F(ActionBindingTest, SelectActionBinds_OculusTouch) {
    const char* interactionProfile =
        "/interaction_profiles/oculus/touch_controller";
    auto result = app->SuggestBindings(interactionProfile, {
        { app->selectAction, "/user/hand/right/input/trigger/value" },
    });
    EXPECT_EQ(result, XR_SUCCESS);
    EXPECT_TRUE(app->HasBindingForProfile(interactionProfile));
}

TEST_F(ActionBindingTest, SelectActionBinds_ValveIndex) {
    const char* interactionProfile =
        "/interaction_profiles/valve/index_controller";
    // The Index controller uses trigger/value too, but also supports squeeze
    auto result = app->SuggestBindings(interactionProfile, {
        { app->selectAction, "/user/hand/right/input/trigger/value" },
        { app->gripAction,   "/user/hand/right/input/squeeze/value" },
    });
    EXPECT_EQ(result, XR_SUCCESS);
}

TEST_F(ActionBindingTest, NoBindingForProfile_PoseInactive) {
    // The app must tolerate an active profile it suggested no bindings for
    mock.SetActiveInteractionProfile(
        "/user/hand/right",
        "/interaction_profiles/microsoft/motion_controller");
    app->AttachActionSets();

    XrActionStatePose poseState{XR_TYPE_ACTION_STATE_POSE};
    auto result = app->GetActionStatePose(app->aimAction, XR_HAND_RIGHT_EXT, &poseState);

    // Should not crash; should return XR_SUCCESS with isActive == XR_FALSE
    EXPECT_EQ(result, XR_SUCCESS);
    EXPECT_EQ(poseState.isActive, XR_FALSE);
}
```

Extension Testing
OpenXR extensions (XR_EXT_*, XR_KHR_*, XR_FB_*) are optional and runtime-dependent. Test your extension usage defensively:
```cpp
// test/extension_tests.cpp
TEST(ExtensionTest, HandTracking_GracefulFallback) {
    XrMockRuntime mock;
    mock.SetSupportedExtensions({
        // Deliberately exclude XR_EXT_hand_tracking
        "XR_KHR_vulkan_enable2",
        "XR_EXT_debug_utils",
    });

    XrApp app(&mock);
    app.Initialize();

    // The app should detect the missing extension and disable hand tracking UI
    EXPECT_FALSE(app.IsHandTrackingAvailable());
    EXPECT_FALSE(app.IsHandTrackingUIVisible());
}

TEST(ExtensionTest, EyeTracking_RequestsCorrectPermission) {
    XrMockRuntime mock;
    mock.SetSupportedExtensions({"XR_EXT_eye_gaze_interaction"});

    XrApp app(&mock);
    app.Initialize();

    // Verify the app created the view reference space used for eye gaze
    EXPECT_TRUE(mock.WasSpaceCreated(XR_REFERENCE_SPACE_TYPE_VIEW));
    EXPECT_TRUE(app.IsEyeTrackingEnabled());
}

// Test extension negotiation order
TEST(ExtensionTest, PicksPreferredCompositorExtension) {
    XrMockRuntime mock;
    // Both depth and equirect layers available — the app should prefer depth
    mock.SetSupportedExtensions({
        "XR_KHR_composition_layer_depth",
        "XR_KHR_composition_layer_equirect",
    });

    XrApp app(&mock);
    app.Initialize();

    EXPECT_EQ(app.GetActiveCompositorExtension(), "XR_KHR_composition_layer_depth");
}
```

Cross-Platform Validation with Python
For system-level cross-platform testing, a Python harness is a useful complement to C++ unit tests — driving compiled test binaries as subprocesses (as below), or calling into the OpenXR loader directly via ctypes. This lets you write data-driven tests that run the same scenario against multiple runtime configurations:
```python
# tests/test_session_lifecycle.py
import json
import os
import subprocess

import pytest

RUNTIMES = [
    {"XR_RUNTIME_JSON": "/usr/share/openxr/1/openxr_monado.json", "name": "Monado"},
    # Add others when hardware is available in your lab
]

@pytest.mark.parametrize("runtime", RUNTIMES, ids=lambda r: r["name"])
def test_session_create_destroy_cycle(runtime, tmp_path):
    """Session create/destroy must not leak XrSpace or swapchain objects."""
    env = os.environ.copy()
    env["XR_RUNTIME_JSON"] = runtime["XR_RUNTIME_JSON"]
    result = subprocess.run(
        ["./build/xr_lifecycle_test", "--cycles", "10"],
        env=env,
        capture_output=True,
        text=True,
        timeout=60,
    )
    assert result.returncode == 0, \
        f"Lifecycle test failed on {runtime['name']}:\n{result.stderr}"
    assert "LEAK" not in result.stdout, \
        f"Object leak detected on {runtime['name']}"

@pytest.mark.parametrize("runtime", RUNTIMES, ids=lambda r: r["name"])
def test_frame_timing_variance(runtime):
    """Frame display period must not vary more than 10% from nominal."""
    result = subprocess.run(
        ["./build/xr_frame_timer", "--frames", "120", "--json"],
        env={**os.environ, "XR_RUNTIME_JSON": runtime["XR_RUNTIME_JSON"]},
        capture_output=True, text=True, timeout=30,
    )
    data = json.loads(result.stdout)
    variance = data["display_period_variance_percent"]
    assert variance < 10.0, \
        f"Frame period variance {variance:.1f}% exceeds 10% on {runtime['name']}"
```

Automated Frame Timing Tests
Dropped frames in VR cause discomfort. Even without a physical headset, you can test that your render loop stays within budget using a synthetic timing harness:
```cpp
// test/frame_timing_test.cpp
TEST(FrameTiming, RenderLoopStaysWithinBudget_72Hz) {
    XrMockRuntime mock;
    // Configure a 72Hz predicted display period (1/72 s = 13.88ms = 13,888,888 ns)
    mock.SetDisplayPeriod(13'888'888);

    // Queue 100 frames with incrementing timestamps
    for (int i = 0; i < 100; i++) {
        mock.PushFrame({
            .predictedDisplayTime = 1'000'000'000LL + (i * 13'888'888LL),
            .predictedDisplayPeriod = 13'888'888,
            .shouldRender = XR_TRUE,
        });
    }

    XrApp app(&mock);
    app.Initialize();

    auto start = std::chrono::high_resolution_clock::now();
    app.RunFrameLoop(100);
    auto elapsed = std::chrono::high_resolution_clock::now() - start;

    // Each frame should complete well under the display period:
    // a 12ms ceiling leaves ~1.9ms of headroom in the 13.88ms budget
    double ms_per_frame = std::chrono::duration<double, std::milli>(elapsed).count() / 100.0;
    EXPECT_LT(ms_per_frame, 12.0)
        << "Average frame time " << ms_per_frame << "ms exceeds budget";

    // Verify EndFrame was called for every frame (no skips)
    EXPECT_EQ(mock.endFrameCallCount, 100);
}
```

CI Pipeline for Multi-Runtime Testing
```yaml
# .github/workflows/openxr-tests.yml
name: OpenXR Tests
on: [push, pull_request]

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive

      - name: Install dependencies
        run: |
          sudo apt-get install -y \
            libopenxr-dev libvulkan-dev \
            googletest libgtest-dev \
            monado

      - name: Build
        run: |
          cmake -B build -DBUILD_TESTS=ON -DCMAKE_BUILD_TYPE=Release
          cmake --build build --parallel

      - name: Run unit tests (mock runtime)
        run: ./build/test/xr_unit_tests --gtest_output=xml:test-results/unit.xml

      - name: Run integration tests (Monado headless)
        run: |
          export XR_RUNTIME_JSON=/usr/share/openxr/1/openxr_monado.json
          pytest tests/ -v --tb=short \
            --junitxml=test-results/integration.xml

      - name: Publish test results
        uses: mikepenz/action-junit-report@v4
        if: always()
        with:
          report_paths: 'test-results/*.xml'
```

Where to Focus
The highest-value OpenXR tests, in priority order:
- Session lifecycle (create → ready → synchronized → visible → focused → stopping → idle → exiting)
- Action binding coverage for each target interaction profile
- Extension availability checks and fallbacks
- Swapchain acquire/wait/release sequencing
- Reference space validity after recenter events
- Frame timing under simulated load
Skip pixel-perfect rendering tests at this layer — those require hardware and belong in device lab testing or visual regression tools.
HelpMeTest can monitor your CI pipeline's OpenXR test results across multiple runtime configurations, surfacing flaky tests and runtime-specific regressions before they reach users.