Apple Vision Pro App Testing: RealityKit and SwiftUI
visionOS introduces entirely new UI paradigms — volumetric windows, immersive spaces, eye tracking, and spatial audio — that existing iOS testing patterns don't cover. This guide walks through XCTest for visionOS, RealityKit entity hierarchy testing, SwiftUI volume and immersive space testing, eye tracking simulation in Simulator, and render budget verification.
Testing for Apple Vision Pro is testing for a platform where the interaction model, rendering pipeline, and coordinate system are all fundamentally different from anything that came before. Touches become eye gazes. 2D layouts become volumes. Scenes become immersive spaces. Your existing XCTest muscle memory still applies, but you need new patterns on top of it.
XCTest on visionOS
XCTest works on visionOS with the same API you know from iOS and macOS. Add a test target to your visionOS app target in Xcode, select the visionOS Simulator as the run destination, and your tests run exactly as expected.
The critical difference: visionOS renders your content out of process. Your app's code — the App struct, view models, and RealityKit entity construction — runs in your app's process, while the system compositor renders the shared 3D scene. Tests run in the app's process: you can test logic, RealityKit entity hierarchies, and SwiftUI views, but you cannot inspect what the compositor actually draws on screen.
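The tests below exercise a hypothetical SpatialAudioSource type from the app target. For reference, a minimal sketch of such a model — assuming a linear falloff that reaches silence at twice the radius (an illustrative choice, not from the original project) — might look like:

```swift
// Hypothetical model under test: a point audio source with linear falloff.
struct SpatialAudioSource {
    let position: SIMD3<Float>
    let radius: Float

    // True when a listener position lies within the source's radius.
    func isWithinRange(_ point: SIMD3<Float>) -> Bool {
        let d = position - point
        return (d * d).sum().squareRoot() <= radius
    }

    // Gain is 1.0 at the source, falling linearly to 0.0 at 2 * radius.
    func gain(at distance: Float) -> Float {
        max(0, 1 - distance / (2 * radius))
    }
}
```

Pure value types like this are the sweet spot for the macOS-speed unit tier: no simulator, no RealityKit runtime, just math.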
// Tests/AppLogicTests.swift
import XCTest
@testable import MyVisionApp

final class SpatialAudioTests: XCTestCase {
    func testSpatialAudioSourcePositioning() throws {
        let source = SpatialAudioSource(
            position: SIMD3<Float>(1.0, 0.0, -2.0),
            radius: 0.5
        )
        XCTAssertEqual(source.position.x, 1.0, accuracy: 0.001)
        XCTAssertEqual(source.position.z, -2.0, accuracy: 0.001)
        XCTAssertTrue(source.isWithinRange(SIMD3<Float>(0.8, 0.1, -1.9)))
        XCTAssertFalse(source.isWithinRange(SIMD3<Float>(5.0, 0.0, -2.0)))
    }

    func testSpatialAudioFalloffCurve() throws {
        let source = SpatialAudioSource(position: .zero, radius: 3.0)
        let gain0 = source.gain(at: 0.0)
        let gain1 = source.gain(at: 1.5)
        let gain2 = source.gain(at: 3.0)
        let gain3 = source.gain(at: 6.0)
        XCTAssertEqual(gain0, 1.0, accuracy: 0.01)
        XCTAssertGreaterThan(gain1, gain2)
        XCTAssertGreaterThan(gain2, gain3)
        XCTAssertEqual(gain3, 0.0, accuracy: 0.01)
    }
}
Testing RealityKit Entity Hierarchies
RealityKit entities are the scene graph nodes of visionOS. They compose hierarchically, carry components, and respond to system events. Testing them is straightforward — you do not need a live RealityView to construct and inspect an entity tree.
import XCTest
import RealityKit
@testable import MyVisionApp

final class EntityHierarchyTests: XCTestCase {
    func testSolarSystemHierarchy() throws {
        let solar = SolarSystemEntity()
        // Verify entity count
        XCTAssertEqual(solar.planets.count, 8)
        // Verify hierarchy depth
        let earth = try XCTUnwrap(solar.planet(named: "Earth"))
        XCTAssertNotNil(earth.parent)
        XCTAssertEqual(earth.parent?.name, "SolarSystem")
        // Verify moon is a child of Earth
        let moon = try XCTUnwrap(earth.findEntity(named: "Moon"))
        XCTAssertEqual(moon.parent?.name, "Earth")
    }

    func testEntityComponents() throws {
        let planet = PlanetEntity(name: "Mars", radius: 0.034)
        // Verify required components are present
        XCTAssertTrue(planet.components.has(ModelComponent.self))
        XCTAssertTrue(planet.components.has(CollisionComponent.self))
        XCTAssertTrue(planet.components.has(PhysicsBodyComponent.self))
        // ShapeResource is opaque, so verify the collision shape via its
        // bounding box: a sphere of radius r has extents of 2r per axis
        let collision = try XCTUnwrap(planet.components[CollisionComponent.self])
        let shape = try XCTUnwrap(collision.shapes.first)
        XCTAssertEqual(shape.bounds.extents.x, 0.068, accuracy: 0.001)
    }

    func testEntityTransformAfterAnimation() async throws {
        let cube = ModelEntity(mesh: .generateBox(size: 0.1))
        cube.position = SIMD3<Float>(0, 0, -1)
        // Apply a transform animation
        let targetTransform = Transform(
            scale: .one,
            rotation: simd_quatf(angle: .pi, axis: [0, 1, 0]),
            translation: SIMD3<Float>(1, 0, -1)
        )
        let animation = FromToByAnimation(
            to: targetTransform,
            duration: 0.01, // very short for test speed
            bindTarget: .transform
        )
        let resource = try AnimationResource.generate(with: animation)
        _ = cube.playAnimation(resource)
        // Wait for animation to complete
        try await Task.sleep(for: .milliseconds(50))
        XCTAssertEqual(
            cube.position.x, 1.0, accuracy: 0.05,
            "Entity should have moved to target X position"
        )
    }
}
SwiftUI Testing for Volumetric Windows
Volumetric windows render your View content in 3D space. ViewInspector (a popular third-party library) works on visionOS for inspecting SwiftUI view hierarchies in tests. For visionOS-specific types like Model3D and RealityView, you test the surrounding logic rather than the renderable content:
import XCTest
import ViewInspector
import SwiftUI
@testable import MyVisionApp

final class VolumetricWindowTests: XCTestCase {
    func testProductCarouselShowsCorrectCount() throws {
        let products = Product.sampleData(count: 5)
        let view = ProductCarouselView(products: products)
        let list = try view.inspect().find(ViewType.ForEach.self)
        XCTAssertEqual(try list.count, 5)
    }

    func testEmptyStateIsVisibleWithNoProducts() throws {
        let view = ProductCarouselView(products: [])
        XCTAssertNoThrow(
            try view.inspect().find(text: "No products available")
        )
    }

    func testSelectedProductUpdatesViewModel() throws {
        let viewModel = ProductViewModel()
        let products = Product.sampleData(count: 3)
        let view = ProductCarouselView(products: products)
            .environmentObject(viewModel)
        // Simulate selection tap
        let firstCard = try view.inspect()
            .find(ProductCardView.self)
        try firstCard.callOnTapGesture()
        XCTAssertEqual(viewModel.selectedProduct?.id, products[0].id)
    }
}
Testing Immersive Spaces
ImmersiveSpace scenes are harder to unit test directly because they require a visionOS runtime context to open. The strategy is to push logic out of the ImmersiveSpace body and into testable view models and service objects.
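With the logic extracted, the content view inside the space reduces to a thin shell around the view model. A hypothetical sketch of that shell — the view name, gesture wiring, and entity handling here are illustrative assumptions, not code from the original project:

```swift
import SwiftUI
import RealityKit

// Hypothetical thin ImmersiveSpace content view: every decision lives in the
// view model below, so this body only forwards input and mirrors state.
struct PlacementSpaceView: View {
    @State private var viewModel = ImmersiveSpaceViewModel()

    var body: some View {
        RealityView { content in
            // Add any anchors that already exist when the scene is created
            for anchor in viewModel.anchors { content.add(anchor) }
        } update: { content in
            // Mirror anchors the view model created since the last update
            for anchor in viewModel.anchors where anchor.parent == nil {
                content.add(anchor)
            }
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Forward the tapped entity's world transform to testable logic
                    let transform = value.entity.transformMatrix(relativeTo: nil)
                    Task { await viewModel.requestPlacement(at: transform) }
                }
        )
    }
}
```

Nothing in this body is worth unit testing on its own; everything assertable lives in the view model, which is the point of the pattern.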
// ImmersiveSpaceViewModel.swift
import Observation
import RealityKit

@MainActor
@Observable
class ImmersiveSpaceViewModel {
    var anchors: [AnchorEntity] = []
    var isPlacementModeActive = false
    var placementError: PlacementError? = nil

    func requestPlacement(at transform: simd_float4x4) async {
        guard isPlacementModeActive else {
            placementError = .notInPlacementMode
            return
        }
        let anchor = AnchorEntity(world: transform)
        anchors.append(anchor)
    }
}

// Tests
final class ImmersiveSpaceViewModelTests: XCTestCase {
    @MainActor
    func testPlacement_FailsWhenNotInPlacementMode() async {
        let vm = ImmersiveSpaceViewModel()
        vm.isPlacementModeActive = false
        let identity = matrix_identity_float4x4
        await vm.requestPlacement(at: identity)
        XCTAssertEqual(vm.placementError, .notInPlacementMode)
        XCTAssertTrue(vm.anchors.isEmpty)
    }

    @MainActor
    func testPlacement_AddsAnchorWhenActive() async {
        let vm = ImmersiveSpaceViewModel()
        vm.isPlacementModeActive = true
        let transform = matrix_identity_float4x4
        await vm.requestPlacement(at: transform)
        await vm.requestPlacement(at: transform)
        XCTAssertEqual(vm.anchors.count, 2)
        XCTAssertNil(vm.placementError)
    }
}
Reality Composer Pro Scene Validation
Reality Composer Pro exports .usda and .reality files. You can validate the exported scene structure in tests using RealityKit's entity-loading APIs; RealityKit is also available on macOS, so these tests can run without a visionOS device:
// Tests run on macOS target for fast iteration
import XCTest
import RealityKit

final class SceneValidationTests: XCTestCase {
    func testLivingRoomSceneLoadsWithExpectedEntities() async throws {
        // Load from test bundle
        let scene = try await Entity(named: "LivingRoom",
                                     in: Bundle(for: Self.self))
        // Verify required anchor points exist
        XCTAssertNotNil(scene.findEntity(named: "SofaAnchor"))
        XCTAssertNotNil(scene.findEntity(named: "TVAnchor"))
        XCTAssertNotNil(scene.findEntity(named: "CoffeeTableAnchor"))
        // Verify the scene stays within the polygon budget
        let triangleCount = countMeshTriangles(in: scene)
        XCTAssertLessThan(triangleCount, 100_000,
                          "Scene mesh complexity \(triangleCount) triangles exceeds budget")
    }

    private func countMeshTriangles(in entity: Entity) -> Int {
        var total = 0
        if let model = entity.components[ModelComponent.self] {
            // Triangle indices come in groups of three per mesh part
            for meshModel in model.mesh.contents.models {
                for part in meshModel.parts {
                    total += (part.triangleIndices?.count ?? 0) / 3
                }
            }
        }
        entity.children.forEach { total += countMeshTriangles(in: $0) }
        return total
    }
}
Eye Tracking Simulation in Simulator
In the visionOS Simulator (Xcode 15+), moving the pointer over a view simulates eye gaze: elements with a hover effect highlight as if focused. For automated UI tests, XCUIApplication does not expose eye gaze as a distinct input type; instead, a tap() on an element stands in for the look-and-pinch gesture, and the simulator treats it as a focused selection:
import XCTest

final class EyeTrackingUITests: XCTestCase {
    var app: XCUIApplication!

    override func setUp() {
        super.setUp()
        app = XCUIApplication()
        app.launch()
    }

    func testHoverEffectActivatesOnFocus() {
        // XCUITest treats a button tap as focus+select in the visionOS Simulator
        let button = app.buttons["Add to Scene"]
        XCTAssertTrue(button.exists)
        // Simulate look+pinch (tap in visionOS UI testing)
        button.tap()
        // Verify the action completed
        XCTAssertTrue(app.staticTexts["Object Added"].waitForExistence(timeout: 2))
    }

    func testMenuOpensOnGaze() {
        let menuTrigger = app.buttons["Options"]
        menuTrigger.tap()
        // Verify the contextual menu appeared
        let menu = app.otherElements["ContextualMenu"]
        XCTAssertTrue(menu.waitForExistence(timeout: 1))
        XCTAssertTrue(app.buttons["Delete"].exists)
        XCTAssertTrue(app.buttons["Duplicate"].exists)
        XCTAssertTrue(app.buttons["Share"].exists)
    }

    func testOrnamentsAreAccessible() {
        // Ornaments sit outside the window frame; find them by accessibility identifier
        let ornament = app.otherElements["ToolbarOrnament"]
        XCTAssertTrue(ornament.waitForExistence(timeout: 2))
        XCTAssertGreaterThan(ornament.buttons.count, 0)
    }
}
Performance Testing: Render Budget and Frame Timing
visionOS targets 90 fps for comfortable viewing, which leaves roughly 11 ms per frame. Xcode's XCTMetric and measure APIs work on visionOS:
import XCTest
import RealityKit

final class RenderPerformanceTests: XCTestCase {
    func testSceneLoadTime() {
        measure(metrics: [XCTClockMetric(), XCTMemoryMetric()]) {
            let expectation = self.expectation(description: "Scene loaded")
            Task {
                _ = try? await Entity(named: "HeavyScene",
                                      in: Bundle(for: Self.self))
                expectation.fulfill()
            }
            wait(for: [expectation], timeout: 5.0)
        }
    }

    func testParticleSystemMemory() {
        let baseline = XCTMemoryMetric()
        measure(metrics: [baseline]) {
            var systems: [ParticleEmitterComponent] = []
            for _ in 0..<50 {
                var emitter = ParticleEmitterComponent()
                emitter.mainEmitter.birthRate = 100
                systems.append(emitter)
            }
            // Ensure no leak: systems should deallocate here
            systems.removeAll()
        }
    }
}
For render frame timing on device, use Instruments with the RealityKit Trace template. That workflow is not automatable via XCTest, but you can capture traces with xctrace in CI:
# Capture a 30-second trace on a connected Vision Pro
xcrun xctrace record \
  --device <device-udid> \
  --template "RealityKit Trace" \
  --output trace.xctrace \
  --time-limit 30s \
  --attach <bundle-id>

# Export frame time data for analysis
xcrun xctrace export --input trace.xctrace \
  --xpath '//table[@schema-ref="vr-frame-timing"]' \
  --output frame-times.csv
SharePlay Testing
SharePlay features require multiple participants. Test the data synchronization logic in isolation:
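The test below leans on two doubles that are not shown in full: a message payload enum and a MockGroupSession that records outgoing messages instead of sending them over a real GroupSessionMessenger. A minimal sketch, with names assumed to mirror the test:

```swift
// Hypothetical message payload shared by SceneSyncCoordinator and its peers
enum SyncMessage: Equatable {
    case objectMoved(id: String, to: SIMD3<Float>)
}

// Test double standing in for a GroupSession + messenger pair: it records
// messages so tests can assert on what would have been broadcast.
final class MockGroupSession {
    private(set) var sentMessages: [SyncMessage] = []

    func send(_ message: SyncMessage) {
        sentMessages.append(message)
    }
}
```

Because the mock records plain Equatable values, assertions stay exact and the test never needs a second participant or a network.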
final class SharePlayTests: XCTestCase {
    func testGroupSessionSyncsStateToAllParticipants() async throws {
        let coordinator = SceneSyncCoordinator()
        let mockSession = MockGroupSession()
        coordinator.configure(with: mockSession)
        // Simulate a state change from the local user
        coordinator.broadcastStateChange(.objectMoved(id: "cube-1", to: [1, 0, -2]))
        // Verify the message was sent to all participants
        XCTAssertEqual(mockSession.sentMessages.count, 1)
        guard case .objectMoved(let id, let pos)? = mockSession.sentMessages.first else {
            XCTFail("Expected objectMoved message")
            return
        }
        XCTAssertEqual(id, "cube-1")
        XCTAssertEqual(pos, SIMD3<Float>(1, 0, -2))
    }
}
Putting It Together
}Putting It Together
The visionOS testing pyramid looks like:
- Unit tests (macOS target): entity logic, view models, data models, spatial math — fast, no simulator needed
- Integration tests (visionOS Simulator): SwiftUI view inspection, XCUIApplication flows, scene loading
- Device tests: render budget profiling, eye tracking accuracy, spatial audio perception — manual or via xctrace
HelpMeTest can automate your companion iOS/web app's regression tests alongside your visionOS release cycle, so a backend change never silently breaks the non-spatial parts of your product.