Voice UI Testing: Accessibility, NLU Accuracy, and Multi-Turn Dialog Validation
Voice UI quality requires testing across five dimensions: NLU intent recognition accuracy (precision/recall/F1), multi-turn dialog flow completeness, slot filling edge cases and error recovery, accessibility compliance for voice, and response latency, plus a cross-platform consistency matrix covering Alexa, Google Assistant, and Siri. This guide defines the test matrix and KPIs for each dimension.
Voice user interfaces fail differently from graphical UIs. A visual UI with a broken button is immediately visible; a voice UI with a broken NLU model just sounds confused. Users abandon voice apps faster than web apps — if the third reprompt doesn't resolve their issue, they're gone. Testing voice UI quality means quantifying ambiguity tolerance, measuring recovery paths, and validating that your spoken prompts are as clear as your written copy.
This guide builds a complete quality framework for voice interfaces — platform-agnostic principles that apply to Alexa Skills, Google Actions, Siri Shortcuts, and custom voice agents.
Dimension 1: NLU Intent Recognition Accuracy
The foundation of voice UI quality is whether your NLU correctly classifies what users say. This requires a labeled test set — utterances you've collected or synthesized, each with a ground-truth intent label.
Metrics to Track
| Metric | Formula | Target | Alert Threshold |
|---|---|---|---|
| Precision | TP / (TP + FP) | ≥ 0.90 | < 0.80 |
| Recall | TP / (TP + FN) | ≥ 0.88 | < 0.75 |
| F1 Score | 2 × (P × R) / (P + R) | ≥ 0.89 | < 0.77 |
| Confusion Rate | (FP + FN) / total | ≤ 0.12 | > 0.25 |
Track these per intent, not just globally. An aggregate F1 of 0.91 can hide a critical intent with F1 of 0.50 if that intent has low frequency in your test set.
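A minimal sketch of such a per-intent gate, assuming the dictionary returned by scikit-learn's classification_report(..., output_dict=True), which is the same structure the evaluation script below produces:
def assert_per_intent_f1(report: dict, min_f1: float = 0.75) -> None:
    # Fail the build if any single intent falls below the per-intent floor,
    # even when the aggregate (macro/weighted) scores still look healthy.
    failing = {
        intent: metrics["f1-score"]
        for intent, metrics in report.items()
        if isinstance(metrics, dict)
        and intent not in ("accuracy", "macro avg", "weighted avg")
        and metrics["f1-score"] < min_f1
    }
    assert not failing, f"Intents below per-intent F1 floor {min_f1}: {failing}"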
Building a Test Utterance Set
For each intent, you need at minimum:
- 20 canonical utterances (the obvious phrasings)
- 10 paraphrase variants (the same intent expressed differently)
- 5 adversarial utterances (things that sound similar to other intents)
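One lightweight way to organize the set (the file name and the category field are illustrative choices, not a platform requirement) is a flat list of labeled utterances in the {text, intent} shape the evaluation script below expects:
# data/test_utterances.py
TEST_UTTERANCES = [
    # Canonical phrasings
    {"text": "book a table for two", "intent": "book_table", "category": "canonical"},
    # Paraphrase variants
    {"text": "i'd like to reserve a spot for dinner", "intent": "book_table", "category": "paraphrase"},
    # Adversarial utterances that sound close to a neighboring intent
    {"text": "can i change my table booking", "intent": "modify_booking", "category": "adversarial"},
    # ... repeat until each intent has its 20 / 10 / 5 split
]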
# scripts/evaluate_nlu.py
from sklearn.metrics import classification_report, confusion_matrix
import numpy as np
def evaluate_intent_model(model_fn, test_utterances: list[dict]) -> dict:
"""
model_fn: callable(utterance: str) -> str (predicted intent)
test_utterances: list of {text: str, intent: str}
"""
y_true = [u["intent"] for u in test_utterances]
y_pred = [model_fn(u["text"]) for u in test_utterances]
report = classification_report(y_true, y_pred, output_dict=True, zero_division=0)
# Find worst-performing intents
intent_f1 = {
intent: metrics["f1-score"]
for intent, metrics in report.items()
if intent not in ("accuracy", "macro avg", "weighted avg")
and isinstance(metrics, dict)
}
worst_intents = sorted(intent_f1.items(), key=lambda x: x[1])[:5]
return {
"macro_f1": report["macro avg"]["f1-score"],
"macro_precision": report["macro avg"]["precision"],
"macro_recall": report["macro avg"]["recall"],
"accuracy": report["accuracy"],
"worst_intents": worst_intents,
"full_report": report,
}
# Threshold assertions for CI
def assert_nlu_quality(results: dict, min_f1: float = 0.85):
assert results["macro_f1"] >= min_f1, (
f"Macro F1 {results['macro_f1']:.3f} below threshold {min_f1}\n"
f"Worst intents: {results['worst_intents']}"
)
Confusion Matrix Analysis
A confusion matrix shows which intents are being mistaken for each other — essential for debugging NLU failures:
import matplotlib.pyplot as plt
import seaborn as sns
def plot_intent_confusion(y_true, y_pred, labels, output_path="confusion_matrix.png"):
cm = confusion_matrix(y_true, y_pred, labels=labels, normalize='true')
fig, ax = plt.subplots(figsize=(12, 10))
sns.heatmap(cm, annot=True, fmt='.2f', xticklabels=labels, yticklabels=labels,
ax=ax, cmap='Blues', vmin=0, vmax=1)
ax.set_ylabel('True Intent')
ax.set_xlabel('Predicted Intent')
ax.set_title('Intent Confusion Matrix (normalized)')
plt.tight_layout()
plt.savefig(output_path)
plt.close()
High off-diagonal values between two intents (e.g., cancel_booking and modify_booking) tell you to add more discriminating training examples or consolidate the intents.
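To surface those pairs without reading the heatmap by eye, a small helper (a sketch reusing the same confusion_matrix call and normalization as above) can rank the largest off-diagonal cells:
def top_confused_pairs(y_true, y_pred, labels, k=5):
    # Rank (true intent, predicted intent) pairs by normalized confusion,
    # ignoring the diagonal (correct predictions).
    cm = confusion_matrix(y_true, y_pred, labels=labels, normalize='true')
    pairs = [
        (labels[i], labels[j], cm[i][j])
        for i in range(len(labels))
        for j in range(len(labels))
        if i != j and cm[i][j] > 0
    ]
    return sorted(pairs, key=lambda p: p[2], reverse=True)[:k]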
Dimension 2: Multi-Turn Dialog Flow Validation
Voice conversations are stateful. A user's seventh utterance depends on what happened in turns one through six. Multi-turn testing requires replaying complete conversation scenarios and asserting on the full state at each turn, not just the final response.
Dialog Flow Test Matrix
Define your flows as state machines and test every valid transition:
# tests/test_dialog_flows.py
import re
import pytest
from dataclasses import dataclass
@dataclass
class ConversationTurn:
utterance: str
expected_intent: str
expected_response_pattern: str
expected_slots: dict
expected_next_state: str
allow_reprompt: bool = False
BOOKING_FLOW = [
ConversationTurn(
utterance="book a table",
expected_intent="book_table",
expected_response_pattern=r"(when|date|what day)",
expected_slots={},
expected_next_state="collect_date",
),
ConversationTurn(
utterance="this friday",
expected_intent="provide_date",
expected_response_pattern=r"(time|what time|when.*time)",
expected_slots={"date": "friday"},
expected_next_state="collect_time",
),
ConversationTurn(
utterance="seven thirty in the evening",
expected_intent="provide_time",
expected_response_pattern=r"(how many|party size|people)",
expected_slots={"date": "friday", "time": "19:30"},
expected_next_state="collect_party_size",
),
ConversationTurn(
utterance="just two of us",
expected_intent="provide_party_size",
expected_response_pattern=r"confirm",
expected_slots={"date": "friday", "time": "19:30", "party_size": 2},
expected_next_state="confirm_booking",
),
]
@pytest.mark.parametrize("flow,flow_name", [
(BOOKING_FLOW, "booking_happy_path"),
])
def test_dialog_flow(conversation_client, flow, flow_name):
session = conversation_client.new_session()
for i, turn in enumerate(flow):
response = session.send(turn.utterance)
assert re.search(turn.expected_response_pattern, response.text, re.IGNORECASE), (
f"Turn {i} ({flow_name}): utterance '{turn.utterance}'\n"
f" Expected pattern: {turn.expected_response_pattern!r}\n"
f" Got: {response.text!r}"
)
for slot_name, slot_value in turn.expected_slots.items():
actual = session.get_slot(slot_name)
assert actual == slot_value, (
f"Turn {i}: slot '{slot_name}' expected {slot_value!r}, got {actual!r}"
)
Context Retention Testing
Test that context persists correctly across interruptions and topic changes:
def test_context_retained_after_digression(conversation_client):
session = conversation_client.new_session()
# Start booking
session.send("book a table for two on friday")
assert session.get_slot("party_size") == 2
assert session.get_slot("date") == "friday"
# Digression: ask about hours
response = session.send("wait, what are your opening hours?")
assert "hours" in response.text.lower() or "open" in response.text.lower()
# Return to booking — context should be preserved
response = session.send("ok, back to my booking")
assert session.get_slot("party_size") == 2 # Must not be lost
assert session.get_slot("date") == "friday" # Must not be lost
assert "time" in response.text.lower() # Should continue from where we left offDimension 3: Slot Filling Edge Cases and Error Recovery
Error recovery is where voice UIs succeed or fail. The key behaviors to test:
Elicitation limit: After N failed attempts to fill a required slot, the system should gracefully exit or offer alternatives — not loop infinitely.
Re-prompting variation: Each re-prompt should use different wording. Repeating the exact same text twice signals a poorly designed VUI.
Out-of-vocabulary handling: What happens when a user says something completely unexpected during slot filling?
# Test elicitation limit
def test_elicitation_exits_after_max_retries(conversation_client):
MAX_RETRIES = 3
session = conversation_client.new_session()
session.send("book a table") # trigger date collection
prompts_seen = set()
for attempt in range(MAX_RETRIES + 1):
response = session.send("I don't know") # consistently bad input
if attempt < MAX_RETRIES:
prompts_seen.add(response.text)
else:
# After max retries, should exit gracefully
assert any(word in response.text.lower()
for word in ["sorry", "later", "help", "operator"]), \
f"Expected graceful exit after {MAX_RETRIES} retries, got: {response.text}"
# Re-prompt variation: shouldn't repeat exact same text
assert len(prompts_seen) > 1, \
f"Re-prompts should vary but all were identical: {prompts_seen}"
# Test out-of-vocabulary input during slot filling
@pytest.mark.parametrize("unexpected_input", [
"the quick brown fox",
"42",
"!@#$%",
"tell me a joke",
"", # empty/silence
])
def test_ood_input_during_slot_filling(conversation_client, unexpected_input):
session = conversation_client.new_session()
session.send("book a table") # trigger date collection
response = session.send(unexpected_input)
# Should not crash, should re-elicit
assert response.text is not None
assert len(response.text) > 0
assert session.current_state == "collect_date" # Still in same stateDimension 4: Voice Accessibility (WCAG for Voice)
Voice interfaces have accessibility requirements that differ from graphical UIs but are equally important. Key principles from WCAG 2.1 adapted for voice:
Perceivable: All information conveyed must be audible. Never rely solely on visual output (display cards) to communicate critical information.
Operable: Users must be able to navigate the entire flow using speech. Response timeouts must be generous (adapted from WCAG 2.2.1 Timing Adjustable: allow at least 20 seconds to respond).
Understandable: Error messages must explain what went wrong and how to fix it. "I didn't catch that" fails this criterion. "I didn't hear a date. Please say a date like 'next Friday' or 'June 15th'." passes.
Robust: The interface must work across device types — smart speakers, phone assistants, in-car systems with different acoustic environments.
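Acoustic robustness is hard to reproduce in unit tests. One common approximation, offered here as a sketch rather than a substitute for on-device testing, is to replay the NLU test set with simulated ASR errors (dropped words, filler insertions) and check that intent accuracy degrades gracefully instead of collapsing:
import random

def simulate_asr_noise(utterance: str, error_prob: float = 0.1, rng=random) -> str:
    # Crude stand-in for ASR errors: randomly drop words and inject fillers.
    # This does not model any real recognizer; it only stresses the NLU layer.
    words = []
    for word in utterance.split():
        if rng.random() < error_prob:
            continue  # word misheard and dropped
        if rng.random() < error_prob:
            words.append(rng.choice(["uh", "um"]))  # filler inserted
        words.append(word)
    return " ".join(words) or utterance
Re-running evaluate_intent_model on the perturbed set and comparing macro F1 against the clean baseline gives a rough robustness signal you can track over time.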
Accessibility Test Cases
ACCESSIBILITY_TESTS = [
{
"name": "error_message_includes_correction_hint",
"trigger": "invalid_date_input",
"assertion": lambda r: any(word in r.lower()
for word in ["for example", "like", "such as", "try saying"]),
"rationale": "Error messages must include correction guidance (WCAG 3.3.1)",
},
{
"name": "no_timeout_under_20_seconds",
"trigger": "silence_after_prompt",
"assertion": lambda r: r.timeout_ms >= 20000,
"rationale": "Minimum response time per WCAG 2.2.1",
},
{
"name": "reprompt_available_after_timeout",
"trigger": "no_response_given",
"assertion": lambda r: r.reprompt_text is not None,
"rationale": "System must prompt again after silence",
},
{
"name": "critical_info_in_speech_not_only_card",
"trigger": "booking_confirmation",
"assertion": lambda r: r.booking_id in r.speech_output,
"rationale": "Confirmation number must be spoken, not only shown in card",
},
]
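A small driver can turn that table into executable tests. The trigger() helper below is an assumption about your test harness (a method that drives the skill into the named situation and returns the response object), not a real SDK call:
import pytest

@pytest.mark.parametrize("case", ACCESSIBILITY_TESTS, ids=lambda c: c["name"])
def test_accessibility(conversation_client, case):
    response = conversation_client.trigger(case["trigger"])  # hypothetical harness helper
    assert case["assertion"](response), case["rationale"]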
Dimension 5: Latency Testing
Voice response latency is a user experience metric, not just an infrastructure metric. Users perceive latency differently in voice: silence feels longer than a loading screen. The target is under 1 second for simple responses and under 3 seconds for complex webhook calls.
import time
import statistics
import pytest
def measure_response_latency(conversation_fn, utterances: list[str], runs: int = 10) -> dict:
latencies = []
for utterance in utterances:
for _ in range(runs):
start = time.perf_counter()
conversation_fn(utterance)
end = time.perf_counter()
latencies.append((end - start) * 1000) # ms
return {
"p50": statistics.median(latencies),
"p95": statistics.quantiles(latencies, n=20)[18], # 95th percentile
"p99": statistics.quantiles(latencies, n=100)[98],
"max": max(latencies),
}
def test_webhook_latency_sla(conversation_client):
stats = measure_response_latency(
conversation_fn=lambda u: conversation_client.send(u),
utterances=["book a table", "what time do you open", "cancel my reservation"],
runs=20,
)
assert stats["p95"] < 2000, f"p95 latency {stats['p95']:.0f}ms exceeds 2000ms SLA"
assert stats["p99"] < 3000, f"p99 latency {stats['p99']:.0f}ms exceeds 3000ms SLA"Cross-Platform Testing Matrix
When your voice experience runs on multiple platforms, you need a test matrix that covers platform-specific behaviors:
| Test Scenario | Alexa | Google Assistant | Siri | Priority |
|---|---|---|---|---|
| Happy path booking flow | ✓ | ✓ | ✓ | P0 |
| Slot re-prompting | ✓ | ✓ | — | P0 |
| Error recovery | ✓ | ✓ | ✓ | P0 |
| Session persistence | ✓ | ✓ | — | P1 |
| AudioPlayer / media | ✓ | — | — | P1 |
| Rich responses (cards) | ✓ | ✓ | — | P2 |
| Account linking | ✓ | ✓ | ✓ | P1 |
| Latency under 1s | ✓ | ✓ | ✓ | P0 |
Run your core P0 tests against every platform on every deployment. Run P1 and P2 in daily CI.
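One way to wire the matrix into pytest is to parametrize the shared scenario tests over per-platform clients; the make_platform_client factory here is a hypothetical name for whatever wraps each platform's simulation or testing API:
# conftest.py (sketch)
import pytest

PLATFORMS = ["alexa", "google_assistant", "siri"]

@pytest.fixture(params=PLATFORMS)
def conversation_client(request):
    # Each platform client exposes the same new_session()/send()/get_slot() surface
    # used by the dialog-flow tests above.
    return make_platform_client(request.param)  # hypothetical factory
With priority markers on the tests themselves, CI can then select the P0 subset on every deployment (for example, pytest -m p0) and run the P1/P2 sets in the daily job.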
KPI Definitions for Voice UI Quality
Track these KPIs weekly:
- Intent Recognition Rate: % of utterances correctly classified (target: ≥ 92%)
- Task Completion Rate: % of conversations that reach a successful end state (target: ≥ 78%)
- Reprompt Rate: % of turns requiring a re-prompt (target: ≤ 15%)
- Abandonment Rate: % of sessions that end mid-flow without completion (target: ≤ 20%)
- P95 Response Latency: 95th percentile webhook response time (target: < 2000ms)
- Error Recovery Rate: % of error states from which users successfully recover (target: ≥ 65%)
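A sketch of how these KPIs might be computed from exported session logs; the field names (turns, completed, ended_mid_flow, and so on) are assumptions about your analytics schema, not a standard format:
def compute_voice_kpis(sessions: list[dict]) -> dict:
    # sessions: one record per conversation, with per-turn detail under "turns".
    total_turns = sum(len(s["turns"]) for s in sessions)
    reprompt_turns = sum(1 for s in sessions for t in s["turns"] if t.get("reprompted"))
    error_sessions = [s for s in sessions if s.get("hit_error_state")]
    return {
        "task_completion_rate": sum(1 for s in sessions if s["completed"]) / len(sessions),
        "reprompt_rate": reprompt_turns / total_turns,
        "abandonment_rate": sum(1 for s in sessions if s.get("ended_mid_flow")) / len(sessions),
        "error_recovery_rate": (
            sum(1 for s in error_sessions if s.get("recovered")) / len(error_sessions)
            if error_sessions else 1.0
        ),
    }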
HelpMeTest can continuously run your multi-turn voice conversation scenarios across platforms and alert you when any of these KPIs degrade — giving you production quality signals without waiting for user complaints.