Monday, March 02, 2026
Originally published on voicetest.dev.
Voice agent configs accumulate duplicated text fast. A Retell Conversation Flow with 15 nodes might repeat the same compliance disclaimer in 8 of them, the same sign-off phrase in 12, and the same tone instruction in all 15. When you need to update that disclaimer, you’re doing find-and-replace across a JSON blob and hoping you didn’t miss one.
Voicetest 0.23 adds prompt snippets and automatic DRY analysis to fix this. This post covers how the detection algorithm works, the snippet reference system, and how it integrates with the existing export pipeline.

The problem in concrete terms
Here’s a simplified agent graph with three nodes. Notice the repeated text:
{
  "nodes": {
    "greeting": {
      "state_prompt": "Welcome the caller. Always be professional and empathetic in your responses. When ending the call, say: Thank you for calling, is there anything else I can help with?"
    },
    "billing": {
      "state_prompt": "Help with billing inquiries. Always be professional and empathetic in your responses. When ending the call, say: Thank you for calling, is there anything else I can help with?"
    },
    "transfer": {
      "state_prompt": "Transfer to a human agent. Always be professional and empathetic in your responses. When ending the call, say: Thank you for calling, is there anything else I can help with?"
    }
  }
}
Two sentences are duplicated across all three nodes. In a real agent with 15-20 nodes, this kind of duplication is the norm. It creates maintenance risk: update the sign-off in one node and forget another, and your agent behaves inconsistently.
How the DRY analyzer works
The voicetest.snippets module implements a two-pass detection algorithm over all text in an agent graph – node prompts and the general prompt.
Pass 1: Exact matches. find_repeated_text splits every prompt into sentences, then counts occurrences across nodes. Any sentence that appears in 2+ locations and exceeds a minimum character threshold (default 20) is flagged. The result includes the matched text and which node IDs contain it.
from voicetest.snippets import find_repeated_text
results = find_repeated_text(graph, min_length=20)
for match in results:
    print(f"'{match.text}' found in nodes: {match.locations}")
Pass 2: Fuzzy matches. find_similar_text runs pairwise similarity comparison on sentences that weren’t caught as exact duplicates. It uses SequenceMatcher (from the standard library) with a configurable threshold (default 0.8). This catches near-duplicates like “Please verify the caller’s identity before proceeding” vs “Please verify the caller identity before proceeding with any request.”
from voicetest.snippets import find_similar_text
results = find_similar_text(graph, threshold=0.8, min_length=30)
for match in results:
    print(f"Similarity {match.similarity:.0%}: {match.texts}")
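For reference, the fuzzy comparison reduces to the standard library. Using the example pair above:

```python
from difflib import SequenceMatcher

a = "Please verify the caller's identity before proceeding"
b = "Please verify the caller identity before proceeding with any request."

# ratio() returns 2*M/T, where M is the number of matched characters
# and T is the combined length of both strings.
similarity = SequenceMatcher(None, a, b).ratio()
print(f"{similarity:.0%}")  # comfortably above the 0.8 default threshold
```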
The suggest_snippets function runs both passes and returns a combined result:
from voicetest.snippets import suggest_snippets
suggestions = suggest_snippets(graph, min_length=20)
print(f"Exact duplicates: {len(suggestions.exact)}")
print(f"Fuzzy matches: {len(suggestions.fuzzy)}")
The snippet reference system
Snippets use {%name%} syntax (percent-delimited braces) to distinguish them from dynamic variables ({{name}}). They’re defined at the agent level and expanded before variable substitution:
{
  "snippets": {
    "tone": "Always be professional and empathetic in your responses.",
    "sign_off": "Thank you for calling, is there anything else I can help with?"
  },
  "nodes": {
    "greeting": {
      "state_prompt": "Welcome the caller. {%tone%} When ending the call, say: {%sign_off%}"
    },
    "billing": {
      "state_prompt": "Help with billing inquiries. {%tone%} When ending the call, say: {%sign_off%}"
    }
  }
}
Expansion ordering
During a test run, the ConversationEngine expands snippets first, then substitutes dynamic variables:
# In ConversationEngine.process_turn():
general_instructions = expand_snippets(self._module.instructions, self.graph.snippets)
state_instructions = expand_snippets(state_module.instructions, self.graph.snippets)
general_instructions = substitute_variables(general_instructions, self._dynamic_variables)
state_instructions = substitute_variables(state_instructions, self._dynamic_variables)
This ordering matters. Snippets are static text blocks resolved at expansion time. Variables are runtime values (caller name, account ID, etc.) that differ per conversation. Expanding snippets first means a snippet can contain {{variable}} references that get resolved in the second pass.
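A toy sketch of the two-pass expansion shows why this works. The regex-based replacements below are illustrative only, not voicetest's actual implementation:

```python
import re

def expand_snippets(text: str, snippets: dict[str, str]) -> str:
    # Replace {%name%} references with static snippet text.
    return re.sub(r"\{%(\w+)%\}", lambda m: snippets[m.group(1)], text)

def substitute_variables(text: str, variables: dict[str, str]) -> str:
    # Replace {{name}} references with runtime values.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: variables[m.group(1)], text)

snippets = {"sign_off": "Thanks for calling, {{caller_name}}!"}
variables = {"caller_name": "Maria"}

prompt = "End with: {%sign_off%}"
expanded = expand_snippets(prompt, snippets)
final = substitute_variables(expanded, variables)
print(final)  # End with: Thanks for calling, Maria!
```

Because snippets expand first, the `{{caller_name}}` inside the snippet is still present when the variable pass runs.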
Export modes
When an agent uses snippets, the export pipeline offers two modes:
- Raw (.vt.json): Preserves {%snippet%} references and the snippets dictionary. This is the voicetest-native format for version control and sharing.
- Expanded: Resolves all snippet references to plain text. Required for platform deployment – Retell, VAPI, LiveKit, and Bland don’t understand snippet syntax.
The expand_graph_snippets function produces a deep copy with all references resolved:
from voicetest.templating import expand_graph_snippets
expanded = expand_graph_snippets(graph)
# expanded.snippets == {}
# expanded.nodes["greeting"].state_prompt contains the full text
# original graph is unchanged
Platform-specific exporters (Retell, VAPI, Bland, Telnyx, LiveKit) always receive expanded graphs. The voicetest IR exporter preserves references.
REST API
The snippet system is fully exposed via REST:
# List snippets
GET /api/agents/{id}/snippets
# Create/update a snippet
PUT /api/agents/{id}/snippets/tone
{"text": "Always be professional and empathetic."}
# Delete a snippet
DELETE /api/agents/{id}/snippets/tone
# Run DRY analysis
POST /api/agents/{id}/analyze-dry
# Returns: {"exact": [...], "fuzzy": [...]}
# Apply suggested snippets
POST /api/agents/{id}/apply-snippets
{"snippets": [{"name": "tone", "text": "Always be professional."}]}
Web UI
In the agent view, the Snippets section shows all defined snippets with inline editing. The “Analyze DRY” button runs the detection algorithm and presents results as actionable suggestions – click “Apply” on an exact match to extract it into a snippet and replace all occurrences, or “Apply All” to batch-process every exact duplicate.
Why this matters for testing
Duplicated prompts aren’t just a maintenance problem – they’re a testing problem. If two nodes have slightly different versions of the same instruction (one updated, one stale), your test suite might pass on the updated node and miss the regression on the stale one. Snippets guarantee consistency: update the snippet once, every node that references it gets the change.
Combined with voicetest’s LLM-as-judge evaluation, snippets make your test results more reliable. When every node uses the same {%tone%} snippet, a global metric like “Professional Tone” evaluates the same instruction everywhere. No more false passes from nodes running outdated prompt text.
Getting started
uv tool install voicetest
voicetest demo --serve
Voicetest is open source under Apache 2.0. GitHub. Docs.
Wednesday, February 18, 2026
You can write unit tests for a REST API. You can snapshot-test a React component. But how do you test a voice agent that holds free-form conversations?
The core challenge: voice agent behavior is non-deterministic. The same agent, given the same prompt, will produce different conversations every time. Traditional assertion-based testing breaks down when there is no single correct output. You need an evaluator that understands intent, not just string matching.
Voicetest solves this with LLM-as-judge evaluation. It simulates multi-turn conversations with your agent, then passes the full transcript to a judge model that scores it against your success criteria. This post explains how each piece works.
The three-model architecture
Voicetest uses three separate LLM roles during a test run:
Simulator. Plays the user. Given a persona prompt (name, goal, personality), it generates realistic user messages turn by turn. It decides autonomously when the conversation goal has been achieved and should end – no scripted dialogue trees.
Agent. Plays your voice agent. Voicetest imports your agent config (from Retell, VAPI, LiveKit, or its own format) into an intermediate graph representation: nodes with state prompts, transitions with conditions, and tool definitions. The agent model follows this graph, responding according to the current node’s instructions and transitioning between states.
Judge. Evaluates the finished transcript. This is where LLM-as-judge happens: the judge reads the full conversation and scores it against each metric you defined.
You can assign different models to each role. Use a fast, cheap model for simulation (it just needs to follow a persona) and a more capable model for judging (where accuracy matters):
[models]
simulator = "groq/llama-3.1-8b-instant"
agent = "groq/llama-3.3-70b-versatile"
judge = "openai/gpt-4o"
How simulation works
Each test case defines a user persona:
{
  "name": "Appointment reschedule",
  "user_prompt": "You are Maria Lopez, DOB 03/15/1990. You need to reschedule your Thursday appointment to next week. You prefer mornings.",
  "metrics": [
    "Agent verified the patient's identity before making changes.",
    "Agent confirmed the new appointment date and time."
  ],
  "type": "llm"
}
Voicetest starts the conversation at the agent’s entry node. The simulator generates a user message based on the persona. The agent responds following the current node’s state prompt, then voicetest evaluates transition conditions to determine the next node. This loop continues for up to max_turns (default 20) or until the simulator decides the goal is complete.
The result is a full transcript with metadata: which nodes were visited, which tools were called, how many turns it took, and why the conversation ended.
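The loop can be sketched with toy stand-ins for the simulator and agent models. This is illustrative structure only, not voicetest's internals:

```python
class ScriptedSimulator:
    # Toy stand-in: replays a fixed script, then signals completion.
    def __init__(self, lines):
        self.lines = list(lines)

    def next_turn(self, transcript):
        if not self.lines:
            return "", True  # goal reached, end the conversation
        return self.lines.pop(0), False

class EchoAgent:
    # Toy stand-in: stays on one node and acknowledges each message.
    def respond(self, node_id, transcript):
        return f"Noted: {transcript[-1][1]}", node_id

def simulate(agent, simulator, entry_node_id, max_turns=20):
    # Alternate simulator and agent turns until the goal is met
    # or max_turns is exhausted.
    transcript, node_id = [], entry_node_id
    for _ in range(max_turns):
        user_msg, done = simulator.next_turn(transcript)
        if done:
            break
        transcript.append(("user", user_msg))
        agent_msg, node_id = agent.respond(node_id, transcript)
        transcript.append(("agent", agent_msg))
    return transcript

transcript = simulate(EchoAgent(), ScriptedSimulator(["Hi", "Bye"]), "greeting")
print(len(transcript))  # 4: two user turns, two agent turns
```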
How the judge scores
After simulation, the judge evaluates each metric independently. For the metric “Agent verified the patient’s identity before making changes,” the judge produces structured output with four fields:
- Analysis: Breaks compound criteria into individual requirements and quotes transcript evidence for each. For this metric, it would identify two requirements – (1) asked for identity verification, (2) verified before making changes – and cite the specific turns where each happened or did not.
- Score: 0.0 to 1.0, based on the fraction of requirements met. If the agent verified identity but did it after making the change, the score might be 0.5.
- Reasoning: A summary of what passed and what failed.
- Confidence: How certain the judge is in its assessment.
A test passes when all metric scores meet the threshold (default 0.7, configurable per-agent or per-metric).
This structured approach – analysis before scoring – prevents a common failure mode where judges assign a high score despite noting problems in their reasoning. By forcing the model to enumerate requirements and evidence first, the score stays consistent with the analysis.
Rule tests: when you do not need an LLM
Not everything requires a judge. Voicetest also supports deterministic rule tests for pattern-matching checks:
{
  "name": "No SSN in transcript",
  "user_prompt": "You are Jane, SSN 123-45-6789. Ask the agent to verify your identity.",
  "excludes": ["123-45-6789", "123456789"],
  "type": "rule"
}
Rule tests check for includes (required substrings), excludes (forbidden substrings), and patterns (regex). They run instantly, cost nothing, and return binary pass/fail with 100% confidence. Use them for compliance checks, PII detection, and required-phrase validation.
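The semantics of a rule test fit in a few lines. This is a sketch of the behavior described above, not voicetest's implementation:

```python
import re

def run_rule_test(transcript: str, includes=(), excludes=(), patterns=()) -> bool:
    # Deterministic checks: required substrings, forbidden substrings, regexes.
    if any(s not in transcript for s in includes):
        return False
    if any(s in transcript for s in excludes):
        return False
    if any(not re.search(p, transcript) for p in patterns):
        return False
    return True

transcript = "Agent: I have verified your identity, Jane."
print(run_rule_test(transcript,
                    includes=["verified", "identity"],
                    excludes=["123-45-6789"]))  # True
```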
Global metrics: compliance at scale
Individual test metrics evaluate specific scenarios. Global metrics evaluate every test transcript against organization-wide criteria:
{
  "global_metrics": [
    {
      "name": "HIPAA Compliance",
      "criteria": "Agent verifies patient identity before disclosing any protected health information.",
      "threshold": 0.9
    },
    {
      "name": "Brand Voice",
      "criteria": "Agent maintains a professional, empathetic tone throughout the conversation.",
      "threshold": 0.7
    }
  ]
}
Global metrics run on every test automatically. A test only passes if both its own metrics and all global metrics meet their thresholds. This gives you a single place to enforce standards like HIPAA, PCI-DSS, or brand guidelines across your entire test suite.
Putting it together
A complete test run looks like this:
- Voicetest imports your agent config into its graph representation.
- For each test case, it runs a multi-turn simulation using the simulator and agent models.
- The judge evaluates each metric and each global metric against the transcript.
- Results are stored in DuckDB with the full transcript, scores, reasoning, nodes visited, and tools called.
- A test passes only if every metric and every global metric meets its threshold.
The web UI (voicetest serve) shows results visually – transcripts with node annotations, metric scores with judge reasoning, and pass/fail status. The CLI outputs the same data to stdout for CI integration.
Getting started
uv tool install voicetest
voicetest demo --serve
The demo loads a sample agent with test cases and opens the web UI so you can see the full evaluation pipeline in action.
Voicetest is open source under Apache 2.0. GitHub. Docs.
Monday, February 16, 2026
Running a voice agent test suite means making a lot of LLM calls. Each test runs a multi-turn simulation (10-20 turns of back-and-forth), then passes the full transcript to a judge model for evaluation. A suite of 20 tests can easily hit 200+ LLM calls. At API rates, that adds up fast – especially if you are using a capable model for judging.
If you have a Claude Pro or Max subscription, you already have access to Claude models through Claude Code. Voicetest can use the claude CLI as its LLM backend, routing all inference through your existing subscription instead of billing per-token through an API provider.
How it works
Voicetest has a built-in Claude Code provider. When you set a model string starting with claudecode/, voicetest invokes the claude CLI in non-interactive mode, passes the prompt, and parses the JSON response. It clears the ANTHROPIC_API_KEY environment variable from the subprocess so that Claude Code uses your subscription quota rather than any configured API key.
No proxy server. No API key management. Just the claude binary on your PATH.
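Roughly, the passthrough looks like this. It is a sketch only: voicetest's actual provider handles errors, retries, and streaming, and the "result" field name in the CLI's JSON response is an assumption here:

```python
import json
import os
import subprocess

def subscription_env() -> dict[str, str]:
    # With no ANTHROPIC_API_KEY in the environment, the claude CLI
    # authenticates via your logged-in subscription instead.
    return {k: v for k, v in os.environ.items() if k != "ANTHROPIC_API_KEY"}

def claude_code_complete(prompt: str, model: str = "sonnet") -> str:
    # Non-interactive invocation, mirroring the flags described in this post.
    proc = subprocess.run(
        ["claude", "-p", prompt, "--output-format", "json", "--model", model],
        env=subscription_env(), capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)["result"]  # response shape assumed
```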
Step 1: Install Claude Code
Follow the instructions at claude.ai/claude-code. After installation, verify the claude binary is on your PATH and make sure you are logged in to your Claude account.
Step 2: Install voicetest
uv tool install voicetest
Step 3: Configure models
Create .voicetest/settings.toml in your project directory:
[models]
agent = "claudecode/sonnet"
simulator = "claudecode/haiku"
judge = "claudecode/sonnet"
[run]
max_turns = 20
verbose = false
The model strings follow the pattern claudecode/<variant>. The supported variants are:
claudecode/haiku – Fast, cheap on quota. Good for simulation.
claudecode/sonnet – Balanced. Good for judging and agent simulation.
claudecode/opus – Most capable. Use when judging accuracy matters most.
Step 4: Run tests
voicetest run \
--agent agents/my-agent.json \
--tests agents/my-tests.json \
--all
No API keys needed. Voicetest calls claude -p --output-format json --model sonnet under the hood, gets a JSON response, and extracts the result.
Model mixing
The three model roles in voicetest serve different purposes, and you can mix models to optimize for speed and accuracy:
Simulator (simulator): Plays the user persona. This model follows a script (the user_prompt from your test case), so it does not need to be particularly capable. Haiku is a good fit – it is fast and consumes less of your quota.
Agent (agent): Plays the role of your voice agent, following the prompts and transition logic from your imported config. Sonnet handles this well.
Judge (judge): Evaluates the full transcript against your metrics and produces a score from 0.0 to 1.0 with written reasoning. This is where accuracy matters most. Sonnet is reliable here; Opus is worth it if you need the highest-fidelity judgments.
A practical configuration:
[models]
agent = "claudecode/sonnet"
simulator = "claudecode/haiku"
judge = "claudecode/sonnet"
This keeps simulations fast while giving the judge enough capability to produce accurate scores.
Cost comparison
With API billing (e.g., through OpenRouter or direct Anthropic API), a test suite of 20 LLM tests at ~15 turns each, using Sonnet for judging, costs roughly $2-5 per run depending on transcript length. Run that 10 times a day during development and you are looking at $20-50/day in API costs.
With a Claude Pro ($20/month) or Max ($100-200/month) subscription, the same tests run against your plan’s usage allowance. For teams already paying for Claude Code as a development tool, the marginal cost of running voice agent tests is zero.
The tradeoff: API calls are parallelizable and have predictable throughput. Claude Code passthrough runs sequentially (one CLI invocation at a time) and is subject to your plan’s rate limits. For CI pipelines with large test suites, API billing may still make more sense. For local development and smaller suites, the subscription route is hard to beat.
When to use which
| Scenario | Recommended backend |
| --- | --- |
| Local development, iterating on prompts | claudecode/* |
| Small CI suite (< 10 tests) | claudecode/* |
| Large CI suite, parallel runs | API provider (OpenRouter, Anthropic) |
| Team with shared API budget | API provider |
| Solo developer with Max subscription | claudecode/* |
Getting started
uv tool install voicetest
voicetest demo
The demo command loads a sample healthcare receptionist agent with test cases so you can try it without any setup.
Voicetest is open source under Apache 2.0. GitHub. Docs.
Sunday, February 08, 2026
Manual testing of voice agents does not scale. You click through a few conversations in the Retell dashboard, confirm the agent sounds right, and ship it. Then someone updates a prompt, a transition breaks, and you find out from a customer complaint. The feedback loop is days, not minutes.
Voicetest fixes this. It imports your Retell Conversation Flow, simulates multi-turn conversations using an LLM, and evaluates the results with an LLM judge that produces scores and reasoning. You can run it locally, but the real value comes from running it in CI on every push.
This post walks through the full setup: from installing voicetest to a working GitHub Actions workflow that tests your Retell agent automatically.
Step 1: Install voicetest
Voicetest is a Python CLI tool published on PyPI. The recommended way to install it is as a uv tool:
uv tool install voicetest
Verify the voicetest command is on your PATH before continuing.
Step 2: Export your Retell agent
In the Retell dashboard, open your Conversation Flow and export it as JSON. Save it to your repo:
agents/
receptionist.json
The exported JSON contains your nodes, edges, prompts, transition conditions, and tool definitions. Voicetest auto-detects the Retell format by looking for start_node_id and nodes in the JSON.
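That detection heuristic amounts to a key check. A sketch of the idea, not voicetest's actual importer:

```python
def looks_like_retell_flow(config: dict) -> bool:
    # Heuristic from this post: Retell Conversation Flow exports
    # carry start_node_id and nodes at the top level.
    return "start_node_id" in config and "nodes" in config

print(looks_like_retell_flow({"start_node_id": "greeting", "nodes": {}}))  # True
print(looks_like_retell_flow({"assistant": {}}))  # False
```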
If you prefer to pull the config programmatically (useful for keeping tests in sync with the live agent), voicetest can also fetch directly from the Retell API:
export RETELL_API_KEY=your_key_here
Step 3: Write test cases
Create a test file with one or more test cases. Each test defines a simulated user persona, what the user will do, and metrics for the LLM judge to evaluate:
[
  {
    "name": "Billing inquiry",
    "user_prompt": "Say you are Jane Smith with account 12345. You're confused about a charge on your bill and want help understanding it.",
    "metrics": [
      "Agent greeted the customer and addressed the billing concern.",
      "Agent was helpful and professional throughout."
    ],
    "type": "llm"
  },
  {
    "name": "No PII in transcript",
    "user_prompt": "You are Jane with SSN 123-45-6789. Verify your identity.",
    "includes": ["verified", "identity"],
    "excludes": ["123-45-6789", "123456789"],
    "type": "rule"
  }
]
There are two test types. LLM tests ("type": "llm") run a full multi-turn simulation and then pass the transcript to an LLM judge, which scores each metric from 0.0 to 1.0 with written reasoning. Rule tests ("type": "rule") use deterministic pattern matching – checking that the transcript includes required strings, excludes forbidden ones, or matches regex patterns. Rule tests are fast and free, good for compliance checks like PII leakage.
Save this as agents/receptionist-tests.json.
Step 4: Configure models
Voicetest uses LiteLLM model strings, so any provider works. Create a .voicetest/settings.toml in your project root:
[models]
agent = "groq/llama-3.3-70b-versatile"
simulator = "groq/llama-3.1-8b-instant"
judge = "groq/llama-3.3-70b-versatile"
[run]
max_turns = 20
verbose = false
The simulator model plays the user. It should be fast and cheap since it just follows the persona script. The judge model evaluates the transcript and should be accurate. The agent model plays the role of your voice agent, following the prompts and transitions from your Retell config.
Step 5: Run locally
Before setting up CI, verify everything works:
export GROQ_API_KEY=your_key_here
voicetest run \
--agent agents/receptionist.json \
--tests agents/receptionist-tests.json \
--all
You will see each test run, the simulated conversation, and the judge’s scores. Fix any test definitions that do not match your agent’s behavior, then commit everything:
git add agents/ .voicetest/settings.toml
git commit -m "Add voicetest config and test cases"
Step 6: Set up GitHub Actions
Add your API key as a repository secret. Go to Settings > Secrets and variables > Actions, and add GROQ_API_KEY.
Then create .github/workflows/voicetest.yml:
name: Voice Agent Tests

on:
  push:
    paths:
      - "agents/**"
  pull_request:
    paths:
      - "agents/**"
  workflow_dispatch:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v5

      - name: Set up Python
        run: uv python install 3.12

      - name: Install voicetest
        run: uv tool install voicetest

      - name: Run voice agent tests
        env:
          GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
        run: |
          voicetest run \
            --agent agents/receptionist.json \
            --tests agents/receptionist-tests.json \
            --all
The workflow triggers on any change to files in agents/, which means prompt edits, new test cases, or config changes all trigger a test run. The workflow_dispatch trigger lets you run tests manually from the GitHub UI.
What’s next
Once you have CI working, there are a few things worth exploring:
Global compliance metrics. Voicetest supports HIPAA and PCI-DSS compliance checks that run across the entire transcript, not just per-test. These catch issues like agents accidentally reading back credit card numbers or disclosing PHI.
Format conversion. If you ever want to move from Retell to VAPI or LiveKit, voicetest can convert your agent config between platforms via its AgentGraph intermediate representation:
voicetest export --agent agents/receptionist.json --format vapi-assistant
The web UI. For a visual interface during development, run voicetest serve and open http://localhost:8000. You get a dashboard with test results, transcripts, and scores.
Voicetest is open source under Apache 2.0. GitHub. Docs.
Tuesday, February 03, 2026
Platforms like Retell, VAPI, and LiveKit have made it straightforward to build phone-based AI assistants. But testing these agents before they talk to real customers remains painful: platform-specific formats, per-minute simulation charges, and no way to iterate on prompts without bleeding money.
voicetest is a test harness that solves this by running agent simulations with your own LLM keys. But beneath the surface, it’s also a proving ground for something more interesting: auto-healing agent graphs that recover from test failures and optimize themselves using JIT synthetic data.

The interactive shell loads agents, configures models, and runs test simulations against DSPy-based judges.
The architecture: AgentGraph as IR
All platform formats (Retell CF, Retell LLM, VAPI, Bland, LiveKit, XLSForm) compile down to a unified intermediate representation called AgentGraph:
class AgentGraph:
    nodes: dict[str, AgentNode]    # State-specific nodes
    entry_node_id: str             # Starting node
    source_type: str               # Import source
    source_metadata: dict          # Platform-specific data
    default_model: str | None      # Model from import

class AgentNode:
    id: str
    state_prompt: str              # State-specific instructions
    tools: list[ToolDefinition]    # Available actions
    transitions: list[Transition]  # Edges to other states
This IR enables cross-platform testing and format conversion as a side effect. Import a Retell agent, test it, export to VAPI format. But more importantly, it gives us a structure we can reason about programmatically.
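As a small illustration of reasoning about the structure programmatically, here is a reachability check over hedged stand-ins for the classes above. The target_node_id field name on Transition is an assumption for this sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    target_node_id: str  # field name assumed for illustration
    condition: str = ""

@dataclass
class AgentNode:
    id: str
    state_prompt: str = ""
    transitions: list[Transition] = field(default_factory=list)

def reachable_nodes(nodes: dict[str, AgentNode], entry: str) -> set[str]:
    # Walk transitions from the entry node; anything unreached is dead config.
    seen, stack = set(), [entry]
    while stack:
        nid = stack.pop()
        if nid in seen:
            continue
        seen.add(nid)
        stack.extend(t.target_node_id for t in nodes[nid].transitions)
    return seen

nodes = {
    "greeting": AgentNode("greeting", transitions=[Transition("billing")]),
    "billing": AgentNode("billing"),
    "orphan": AgentNode("orphan"),  # never referenced: unreachable
}
print(reachable_nodes(nodes, "greeting"))
```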
DSPy signatures for structured LLM calls
Every LLM interaction in voicetest goes through DSPy signatures. This isn’t just for cleaner code—it’s the foundation for prompt optimization.
The MetricJudgeSignature handles LLM-as-judge evaluation:
class MetricJudgeSignature(dspy.Signature):
    transcript: str = dspy.InputField()
    criterion: str = dspy.InputField()
    # Outputs
    score: float = dspy.OutputField()       # 0-1 continuous score
    reasoning: str = dspy.OutputField()     # Explanation
    confidence: float = dspy.OutputField()  # 0-1 confidence
Continuous scores (not binary pass/fail) are critical. A 0.65 and a 0.35 both “fail” a 0.7 threshold, but they represent very different agent behaviors. This granularity becomes training signal later.
The UserSimSignature generates realistic caller behavior:
class UserSimSignature(dspy.Signature):
    persona: str = dspy.InputField()  # Identity/Goal/Personality
    conversation_history: str = dspy.InputField()
    current_agent_message: str = dspy.InputField()
    turn_number: int = dspy.InputField()
    # Outputs
    should_continue: bool = dspy.OutputField()
    message: str = dspy.OutputField()
    reasoning: str = dspy.OutputField()
Each graph node gets its own StateModule registered as a DSPy submodule:
class ConversationModule(dspy.Module):
    def __init__(self, graph: AgentGraph):
        super().__init__()
        self._state_modules: dict[str, StateModule] = {}
        for node_id, node in graph.nodes.items():
            state_module = StateModule(node, graph)
            setattr(self, f"state_{node_id}", state_module)
            self._state_modules[node_id] = state_module
This structure means the entire agent graph is a single optimizable DSPy module. We can apply BootstrapFewShot or MIPROv2 to tune state transitions and response generation.

Auto-healing the agent graph on test failures (coming soon)
When a test fails, the interesting question is: what should change? The failure might indicate a node prompt needs tweaking, or that the graph structure itself is wrong for the conversation flow.
The planned approach:
- Failure analysis: Parse the transcript and judge output to identify where the agent went wrong. Was it a bad response in a specific state? A transition that fired incorrectly? A missing edge case?
- Mutation proposals: Based on the failure mode, generate candidate fixes. For prompt issues, suggest revised state prompts. For structural problems, propose adding/removing transitions or splitting nodes.
- Validation: Run the mutation against the failing test plus a regression suite. Only accept changes that fix the failure without breaking existing behavior.
This isn’t implemented yet, but the infrastructure is there: the AgentGraph IR makes mutations straightforward, and the continuous metric scores give us a fitness function for evaluating changes.
JIT synthetic data for optimization
DSPy optimizers like MIPROv2 need training examples. For voice agents, we generate these on demand:
- Test case expansion: Each test case defines a persona and success criteria. We use the UserSimSignature to generate variations—different phrasings, edge cases, adversarial inputs.
- Trajectory mining: Successful test runs become positive examples. Failed runs (with partial transcripts) become negative examples with the failure point annotated.
- Score-based filtering: Because metrics produce continuous scores, we can select examples near decision boundaries (scores around the threshold) for maximum training signal.
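The score-based filter is straightforward to express. A sketch with illustrative field names:

```python
def boundary_examples(results: list[dict], threshold: float = 0.7,
                      margin: float = 0.1) -> list[dict]:
    # Keep transcripts whose scores sit near the pass/fail boundary;
    # clear passes and clear failures teach an optimizer little.
    return [r for r in results if abs(r["score"] - threshold) <= margin]

scored = [
    {"id": 1, "score": 0.95},  # clear pass, low signal
    {"id": 2, "score": 0.68},  # near the boundary
    {"id": 3, "score": 0.20},  # clear fail, low signal
    {"id": 4, "score": 0.74},  # near the boundary
]
print([r["id"] for r in boundary_examples(scored)])  # [2, 4]
```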
The current implementation has the infrastructure:
# Mock data generation for testing the optimization pipeline
simulator._mock_responses = [
    SimulatorResponse(message="Hello, I need help.", should_end=False),
    SimulatorResponse(message="Thanks, that's helpful.", should_end=False),
    SimulatorResponse(message="", should_end=True),
]
metric_judge._mock_results = [
    MetricResult(metric=m, score=0.9, passed=True)
    for m in test_case.metrics
]
The production version will generate real synthetic data by sampling from the UserSimSignature with temperature variations and persona mutations.
Judgment pipeline
Three judges evaluate each transcript:
Rule Judge (deterministic, zero cost): substring includes/excludes, regex patterns. Fast pre-filter for obvious failures.
Metric Judge (LLM-based, semantic): evaluates each criterion with continuous scores. Per-metric threshold overrides enable fine-grained control. Global metrics (like HIPAA compliance) run on every test automatically.
Flow Judge (optional, informational): validates that node transitions made logical sense given the conversation. Uses the FlowValidationSignature:
class FlowValidationSignature(dspy.Signature):
    graph_structure: str = dspy.InputField()
    transcript: str = dspy.InputField()
    nodes_visited: list[str] = dspy.InputField()
    # Outputs
    flow_valid: bool = dspy.OutputField()
    issues: list[str] = dspy.OutputField()
    reasoning: str = dspy.OutputField()
Flow issues don’t fail tests but get tracked for debugging. A pattern of flow anomalies suggests the graph structure itself needs attention.

The web UI visualizes agent graphs, manages test cases, and streams transcripts in real-time during execution.
CI/CD integration
Voice agents break in subtle ways. A prompt change that improves one scenario can regress another. voicetest runs in GitHub Actions:
name: Voice Agent Tests
on:
  push:
    paths: ["agents/**"]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
      - run: uv tool install voicetest
      - run: voicetest run --agent agents/receptionist.json --tests agents/tests.json --all
        env:
          OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
Results persist to DuckDB, enabling queries across test history:
SELECT
agents.name,
COUNT(*) as total_runs,
AVG(CASE WHEN results.passed THEN 1.0 ELSE 0.0 END) as pass_rate
FROM results
JOIN runs ON results.run_id = runs.id
JOIN agents ON runs.agent_id = agents.id
GROUP BY agents.name
What’s next
The current release handles the testing workflow: import agents, run simulations, evaluate with LLM judges, integrate with CI. The auto-healing and optimization features are in POC stage.
The roadmap:
- v0.3: JIT synthetic data generation from test case personas
- v0.4: DSPy optimization integration (MIPROv2 for state prompts)
- v0.5: Auto-healing graph mutations with regression protection
uv tool install voicetest
voicetest demo --serve
Code at github.com/voicetestdev/voicetest. API docs at voicetest.dev/api. Apache 2.0 licensed.