# Performance Testing Guide

Response Time Validation for MCP Servers
MCP Aegis supports performance testing with timing assertions, allowing you to validate that your Model Context Protocol servers meet specific response time requirements and SLA standards. For agent‑oriented latency considerations (multi‑step workflows, concurrency & buffer hygiene) see the AI Agent Testing guide.
## Basic Performance Assertions

Add performance requirements to any test case by including a `performance` block with `maxResponseTime` under `expect`:
- it: "should list tools within reasonable time"
request:
jsonrpc: "2.0"
id: "perf-1"
method: "tools/list"
params: {}
expect:
response:
jsonrpc: "2.0"
id: "perf-1"
result:
tools: "match:type:array"
performance:
maxResponseTime: "500ms" # Must respond within 500ms
stderr: "toBeEmpty"Common Performance Patterns
Different operations have different expected performance characteristics:
### Tool Listing Performance
Tool listing should be very fast as it's a metadata operation:
- it: "should list tools quickly"
request:
jsonrpc: "2.0"
id: "list-perf-1"
method: "tools/list"
params: {}
expect:
response:
result:
tools: "match:arrayLength:3"
performance:
maxResponseTime: "300ms" # Very fast for metadata
stderr: "toBeEmpty"Tool Execution Performance
Tool execution times depend on the complexity of the operation:
```yaml
# Simple file operations
- it: "should read small file quickly"
  request:
    jsonrpc: "2.0"
    id: "read-perf-1"
    method: "tools/call"
    params:
      name: "read_file"
      arguments:
        path: "./data/hello.txt"
  expect:
    response:
      result:
        content:
          - type: "text"
            text: "match:startsWith:Hello"
        isError: false
    performance:
      maxResponseTime: "1000ms"  # Simple operations
    stderr: "toBeEmpty"

# Complex operations
- it: "should handle complex search efficiently"
  request:
    jsonrpc: "2.0"
    id: "search-perf-1"
    method: "tools/call"
    params:
      name: "search_database"
      arguments:
        query: "performance testing"
        limit: 100
  expect:
    response:
      result:
        match:partial:
          results: "match:type:array"
          count: "match:type:number"
    performance:
      maxResponseTime: "2000ms"  # More time for complex ops
    stderr: "toBeEmpty"
```

### Error Handling Performance
Error responses should generally be faster than successful operations, since the server can fail fast without doing real work, so hold them to tighter budgets:
- it: "should handle errors quickly"
request:
jsonrpc: "2.0"
id: "error-perf-1"
method: "tools/call"
params:
name: "read_file"
arguments:
path: "./nonexistent.txt"
expect:
response:
result:
content:
- type: "text"
text: "match:contains:not found"
isError: true
performance:
maxResponseTime: "800ms" # Errors should be fast
stderr: "toBeEmpty"Timing Format
Performance assertions are expressed in milliseconds with the `ms` suffix:

| Format | Description | Use Case |
|---|---|---|
| `"100ms"` | Very strict requirement | Critical performance paths |
| `"500ms"` | Fast operations | Tool listing, metadata |
| `"1000ms"` | Standard operations | File I/O, simple processing |
| `"2000ms"` | Complex operations | Search, computation, API calls |
| `"5000ms"` | Heavy operations | Database queries, large files |
## Viewing Performance Results

Use the `--timing` flag to see actual response times:

```bash
# Run tests with timing information
aegis "tests/*.yml" --config config.json --timing

# Example output with performance measurements:
# ● should list tools within reasonable time ... ✓ PASS (23ms)
# ● should read small file quickly ... ✓ PASS (156ms)
# ● should handle errors quickly ... ✓ PASS (45ms)
```

## Combining Performance with Pattern Matching
Performance assertions work seamlessly with all pattern matching features:
- it: "should search with good performance and validate structure"
request:
jsonrpc: "2.0"
id: "complex-perf-1"
method: "tools/call"
params:
name: "search_tools"
arguments:
category: "documentation"
expect:
response:
result:
# Complex pattern matching
tools:
match:arrayElements:
name: "match:type:string"
description: "match:regex:.{20,}"
category: "documentation"
count: "match:type:number"
# Field extraction validation
match:extractField: "tools.*.name"
value: "match:arrayContains:search_docs"
performance:
maxResponseTime: "1500ms" # Performance requirement
stderr: "toBeEmpty"SLA Validation Examples
Use performance testing to validate service level agreements:
description: "SLA validation for production MCP server"
tests:
# 95th percentile requirement: Tool listing under 200ms
- it: "should meet tool listing SLA"
request:
jsonrpc: "2.0"
id: "sla-list-1"
method: "tools/list"
params: {}
expect:
response:
result:
tools: "match:type:array"
performance:
maxResponseTime: "200ms"
stderr: "toBeEmpty"
# 99th percentile requirement: Tool execution under 2 seconds
- it: "should meet tool execution SLA"
request:
jsonrpc: "2.0"
id: "sla-exec-1"
method: "tools/call"
params:
name: "get_user_profile"
arguments:
user_id: "test-user-123"
expect:
response:
result:
match:partial:
user: "match:type:object"
profile: "match:type:object"
performance:
maxResponseTime: "2000ms"
stderr: "toBeEmpty"Best Practices
### Set Realistic Timeouts

- **Tool Listing**: 200-500ms (metadata operations should be fast)
- **Simple Operations**: 500-1000ms (file reads, basic processing)
- **Complex Operations**: 1000-2000ms (searches, computations)
- **Heavy Operations**: 2000-5000ms (database queries, large files)
- **Network Operations**: Consider network latency and add appropriate margins (see the sketch after this list)
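For network operations, one approach is to budget the expected processing time plus an explicit latency margin. A minimal sketch, assuming a hypothetical `fetch_remote_data` tool with roughly 2000ms of work plus a 1000ms network margin:

```yaml
# Sketch only: "fetch_remote_data" and its arguments are hypothetical
- it: "should fetch remote data within a latency-padded budget"
  request:
    jsonrpc: "2.0"
    id: "net-perf-1"
    method: "tools/call"
    params:
      name: "fetch_remote_data"   # hypothetical network-bound tool
      arguments:
        url: "https://example.com/api/status"
  expect:
    response:
      result:
        match:partial:
          status: "match:type:string"
    performance:
      maxResponseTime: "3000ms"  # ~2000ms work + ~1000ms network margin
    stderr: "toBeEmpty"
```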
### Performance Testing Strategy

- **Baseline Tests**: Create performance tests for core functionality
- **Regression Prevention**: Run performance tests in CI/CD pipelines
- **Load Conditions**: Test under different load conditions
- **Error Scenarios**: Validate that errors are handled quickly
- **Consistency**: Test the same operations multiple times for consistency (see the sketch after this list)
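A simple way to check consistency is to repeat the same request as multiple test cases with the same budget, so an intermittently slow path fails at least one run. A minimal sketch (the repeated cases and their ids are illustrative):

```yaml
# Run the same operation twice to surface timing variability
- it: "should list tools quickly (run 1)"
  request:
    jsonrpc: "2.0"
    id: "consist-1"
    method: "tools/list"
    params: {}
  expect:
    response:
      result:
        tools: "match:type:array"
    performance:
      maxResponseTime: "500ms"
    stderr: "toBeEmpty"

- it: "should list tools quickly (run 2)"
  request:
    jsonrpc: "2.0"
    id: "consist-2"
    method: "tools/list"
    params: {}
  expect:
    response:
      result:
        tools: "match:type:array"
    performance:
      maxResponseTime: "500ms"
    stderr: "toBeEmpty"
```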
### CI/CD Integration

Performance tests slot directly into continuous integration pipelines:

```yaml
# GitHub Actions example
name: MCP Server Performance Tests
on: [push, pull_request]
jobs:
  performance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm install -g mcp-aegis
      - run: aegis "tests/performance/*.yml" --config config.json --json
```

## Troubleshooting Performance Issues
### When Performance Tests Fail

- **Check Actual Times**: Use `--timing` to see real response times
- **Environment Factors**: Consider system load and network conditions
- **Adjust Expectations**: Timeouts may need adjustment based on hardware
- **Profile Code**: Use server-side profiling to identify bottlenecks
- **Test Consistency**: Run multiple times to check for variability
### Debugging Slow Operations

```bash
# Run with debug and timing to see detailed communication
aegis "tests/performance.yml" --config config.json --debug --timing

# Use verbose output to understand test execution
aegis "tests/performance.yml" --config config.json --verbose --timing
```

## Complete Examples
Here's a comprehensive performance test file:
description: "Performance tests for filesystem server"
tests:
- it: "should list tools within reasonable time"
request:
jsonrpc: "2.0"
id: "perf-list-1"
method: "tools/list"
params: {}
expect:
response:
result:
tools: "match:type:array"
performance:
maxResponseTime: "500ms"
stderr: "toBeEmpty"
- it: "should read small file quickly"
request:
jsonrpc: "2.0"
id: "perf-read-1"
method: "tools/call"
params:
name: "read_file"
arguments:
path: "./data/hello.txt"
expect:
response:
result:
content:
- type: "text"
text: "Hello, MCP Aegis!"
isError: false
performance:
maxResponseTime: "1000ms"
stderr: "toBeEmpty"
- it: "should handle errors quickly"
request:
jsonrpc: "2.0"
id: "perf-error-1"
method: "tools/call"
params:
name: "read_file"
arguments:
path: "./nonexistent.txt"
expect:
response:
result:
isError: true
content:
- type: "text"
text: "match:contains:ENOENT"
performance:
maxResponseTime: "800ms"
stderr: "toBeEmpty"For more examples, see the filesystem-performance.test.mcp.yml file in the examples directory.