Testing Platform

Validate your AI agents with automated test suites. Define test cases with expected inputs and outputs, configure environments, and track results over time.

Core Concepts

Test Cases

Define individual test scenarios with input messages and expected response criteria. Group related cases into test suites.

Test Environments

Configure isolated environments with specific agent versions and settings for consistent, reproducible testing.

Test Runs

Execute test suites against specific deployments. View pass/fail results, response times, and detailed output comparisons.

Deployments

Track agent deployments across environments. Link test runs to specific deployment versions for traceability.

API Examples

Create a Test Case

curl -X POST https://api.smoo.ai/organizations/{org_id}/test-cases \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Greeting Test",
    "description": "Verify the agent responds with a friendly greeting",
    "input": "Hello, I need help with my order",
    "expectedOutput": "Contains a greeting and asks for order details",
    "agentId": "agent_id"
  }'
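The same request can be made from any HTTP client. Below is a minimal Python sketch using only the standard library, mirroring the endpoint and field names in the curl example above; the org ID, token, and agent ID are placeholders, and the helper split lets the payload be built and inspected without a network call:

```python
import json
import urllib.request

API_BASE = "https://api.smoo.ai"  # base URL from the examples above

def build_test_case_payload(name, description, input_text, expected_output, agent_id):
    """Assemble the body expected by POST /organizations/{org_id}/test-cases."""
    return {
        "name": name,
        "description": description,
        "input": input_text,
        "expectedOutput": expected_output,
        "agentId": agent_id,
    }

def create_test_case(org_id, token, payload):
    """POST the payload and return the decoded JSON response."""
    req = urllib.request.Request(
        f"{API_BASE}/organizations/{org_id}/test-cases",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build the same test case as the curl example (no request is sent here):
payload = build_test_case_payload(
    "Greeting Test",
    "Verify the agent responds with a friendly greeting",
    "Hello, I need help with my order",
    "Contains a greeting and asks for order details",
    "agent_id",
)
```

Calling `create_test_case("org_id", "YOUR_TOKEN", payload)` then sends the request shown above.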

Create a Test Environment

curl -X POST https://api.smoo.ai/organizations/{org_id}/test-environments \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Staging",
    "description": "Pre-production testing environment"
  }'

Start a Test Run

curl -X POST https://api.smoo.ai/organizations/{org_id}/test-runs \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "testEnvironmentId": "env_id",
    "deploymentId": "deployment_id",
    "testCaseIds": ["case_1", "case_2", "case_3"]
  }'

Get Test Run Results

curl https://api.smoo.ai/organizations/{org_id}/test-runs/{run_id} \
  -H "Authorization: Bearer YOUR_TOKEN"
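Once a run completes, its per-case results can be summarized client-side. The sketch below assumes a hypothetical response shape in which the run object carries a `results` array with a boolean `passed` flag per case; the actual field names may differ, so check the response body of your own runs:

```python
def summarize_run(run):
    """Count passes and failures in a test-run object.

    Assumes a 'results' list whose entries each carry a boolean
    'passed' flag (hypothetical shape, not confirmed by the API docs).
    """
    results = run.get("results", [])
    passed = sum(1 for r in results if r.get("passed"))
    failed = len(results) - passed
    rate = passed / len(results) if results else 0.0
    return {"passed": passed, "failed": failed, "passRate": rate}

# Stubbed data standing in for a live GET /test-runs/{run_id} response:
sample = {
    "id": "run_id",
    "results": [
        {"testCaseId": "case_1", "passed": True},
        {"testCaseId": "case_2", "passed": True},
        {"testCaseId": "case_3", "passed": False},
    ],
}
summary = summarize_run(sample)
```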

Typical Workflow

  1. Define test cases

    Create test cases that cover your agent's expected behavior: greetings, FAQs, edge cases, and escalation scenarios.

  2. Set up environments

    Create test environments that mirror your deployment stages (development, staging, production).

  3. Run tests on each deployment

    Execute test suites after each deployment to catch regressions. Integrate with CI/CD for automated validation.

  4. Review results and iterate

    Analyze test run results, identify failing cases, and refine your agent's knowledge base and prompts accordingly.
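In a CI/CD pipeline, the workflow above reduces to: start a run, poll until it finishes, and fail the build if any case failed. A sketch of the gating logic, assuming a hypothetical `status` field that reaches "completed" and the same `results`/`passed` shape as the example responses; HTTP access is abstracted behind a `fetch_run` callable so the polling loop itself can be exercised without a live API:

```python
import time

def wait_for_run(fetch_run, run_id, timeout=300, interval=5):
    """Poll fetch_run(run_id) until the run reports a terminal status.

    'status' == 'completed' is an assumed field name; adjust to match
    the actual test-run response body.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = fetch_run(run_id)
        if run.get("status") == "completed":
            return run
        time.sleep(interval)
    raise TimeoutError(f"test run {run_id} did not complete in {timeout}s")

def ci_exit_code(run):
    """Return 0 if every case passed, 1 otherwise -- suitable for sys.exit()."""
    return 0 if all(r.get("passed") for r in run.get("results", [])) else 1

# Stubbed fetch_run simulating an already-completed run for illustration:
def fake_fetch(run_id):
    return {
        "id": run_id,
        "status": "completed",
        "results": [{"testCaseId": "case_1", "passed": True}],
    }

run = wait_for_run(fake_fetch, "run_id", timeout=10, interval=0)
code = ci_exit_code(run)
```

In a real pipeline, `fetch_run` would wrap the GET test-run request shown earlier, and the script would end with `sys.exit(code)` so a failing case blocks the deployment.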