A Router is the central orchestration unit in Multi-Router. Each router has a ruleset that defines how data is classified and enriched using LLM analysis and if/then rules. Routers serve as the scope for chat completion requests and data evaluation.
Each router has a unique ID prefixed with rt_ and belongs to a single account. When you make a chat completion request, it's routed through a specific router. When you evaluate data, the router's ruleset determines the output.
Router-scoped API keys
You can create API keys scoped to a specific router. This lets you distribute keys to applications without exposing your full account. Router-scoped keys automatically route requests to the correct router without needing the X-Mr-Router-Id header.
Creating a Router
Create a router by sending a POST request with a name. The router is created in an active state and is immediately ready to accept a ruleset and handle requests.
Each router has a single Ruleset — a YAML document that defines how data is classified and enriched. Rulesets let AI agents and services ask "what should I do with this?" by sending a flat key-value data map and getting back an enriched output map — without the calling service needing to know the rules themselves.
Every ruleset has an llm block that calls an LLM to analyze content, followed by if/then rules that act on the LLM's output. An optional finally clause provides fallback or post-processing behavior. Rules can be changed at any time without redeploying the calling service.
Ruleset Format
A ruleset is a YAML document with these top-level fields:
version — must be 2
mode — first-match or all
llm — LLM block that analyzes input data (required)
rules — an ordered list of if/then rules (optional)
finally — fallback or post-processing assignments (optional)
Example: email classifier with LLM

```yaml
version: 2
mode: first-match
llm:
  model: anthropic/claude-haiku-4-5
  prompt: |
    You are an email triage assistant.
  output:
    category: one of spam, newsletter, transactional, personal, business
    urgency: integer from 1 to 10
    summary: one sentence summary
rules:
  - if: category == "spam"
    then: action = "junk"
  - if: urgency > 8
    then: action = "escalate"
  - if: category == "newsletter"
    then: action = "archive", label = "newsletter"
finally: action = "inbox"
```
Rule Structure
Each rule has up to three fields:
name — optional identifier, returned in the verdict as matched_rule
if — conditions that must be true for the rule to match (required)
then — key-value pairs merged onto the output map (required)
Rules evaluate top-to-bottom in the order they appear in the YAML. No priority field needed. For catch-all or default behavior, use the top-level finally clause instead of a rule without an if.
If Clause
The if clause is a series of conditions joined by and. Each condition compares a data field against a value:
If clause examples

```yaml
# Simple equality (case-insensitive)
if: category == "spam"

# Numeric comparison
if: score > 80

# Multiple conditions (all must be true)
if: score > 80 and category == "vip"

# Range check
if: score >= 25 and score < 50

# Not equal
if: language != "en"
```
Supported operators: ==, !=, >, <, >=, <=
String comparisons (== and !=) are case-insensitive
If both sides parse as numbers, the comparison is numeric
A missing key never matches any condition
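The comparison semantics above can be sketched in Python. This is an illustrative model of the documented behavior, not the real engine; the treatment of ordering operators on non-numeric values is an assumption.

```python
def compare(left, op, right):
    """Compare a data-map value against a rule value: numeric when both
    sides parse as numbers, otherwise case-insensitive string equality."""
    try:
        l, r = float(left), float(right)
    except (TypeError, ValueError):
        if op in ("==", "!="):
            l, r = str(left).lower(), str(right).lower()
        else:
            return False  # assumption: ordering comparisons need numbers
    return {
        "==": l == r, "!=": l != r,
        ">": l > r, "<": l < r,
        ">=": l >= r, "<=": l <= r,
    }[op]

def condition_matches(data, key, op, value):
    # A missing key never matches any condition.
    if key not in data:
        return False
    return compare(data[key], op, value)

data = {"category": "Spam", "score": "85"}
print(condition_matches(data, "category", "==", "spam"))  # True (case-insensitive)
print(condition_matches(data, "score", ">", "80"))        # True (numeric)
print(condition_matches(data, "urgency", ">", "5"))       # False (missing key)
```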
Then Clause
The then clause is comma-separated key=value assignments that are merged onto the output map:
Then clause examples

```yaml
# Literal values (quoted)
then: action = "escalate", priority = "high"

# Reference a data map key (unquoted) — copies the value
then: action = "deliver", out_summary = summary

# Clear a key (set to empty string)
then: important = "false", summary = ""
```
Output map construction
The output map starts as the full merged data (client input + LLM output). The matching rule's then values are merged on top — overriding specific keys while everything else passes through unchanged. You don't need to explicitly copy fields you want to keep.
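The merge order described above can be sketched in Python (an illustrative model; the function and variable names are not part of the API):

```python
def build_output(client_input, llm_output, then_assignments):
    """The output map starts as client input merged with LLM output;
    the matching rule's then-assignments are layered on top, overriding
    only those keys while everything else passes through unchanged."""
    output = {**client_input, **llm_output}
    output.update(then_assignments)
    return output

client_input = {"from": "deals@shop.com", "subject": "50% off everything"}
llm_output = {"category": "spam", "urgency": "2"}
then_assignments = {"action": "junk"}

# All input and LLM keys pass through; 'action' is added on top.
print(build_output(client_input, llm_output, then_assignments))
```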
Evaluation Modes
The mode field controls how rules are evaluated:
first-match — stops at the first matching rule. That rule's then values are merged onto the output map.
all — evaluates every rule. All matching rules' then values are merged onto the output map in order. Later rules overwrite earlier ones on key conflicts.
first-match: pick one action

```yaml
version: 2
mode: first-match
llm:
  model: anthropic/claude-haiku-4-5
  prompt: |
    Analyze the content type and determine the best action.
  output:
    content_type: the MIME type of the content
rules:
  - if: content_type == "application/pdf"
    then: model = "anthropic/claude-haiku-4-5"
finally: action = "skip"
```
all: accumulate multiple tags

```yaml
version: 2
mode: all
llm:
  model: anthropic/claude-haiku-4-5
  prompt: |
    Analyze this support ticket.
  output:
    sentiment: one of positive, negative, neutral
    topic: one of billing, technical, account, other
    language: ISO 639-1 language code
rules:
  - name: negative
    if: sentiment == "negative"
    then: escalate = "true"
  - name: billing
    if: topic == "billing"
    then: team = "finance"
  - name: technical
    if: topic == "technical"
    then: team = "engineering"
  - name: non-english
    if: language != "en"
    then: needs_translation = "true"
```
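The difference between the two modes can be sketched in Python. This is an assumed model of the behavior described above, not the production evaluator; rules are represented here as (name, predicate, then-assignments) tuples.

```python
def evaluate(rules, data, mode):
    """first-match stops at the first matching rule; all applies every
    matching rule in order, later rules overwriting earlier keys."""
    output = dict(data)
    matched = []
    for name, pred, then in rules:
        if pred(data):
            matched.append(name)
            output.update(then)
            if mode == "first-match":
                break  # stop at the first matching rule
    return output, matched

rules = [
    ("negative", lambda d: d.get("sentiment") == "negative", {"escalate": "true"}),
    ("billing",  lambda d: d.get("topic") == "billing",      {"team": "finance"}),
]
data = {"sentiment": "negative", "topic": "billing"}

print(evaluate(rules, data, "first-match")[1])  # ['negative']
print(evaluate(rules, data, "all")[1])          # ['negative', 'billing']
```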
Verdicts
When you evaluate data against a router's ruleset, the response contains a verdict object alongside the raw LLM provider response. The verdict contains:
output — the full merged output map (client input + LLM output + rule overrides)
matched_rule — name of the matched rule (first-match mode)
matched_rules — list of matched rule names (all mode)
total_rules — total number of rules in the ruleset
elapsed_us — rule evaluation time in microseconds
The full LLM provider JSON response (chat completion format) is returned at the top level alongside the verdict. This gives you access to both the structured rule output and the raw LLM response including usage, model info, and choices.
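Putting the fields above together, a response might be consumed like this. The payload layout here is an assumption built from the field list; the provider fields shown are placeholders.

```python
# Illustrative response shape; field names are taken from the verdict
# description above, and the exact payload layout is an assumption.
response = {
    "model": "anthropic/claude-haiku-4-5",   # placeholder provider field
    "verdict": {
        "output": {"category": "spam", "action": "junk"},
        "matched_rule": "spam",
        "total_rules": 3,
        "elapsed_us": 142,
    },
}

# Pull the structured rule output from the verdict.
action = response["verdict"]["output"].get("action", "inbox")
print(action)  # junk
```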
The llm block is required and calls an LLM before rules evaluate. The LLM analyzes the input data and returns structured output that gets merged into the data map for rules to act on.
model — the model to use, in provider/model format (required)
prompt — system prompt in plain English (required)
output — map of key names to descriptions; the engine auto-generates the JSON format instruction (required)
fallback — ordered list of backup models if the primary fails (optional)
max_tokens — maximum tokens for the response (optional, defaults to 4096)
Attachment classifier with LLM

```yaml
version: 2
mode: first-match
llm:
  model: anthropic/claude-haiku-4-5
  prompt: |
    You are an email attachment classifier. Determine if this is a
    meaningful document worth saving (receipt, contract, report)
    versus email infrastructure (calendar invites, XML logs, signatures).
  output:
    importance_score: integer from 0 to 100
    category: one of document, receipt, photo, calendar, log, signature, other
    summary: one sentence description of the attachment
rules:
  - if: importance_score >= 50
    then: important = "true"
  - if: importance_score >= 25
    then: important = "maybe"
finally: important = "false", summary = ""
```
How it works: The engine sends the user's prompt plus auto-generated JSON format instructions to the LLM. The client's input data is sent as the user message. The LLM response is parsed as JSON, and each key from output is merged into the data map. Rules then evaluate against the merged map.
Image support: Pass images to the LLM using reserved keys in the data map: _image_url (image URL) or _image_base64 (base64 data). These are sent as image content blocks to vision-capable models and stripped from the output.
LLM block always runs
The LLM block runs for every evaluation — it cannot be conditional. If you need conditional model routing (e.g., "use haiku for PDFs, skip for everything else"), use rules to determine the action based on the LLM's output and let the caller handle the model call based on the verdict.
Finally Clause
The optional top-level finally clause uses the same syntax as a then clause — comma-separated key=value assignments. Its behavior depends on the mode:
first-match — finally runs only when no rule matched (acts as a fallback/default).
all — finally always runs after all rules, even when rules matched (acts as a guaranteed post-processing step).
finally as fallback (first-match)

```yaml
version: 2
mode: first-match
llm:
  model: anthropic/claude-haiku-4-5
  prompt: |
    Classify this support ticket.
  output:
    priority: one of critical, high, normal, low
rules:
  - name: critical
    if: priority == "critical"
    then: action = "page-oncall", team = "platform"
  - name: high
    if: priority == "high"
    then: action = "escalate", team = "support-leads"
finally: action = "queue", team = "general-support"
```
finally as post-processing (all mode)

```yaml
version: 2
mode: all
llm:
  model: anthropic/claude-haiku-4-5
  prompt: |
    Analyze this document.
  output:
    language: ISO 639-1 language code
    topic: the main topic
rules:
  - name: non-english
    if: language != "en"
    then: needs_translation = "true"
finally: processed = "true", timestamp = ""
```
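The mode-dependent behavior of finally can be sketched in Python (assumed semantics based on the description above, not the real engine):

```python
def apply_finally(output, matched, mode, finally_assignments):
    """first-match: finally is a fallback, applied only when no rule
    matched. all: finally always runs as a post-processing step."""
    if mode == "first-match":
        if not matched:
            output.update(finally_assignments)
    else:  # mode == "all"
        output.update(finally_assignments)
    return output

# No rule matched in first-match mode: finally supplies the default.
out = apply_finally({"priority": "normal"}, [], "first-match",
                    {"action": "queue"})
print(out["action"])  # queue

# Rules matched in all mode: finally still runs.
out = apply_finally({"needs_translation": "true"}, ["non-english"], "all",
                    {"processed": "true"})
print(out["processed"])  # true
```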
Ruleset Lifecycle
Each router has a single ruleset. Create or update it with a PUT request — the new YAML replaces the previous definition entirely. Retrieve the current ruleset with a GET request.
To evaluate data against the ruleset, POST to the router's evaluate endpoint. The ruleset is always active as long as it exists on the router.
Lifecycle
Routers have two states: active and disabled. An active router processes requests normally. A disabled router rejects all incoming requests.
To delete a router, you must first disable it. This two-step process prevents accidental deletion of routers that are actively serving traffic.
The chat completions endpoint is OpenAI-compatible. Send messages to any supported LLM provider using the provider/model format. Supports both synchronous JSON responses and Server-Sent Events (SSE) streaming.
POST /v1/chat/completions — Create a chat completion
Chat completion request

```bash
curl -X POST https://api.multi-router.ai/v1/chat/completions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-sonnet-4-6",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is multi-router?"}
    ],
    "stream": false,
    "max_tokens": 1024
  }'
```
Use the provider/model format for the model parameter. Multi-Router supports:
Anthropic — anthropic/
OpenAI — openai/
Google Gemini — gemini/
Mistral — mistral/
Cohere — cohere/
DeepSeek — deepseek/
xAI — xai/
Perplexity — perplexity/
Groq — groq/
Together — together/
Fireworks — fireworks/
AWS Bedrock — bedrock/
Bring Your Own Key (BYOK)
Multi-Router uses your own provider API keys. Configure your keys for each provider in the dashboard or via the provider keys API. Your keys are encrypted and stored securely in AWS Secrets Manager.
Models
The models endpoint returns the catalog of available AI models with pricing, capabilities, and provider-specific identifiers. This is a public endpoint that does not require authentication.
GET /v1/models — List all models
GET /v1/models/:providerId/:modelId — Get a specific model
Each router has a single ruleset that defines if/then rules with a required LLM block. Rules evaluate against a flat string key-value data map and produce an output map. The LLM analyzes input data before rules evaluate.
Two evaluation modes: first-match stops at the first matching rule, while all merges all matching rules' outputs together.
PUT /v1/routers/:routerId/ruleset — Create or update the ruleset
Creates or replaces the router's ruleset. The ruleset is a YAML string with a required LLM block and if/then rules. Invalid syntax is rejected with a 400 error.
Request

```bash
curl -X PUT https://api.multi-router.ai/v1/routers/rt_8f3k2m9x4n1p/ruleset \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "description": "Classify emails by category and urgency",
    "ruleset": "version: 2\nmode: first-match\nllm:\n  model: anthropic/claude-haiku-4-5\n  prompt: Classify this email.\n  output:\n    category: one of spam, newsletter, other\nrules:\n  - if: category == \"spam\"\n    then: action = \"junk\"\nfinally: action = \"inbox\""
  }'
```
POST /v1/routers/:routerId/evaluate — Evaluate data against the router's ruleset
Evaluates a flat string key-value data map against the router's ruleset. The LLM is called first, then rules evaluate against the merged data. Returns the full LLM provider response alongside a verdict containing the rule output.
Request

```bash
curl -X POST https://api.multi-router.ai/v1/routers/rt_8f3k2m9x4n1p/evaluate \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "data": {
      "from": "deals@shop.com",
      "subject": "50% off everything",
      "text": "Big sale this weekend..."
    }
  }'
```