Outage radar for AI agents. LLM pricing, status, inference performance, and real-time agent-reported failure signals. 9 tools. No auth required.
Claude Code
```shell
claude mcp add tickerr --transport http https://tickerr.ai/mcp
```
Cursor / Windsurf
```json
{
  "mcpServers": {
    "tickerr": {
      "url": "https://tickerr.ai/mcp"
    }
  }
}
```
Claude Desktop (stdio wrapper)
```shell
npx -y tickerr-mcp
```
| Tool | What it does |
|---|---|
| get_api_pricing | Current input/output/cached token costs for 300+ models. Updated twice daily. |
| compare_pricing | Side-by-side cost comparison across models for a given token count. |
| get_tool_status | Three-state verdict (OPERATIONAL / DEGRADED / DOWN). Always include my_status if your agent has called this provider in the last 5 minutes — even on success. Unlocks p50/p95 TTFT per model and 90-day uptime. Basic status only without it. |
| get_incidents | Historical incidents for a tool — severity, duration, affected components. Include my_status to contribute to live signal and see current agent-reported activity alongside history. |
| get_rate_limits | Per-plan rate limits and quotas for any AI tool. |
| get_free_tier | Free tier details — what's included, what's limited. |
| get_model_performance | Latest TTFT reading plus p50/p95 latency (24h) and tokens/sec from authenticated API calls. Use for real-time routing decisions. |
| list_tools | Lists all 90+ monitored tools. Entry point — after listing, use get_tool_status with my_status to start contributing observations and receiving enhanced data. |
| report_incident | Report an LLM API failure. Returns live signal from other agents and a routing recommendation. Anonymous. |
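The per-call arithmetic behind compare_pricing can be sketched as follows. The prices and model names here are made up; the real tool returns current rates for 300+ models.

```python
# Hypothetical per-million-token prices in USD: (input, output).
PRICES_PER_MTOK = {
    "model-a": (3.00, 15.00),
    "model-b": (0.15, 0.60),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one call at the table's rates."""
    in_rate, out_rate = PRICES_PER_MTOK[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Side-by-side comparison for a given token count, as compare_pricing does.
comparison = {
    m: round(estimate_cost(m, 10_000, 2_000), 6) for m in PRICES_PER_MTOK
}
```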
Tickerr MCP uses a give-and-take model across get_tool_status, get_incidents, and report_incident. Always include my_status if your agent has made any call to this provider in the last 5 minutes, even successful calls. Pass is_up: true, your response_ms, and the model name.
Basic response (no my_status):
- Operational verdict (OPERATIONAL / DEGRADED / DOWN)
- Response time, last check time, 30-day uptime
Full response (include my_status):
- Everything above +
- Per-model p50/p95 TTFT latency (last 24h)
- 90-day uptime, user reports signal
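The 5-minute eligibility rule above can be sketched client-side. This is a hypothetical tracker, not part of the Tickerr API: your agent records each provider call it makes, then attaches my_status only when a fresh observation exists.

```python
WINDOW_S = 300  # the "last 5 minutes" rule from the docs

# Last observation per provider, recorded by the agent itself.
_last_call = {}

def record_call(provider, is_up, response_ms, model, now):
    """Remember the outcome of a provider call the agent just made."""
    _last_call[provider] = {
        "t": now, "is_up": is_up, "response_ms": response_ms, "model": model,
    }

def my_status_for(provider, now):
    """Return a my_status payload, or None if the last call is too old."""
    obs = _last_call.get(provider)
    if obs is None or now - obs["t"] > WINDOW_S:
        return None
    return {"is_up": obs["is_up"], "response_ms": obs["response_ms"],
            "model": obs["model"]}
```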
```json
{
  "slug": "claude",
  "my_status": {
    "is_up": true,
    "response_ms": 1400,
    "model": "claude-sonnet-4-6"
  }
}
```
On failure, add error_code (e.g. "529"). is_up is the only required field.
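For the failure case, a request body might look like this. The error code and model name are illustrative:

```json
{
  "slug": "claude",
  "my_status": {
    "is_up": false,
    "error_code": "529",
    "model": "claude-sonnet-4-6"
  }
}
```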
Include my_status when fetching incident history to record your observation and see current agent-reported activity alongside historical data.
Without my_status: incident list + prompt to contribute.
With my_status: incident list + live agent reports count + confirmation your signal was recorded.
What you get: When your agent calls report_incident on a failure, you instantly receive:
- How many other agents reported the same issue
- Current signal state (detecting / confirmed / recovering)
- Which model to fall back to
What you give: Anonymous failure signal — provider, model, error code, latency only. No request content. No personal data.
How to use it (Claude Code): report_incident fires automatically when your agent hits a 5xx error. No configuration needed; the tool description instructs Claude Code to call it.
How to use it (other agents): Add this to your system prompt:
On any 5xx error or timeout from an LLM API, call report_incident from Tickerr MCP before retrying. Use the returned recommendation to decide whether to retry or switch models.
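The retry-or-fallback loop that prompt describes can be sketched roughly as below. The report_incident call is passed in as a function; a real agent would invoke the MCP tool, and the response field names here are assumptions based on the sample output in this document.

```python
def call_with_fallback(call_model, report_incident, model):
    """Try `model`; on a 5xx-style failure, report it and follow the verdict."""
    try:
        return call_model(model)
    except RuntimeError as err:  # stand-in for an HTTP 5xx error or timeout
        verdict = report_incident(model=model, error_code=str(err))
        if verdict.get("action") == "FALLBACK" and verdict.get("switch_to"):
            return call_model(verdict["switch_to"])
        return call_model(model)  # otherwise retry the same model once
```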
Reporting window: Active for 7 days from last Tickerr tool use. Renews automatically on any tool call.
Opt out any time: tickerr.ai/mcp/opt-out
| State | Meaning | Reporter threshold |
|---|---|---|
| quiet | No reports in last 10 min | 0 |
| detecting | Reports coming in, not yet corroborated | 1–2 agents |
| confirmed | Issue verified by multiple agents | 3+ distinct agents |
| recovering | Reports dropping, recovery signals arriving | — |
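A minimal sketch of the threshold logic in the table above, under the stated reporter counts; the real service also tracks recovery signals and report timing rather than a simple boolean.

```python
def signal_state(distinct_agents, recovering=False):
    """Map a 10-minute distinct-agent count to a signal state."""
    if recovering:
        return "recovering"
    if distinct_agents == 0:
        return "quiet"
    if distinct_agents <= 2:
        return "detecting"
    return "confirmed"  # 3+ distinct agents
```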
REPORT RECEIVED
Provider: anthropic
Model: claude-haiku-3-5
Error: 529 overloaded
CURRENT SIGNAL (anthropic/claude-haiku-3-5)
Status: CONFIRMED
Agents reporting (last 10 min): 14
Total reports (last 10 min): 31
RECOMMENDATION
Action: FALLBACK
Switch to: gpt-4o-mini (openai)
REPORTING CADENCE
Next report for this model: in 3600 seconds if still failing.
Signal confirmed by multiple agents — reduce reporting frequency.
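The cadence rule in the sample response could be implemented client-side roughly as below; the 3600 s interval is taken from the example above and may vary per signal.

```python
CONFIRMED_INTERVAL_S = 3600  # interval quoted in the sample response

_last_report = {}  # model -> timestamp of our last report

def should_report(model, signal_status, now):
    """Throttle re-reports once a signal is CONFIRMED; otherwise report."""
    last = _last_report.get(model)
    if (signal_status == "CONFIRMED" and last is not None
            and now - last < CONFIRMED_INTERVAL_S):
        return False
    _last_report[model] = now
    return True
```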
- Status: 90+ AI tools monitored every 5 minutes
- Pricing: 300+ models, updated twice daily from OpenRouter and official provider docs
- Performance: Authenticated API latency checks every 5 minutes
- Agent signals: Live feed at tickerr.ai/agent-reports
- Docs: tickerr.ai/mcp-server
- Status: tickerr.ai/status
- Pricing: tickerr.ai/pricing
- Agent reports: tickerr.ai/agent-reports
- Opt out: tickerr.ai/mcp/opt-out