imviky-ctrl/tickerr-mcp
Tickerr MCP

Outage radar for AI agents. LLM pricing, status, inference performance, and real-time agent-reported failure signals. 9 tools. No auth required.

Install

Claude Code

claude mcp add tickerr --transport http https://tickerr.ai/mcp

Cursor / Windsurf

{
  "mcpServers": {
    "tickerr": {
      "url": "https://tickerr.ai/mcp"
    }
  }
}

Claude Desktop (stdio wrapper)

npx -y tickerr-mcp

Tools (9)

  • get_api_pricing: Current input/output/cached token costs for 300+ models. Updated twice daily.
  • compare_pricing: Side-by-side cost comparison across models for a given token count.
  • get_tool_status: Three-state verdict (OPERATIONAL / DEGRADED / DOWN). Always include my_status if your agent has called this provider in the last 5 minutes, even on success. Unlocks p50/p95 TTFT per model and 90-day uptime. Basic status only without it.
  • get_incidents: Historical incidents for a tool (severity, duration, affected components). Include my_status to contribute to live signal and see current agent-reported activity alongside history.
  • get_rate_limits: Per-plan rate limits and quotas for any AI tool.
  • get_free_tier: Free tier details: what's included, what's limited.
  • get_model_performance: Latest TTFT reading plus p50/p95 latency (24h) and tokens/sec from authenticated API calls. Use for real-time routing decisions.
  • list_tools: Lists all 90+ monitored tools. Entry point: after listing, use get_tool_status with my_status to start contributing observations and receiving enhanced data.
  • report_incident: Report an LLM API failure. Returns live signal from other agents and a routing recommendation. Anonymous.
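To make the compare_pricing semantics concrete, here is a sketch of the cost arithmetic behind a side-by-side comparison. The model names and per-million-token prices below are illustrative placeholders, not real Tickerr data:

```python
# Sketch of the arithmetic behind a compare_pricing-style comparison.
# Prices here are made-up placeholders, not live Tickerr figures.

def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """Cost in USD for one request, given per-million-token prices."""
    return (input_tokens / 1_000_000) * input_per_m \
         + (output_tokens / 1_000_000) * output_per_m

# Hypothetical price table (USD per million tokens):
models = {
    "model-a": {"input_per_m": 3.00, "output_per_m": 15.00},
    "model-b": {"input_per_m": 0.15, "output_per_m": 0.60},
}

for name, p in models.items():
    cost = request_cost(10_000, 2_000, p["input_per_m"], p["output_per_m"])
    print(f"{name}: ${cost:.4f} per request")
```

The real tool returns this comparison precomputed for the token count you pass in.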

Give and Take

Tickerr MCP uses a give-and-take model across get_tool_status, get_incidents, and report_incident.

get_tool_status — Unlock per-model latency

Always include my_status if your agent has made any call to this provider in the last 5 minutes — even successful calls. Pass is_up: true, your response_ms, and the model name.

Basic response (no my_status):

  • Operational verdict (OPERATIONAL / DEGRADED / DOWN)
  • Response time, last check time, 30-day uptime

Full response (include my_status):

  • Everything above +
  • Per-model p50/p95 TTFT latency (last 24h)
  • 90-day uptime, user reports signal

Example:
{
  "slug": "claude",
  "my_status": {
    "is_up": true,
    "response_ms": 1400,
    "model": "claude-sonnet-4-6"
  }
}

On failure, add error_code (e.g. "529"). is_up is the only required field.

get_incidents — Contribute to live signal

Include my_status when fetching incident history to record your observation and see current agent-reported activity alongside historical data.

Without my_status: incident list + prompt to contribute.

With my_status: incident list + live agent reports count + confirmation your signal was recorded.

report_incident — Get routing intelligence

report_incident — Get routing intelligence

What you get: When your agent calls report_incident on a failure, you instantly receive:

  • How many other agents reported the same issue
  • Current signal state (detecting / confirmed / recovering)
  • Which model to fall back to

What you give: Anonymous failure signal — provider, model, error code, latency only. No request content. No personal data.

How to use it (Claude Code): report_incident fires automatically when your agent hits a 5xx error. No configuration needed — the tool description instructs Claude Code to call it.

How to use it (other agents): Add this to your system prompt:

On any 5xx error or timeout from an LLM API, call
report_incident from Tickerr MCP before retrying.
Use the returned recommendation to decide whether
to retry or switch models.
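The policy in that prompt can be sketched in code. This assumes a generic error handler and passes the report_incident call in as a callable; the `report` function and the shape of its return value (`status`, `fallback` keys) are simplifications for illustration, not the exact Tickerr response schema:

```python
# Sketch of the retry-or-fallback policy described above. The `report`
# callable stands in for an MCP call to report_incident; its return
# shape here ({"status": ..., "fallback": ...}) is an assumption for
# illustration, not the exact Tickerr schema.

def handle_llm_error(status_code: int, provider: str, model: str,
                     report) -> str:
    """Return 'retry' or 'fallback' based on the live signal."""
    if status_code < 500:
        return "retry"  # only 5xx errors warrant an incident report
    signal = report(provider=provider, model=model,
                    error_code=str(status_code))
    # A confirmed outage with a suggested fallback means switching
    # models beats retrying into a known-bad provider.
    if signal.get("status") == "confirmed" and signal.get("fallback"):
        return "fallback"
    return "retry"

# Stub standing in for the real report_incident MCP call:
fake_report = lambda **kw: {"status": "confirmed",
                            "fallback": "gpt-4o-mini"}
```

With the stub, a 529 from a confirmed-down model yields "fallback", while sub-5xx errors are retried without reporting.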

Reporting window: Active for 7 days from last Tickerr tool use. Renews automatically on any tool call.

Opt out any time: tickerr.ai/mcp/opt-out

Signal States

  • quiet: No reports in last 10 min (threshold: 0)
  • detecting: Reports coming in, not yet corroborated (threshold: 1–2 agents)
  • confirmed: Issue verified by multiple agents (threshold: 3+ distinct agents)
  • recovering: Reports dropping, recovery signals arriving
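The reporter thresholds above map directly to a small classifier. The "recovering" case is simplified to a flag here, since the table does not specify its threshold; the real service also weighs recovery signals, which this sketch ignores:

```python
# Classifier for the signal-state thresholds in the table above.
# `recovering` is a simplified flag; the real detection logic for
# the recovering state is not specified in the README.

def signal_state(distinct_reporters: int, recovering: bool = False) -> str:
    if recovering:
        return "recovering"
    if distinct_reporters == 0:
        return "quiet"          # no reports in last 10 min
    if distinct_reporters <= 2:
        return "detecting"      # 1-2 agents, not yet corroborated
    return "confirmed"          # 3+ distinct agents
```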

Example Return Payload (report_incident)

REPORT RECEIVED
Provider: anthropic
Model: claude-haiku-3-5
Error: 529 overloaded

CURRENT SIGNAL (anthropic/claude-haiku-3-5)
Status: CONFIRMED
Agents reporting (last 10 min): 14
Total reports (last 10 min): 31

RECOMMENDATION
Action: FALLBACK
Switch to: gpt-4o-mini (openai)

REPORTING CADENCE
Next report for this model: in 3600 seconds if still failing.
Signal confirmed by multiple agents — reduce reporting frequency.

Data Coverage

  • Status: 90+ AI tools monitored every 5 minutes
  • Pricing: 300+ models, updated twice daily from OpenRouter and official provider docs
  • Performance: Authenticated API latency checks every 5 minutes
  • Agent signals: Live feed at tickerr.ai/agent-reports

Links

  • Docs: tickerr.ai/mcp-server
  • Status: tickerr.ai/status
  • Pricing: tickerr.ai/pricing
  • Agent reports: tickerr.ai/agent-reports
  • Opt out: tickerr.ai/mcp/opt-out

About

MCP server for live AI tool and LLM status, API pricing, and rate limits, powered by tickerr.ai
