🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓
Fastest enterprise AI gateway (50x faster than LiteLLM) with adaptive load balancer, cluster mode, guardrails, 1000+ models support & <100 µs overhead at 5k RPS.
One API for 30+ LLMs, OpenAI, Anthropic, Bedrock, Azure. Caching, guardrails & cost controls. Go-native LiteLLM & Kong AI Gateway alternative.
Nadir is a Python package designed to dynamically choose the best LLM for your prompt by balancing complexity, cost, and response time.
Stop overpaying to run your agents. Kalibr routes every request to lower-cost model and tool paths without degrading performance.
Open-source, FOCUS-aligned FinOps knowledge skill and MCP for AI coding assistants. 28 reference files spanning cloud cost (AWS/Azure/GCP/OCI), AI inference economics, Kubernetes, data platforms, allocation, chargeback, anomaly management, waste detection, and GreenOps. Installs into 11 AI tools. Refreshed bi-monthly. Built by OptimNow.
Rails-native LLM cost ledger: track spend by provider, model, and feature with self-hosted storage and budget guardrails.
Know what your AI agents cost. API gateway with budget enforcement, session tracking, and MCP tools.
Tools, libraries, papers, and patterns for reducing the cost of running large language models in production.
A curated list of strategies, tools, papers, and resources for reducing LLM token costs and improving efficiency in production.
Open-source AI + data cost intelligence — 18 connectors (Claude, GPT, Gemini, dbt, warehouses, BI, cloud, CI/CD), cache-tier visibility, anomaly detection, MIT licensed
Track, visualize, and optimize LLM API spending. Monitor OpenAI & Anthropic costs per feature, detect waste, suggest savings. Zero-config Python profiler.
An LLM Cost Calculator for all the major services
Cut your OpenClaw / ZeroClaw token bill. Find which model earns its cost. Prove whether optimizations actually work. Local, no upload.
Local-first observability for Claude Code - drill into costs, prompts, and tool calls turn by turn. Zero instrumentation.
Your Claude Code transcripts as SQL: local DuckDB plus agent skills that turn them into investigation playbooks.
VS Code extension + MCP server that validates prompts, routes to the cheapest LLM, and projects token × turn cost before the call. Cuts Copilot premium-request burn on agent loops.
Your AI agents are burning money. AImeter shows you exactly how much.
Just like synapses optimize neural transmission with precise weights, Synapse TOON optimizes your API payloads with precision encoding. 30-60% fewer tokens, neural-grade efficiency.