See What Your AI Agents Actually Do

A drop-in proxy between your apps and LLM providers. Full visibility into costs, tool calls, and agent behavior. Zero code changes — just change the base URL.

What You Get

Everything QA and dev teams need to understand AI agent behavior

Real-Time Cost Tracking

Spending breakdown by provider, model, and session. Know exactly where your AI budget goes — before it spirals.
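
To make the per-provider, per-model, per-session breakdown concrete, here is a minimal sketch of the underlying arithmetic. The prices and the `add_cost` helper are hypothetical illustrations, not Inspector's API; real per-token prices vary by provider and change over time.

```python
from collections import defaultdict

# Hypothetical (input, output) prices per million tokens; not real quotes.
PRICE_PER_M = {"gpt-4o": (2.50, 10.00), "gpt-4o-mini": (0.15, 0.60)}

def add_cost(totals, model, session_id, prompt_tokens, completion_tokens):
    """Accumulate spend per (model, session) from the token usage a proxy sees."""
    in_price, out_price = PRICE_PER_M[model]
    cost = (prompt_tokens * in_price + completion_tokens * out_price) / 1_000_000
    totals[(model, session_id)] += cost
    return cost

totals = defaultdict(float)
add_cost(totals, "gpt-4o", "s1", 1000, 500)        # one traced call
add_cost(totals, "gpt-4o-mini", "s1", 2000, 1000)  # a cheaper follow-up
```

Because the proxy sees every request and its usage counts, this aggregation needs no instrumentation in the application itself.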

Session Intelligence

Automatically correlates related API calls into logical sessions using conversation fingerprinting and tool-call linking.
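
The fingerprinting heuristics aren't spelled out here, but the core idea can be sketched: two requests belong to the same conversation when they share their earliest messages, so hashing that prefix yields a stable session key. The `session_fingerprint` helper below is illustrative, not Inspector's implementation.

```python
import hashlib
import json

def session_fingerprint(messages):
    """Key a request by a hash of its conversation prefix (system + first
    user turn). Later turns in the same conversation share that prefix,
    so they map to the same session. Illustrative only."""
    prefix = messages[:2]
    blob = json.dumps(prefix, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

turn1 = [{"role": "system", "content": "You are helpful."},
         {"role": "user", "content": "Book a flight to Oslo"}]
# The follow-up request carries the whole history, prefix included.
turn2 = turn1 + [{"role": "assistant", "content": "Which date?"},
                 {"role": "user", "content": "Friday"}]

assert session_fingerprint(turn1) == session_fingerprint(turn2)
```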

Tool/Agent Observability

Visualize complete agentic workflows: user prompt, LLM response, tool call, tool result. Debug broken chains instantly.
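
In the OpenAI chat format, each assistant tool call carries an `id` and the matching tool result references it via `tool_call_id`, which is what makes these chains reconstructable from a trace. A sketch of that linkage (the `link_tool_calls` helper is illustrative, not Inspector's code):

```python
def link_tool_calls(messages):
    """Pair each assistant tool call with its tool result via tool_call_id.
    Calls with no matching result indicate a broken chain."""
    calls, results = {}, {}
    for msg in messages:
        if msg["role"] == "assistant":
            for call in msg.get("tool_calls", []):
                calls[call["id"]] = call["function"]["name"]
        elif msg["role"] == "tool":
            results[msg["tool_call_id"]] = msg["content"]
    linked = {cid: (name, results.get(cid)) for cid, name in calls.items()}
    broken = [cid for cid, (_, res) in linked.items() if res is None]
    return linked, broken

messages = [
    {"role": "user", "content": "What's 2+2?"},
    {"role": "assistant", "tool_calls": [
        {"id": "call_1",
         "function": {"name": "calculator", "arguments": '{"expr": "2+2"}'}}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "4"},
    {"role": "assistant", "content": "2+2 is 4."},
]
linked, broken = link_tool_calls(messages)
```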

QA Verdicts

Automated PASS / REVIEW / FAIL checks per session. QA engineers see health status first, details second.
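
The actual verdict rules aren't documented in this overview; as a hedged sketch, rule-based checks of this shape would produce the three states (the specific rules and field names below are assumptions):

```python
def verdict(session):
    """Illustrative session-health rules: FAIL on provider errors,
    REVIEW on broken tool chains, PASS otherwise."""
    if session.get("error_count", 0) > 0:
        return "FAIL"
    if session.get("unmatched_tool_calls", 0) > 0:
        return "REVIEW"
    return "PASS"

assert verdict({"error_count": 1}) == "FAIL"
assert verdict({"unmatched_tool_calls": 2}) == "REVIEW"
assert verdict({}) == "PASS"
```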

Story View

Each session rendered as a step-by-step narrative timeline. Readable conversations, not JSON blobs.

One-Click Reports

Generate clean bug reports from any session. Copy as plain text for tickets, or export raw JSON for developers.
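
A plausible shape for the plain-text output, rendered from a traced session; the `render_report` helper and its field names are assumptions for illustration, not the product's actual report format.

```python
def render_report(session):
    """Render a traced session as a ticket-ready plain-text report."""
    lines = [f"Session {session['id']} (verdict: {session['verdict']})"]
    for step in session["steps"]:
        lines.append(f"  [{step['role']}] {step['summary']}")
    lines.append(f"Total cost: ${session['cost']:.2f}")
    return "\n".join(lines)

report = render_report({
    "id": "s1", "verdict": "PASS", "cost": 12.84,
    "steps": [{"role": "user", "summary": "asked for refund status"},
              {"role": "assistant", "summary": "called lookup_order tool"}],
})
```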

How It Works

One config change. Full observability.

1. Point Your SDK

Change your LLM SDK's base URL to the Inspector proxy. No code changes, no SDK wrappers.

# OpenAI Python SDK
from openai import OpenAI

client = OpenAI(base_url="http://inspector:8080")

2. Proxy Captures Everything

Every request and response is traced and stored. Streaming supported with zero added latency.

App → Inspector → OpenAI (traces saved)
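
Conceptually, the proxy records each request/response pair as it passes through. A toy in-memory version of that idea (the `TraceStore` class, its fields, and anything about streaming re-assembly are assumptions beyond this sketch):

```python
import time

class TraceStore:
    """Minimal sketch of what a capturing proxy records per call:
    the request body, the response body, and a timestamp."""

    def __init__(self):
        self.traces = []

    def record(self, request, response):
        self.traces.append({
            "ts": time.time(),
            "request": request,
            "response": response,
        })
        return len(self.traces) - 1  # trace id

store = TraceStore()
tid = store.record(
    {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hi"}]},
    {"choices": [{"message": {"content": "Hello!"}}]},
)
```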

3. Open the Dashboard

Sessions, verdicts, conversation flows, costs — all visible immediately. Zero configuration.

# Dashboard ready at http://inspector:8080
Sessions: 47
Total cost: $12.84

The Problem We Solve

AI agent traces are opaque. We make them readable.

Without Inspector                        | With Inspector
Agent traces are opaque JSON blobs       | Readable conversation timelines
No quick way to assess session health    | Instant PASS / REVIEW / FAIL verdicts
Hard to track tool call flows            | Visual tool usage with arguments & results
Costs scattered across providers         | Unified cost dashboard by model and session
Bug reports need manual trace assembly   | One-click report generation

Who It's For

Built for everyone who needs to understand what AI agents are doing

QA Engineers

Verify agent behavior, spot failures, generate bug reports. No developer skills needed — the story view speaks plain language.

Developers

Debug agent execution flows, trace tool interactions, inspect raw payloads. Full request/response detail for every API call.

Product & Management

Track LLM costs, performance, and reliability across sessions. Know what your AI spend buys — and where to optimize.

Stop Guessing What Your AI Does

Drop-in proxy. Full visibility. Zero code changes.