Complete observability for every agent run.

Monitor token usage, latency, cost, and success rate across all your agents and workflows in real time. Drill into distributed traces to understand exactly what happened.

Observe

Every execution is automatically instrumented. Metrics, traces, and logs are captured without any SDK changes or configuration.

success · data-processor · 3.1s
success · report-agent · 4.8s
failed · fraud-detector · 1.2s
success · data-processor · 2.9s

Analyze

Charts for operations, token usage, latency, cost, success rate, and model usage. Filterable by time range and individual agent.

Operations: 1,482
Success: 96.2%
Avg latency: 412ms
LLM cost: $2.47
Token usage · 24h

Optimize

Drill into traces to find bottlenecks, compare token costs across models, and surface failing patterns before they become incidents.

agent.run · 4.2s
llm.complete · 2.4s
tool.execute · 1.3s
llm.complete · 1.1s
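Comparing token costs across models is per-token arithmetic over each span's input and output counts. A minimal sketch, assuming hypothetical per-1K-token prices and model names (substitute your providers' real rates):

```python
# Hypothetical per-1K-token prices; real model pricing varies.
PRICES = {
    "model-large": {"in": 0.0030, "out": 0.0060},
    "model-small": {"in": 0.0005, "out": 0.0015},
}

def llm_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Cost of one completion: tokens / 1000 * per-1K price, input plus output."""
    p = PRICES[model]
    return tokens_in / 1000 * p["in"] + tokens_out / 1000 * p["out"]

# The same workload (840 input / 400 output tokens) priced on two models
for model in PRICES:
    print(f"{model}: ${llm_cost(model, 840, 400):.4f}")
# model-large: $0.0049
# model-small: $0.0010
```

Running the same trace's token counts through several price tables is exactly the comparison the dashboard surfaces per model.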

Every metric, live and sliceable.

A unified dashboard tracks operations, token usage, latency, cost, and success rate over time. Filter by any time range, from the last 5 minutes to 30 days, and drill down to a single agent.

Operations over time with success, failed, and running breakdown
Stacked token charts separating input and output usage
LLM cost breakdown by model over time
Success rate as a running average with trend line
Latency tracking per agent
Tool call frequency and LLM model usage charts
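The running-average success rate shown in the dashboard can be understood as a sliding window over recent execution outcomes. A minimal sketch over hypothetical status values (the dashboard computes this server-side; the window size is an assumption):

```python
from collections import deque

def running_success_rate(statuses, window=100):
    """Success rate over a sliding window of the most recent executions."""
    win = deque(maxlen=window)  # drops the oldest outcome once full
    rates = []
    for status in statuses:
        win.append(1 if status == "success" else 0)
        rates.append(sum(win) / len(win))
    return rates

# Hypothetical stream: 96 successes followed by 4 failures
statuses = ["success"] * 96 + ["failed"] * 4
print(f"{running_success_rate(statuses)[-1]:.1%}")  # 96.0%
```

Plotting `rates` against execution timestamps gives the trend line described above.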
Time range: 1h · 6h · 1d · 7d · 30d
Operations: 1,482 (last 24h)
Success rate: 96.2% (running avg)
Latency: 412ms (p50)
LLM cost: $2.47 (last 24h)
Token usage: input vs. output, last 24h
Success: 1,427 · Failed: 55 · Running: 3

See inside every execution.

Every agent run produces a full distributed trace broken into spans: LLM completions, tool calls, and subagent invocations. Identify where time is spent, which tools are slow, and exactly how many tokens each step consumed.

Full trace waterfall for every execution, automatically captured
Span-level breakdown: agent.run, llm.complete, tool.execute
Token counts and latency per span
Status tracking: running, completed, failed — per execution
Searchable and filterable trace history
Execution logs with structured output per step
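Conceptually, a trace is a tree of spans whose token counts roll up to the execution totals. A minimal sketch with a hypothetical Span type (not the product's data model), reproducing the 1,240-token total of the example trace:

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str
    latency_ms: int
    tokens: int = 0
    children: list["Span"] = field(default_factory=list)

    def total_tokens(self) -> int:
        # Roll token counts up from child spans to the root.
        return self.tokens + sum(c.total_tokens() for c in self.children)

# The example execution: one agent.run with LLM and tool spans beneath it
trace = Span("agent.run", 4210, children=[
    Span("llm.complete", 2380, tokens=840),
    Span("tool.execute: query_db", 1290),
    Span("tool.execute: fmt_output", 790),
    Span("llm.complete (2nd pass)", 1090, tokens=400),
])
print(trace.total_tokens())  # 1240
```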
exec_7f3a9b · completed · 4.2s · 1,240 tok · $0.002
agent.run · 4,210ms
  llm.complete · 2,380ms · 840 tok
  tool.execute: query_db · 1,290ms
  tool.execute: fmt_output · 790ms
  llm.complete (2nd pass) · 1,090ms · 400 tok
Total latency: 4.2s · Input tokens: 840 · Output tokens: 400 · LLM cost: $0.002

Query analytics from your terminal.

Aggregate stats, inspect individual traces, and filter for failures, all without leaving the CLI.

daita cli
# Aggregate stats across all agents
$ daita operations stats --period 24h
Executions: 1,482 • Avg latency: 3.1s • Errors: 0.4%
Tokens in: 2.1M • Tokens out: 640K • Cost: $4.21
 
# Filter by agent and look for failures
$ daita operations stats --agent data-processor --status failed
6 failed executions in the last 24h
Most common error: tool.execute timeout (4×)
 
# Inspect a specific execution trace
$ daita traces view exec_7f3a9b
agent.run ████████████████ 4.2s
llm.complete ████████ 2.4s • 840 tok
tool.execute ████ 1.3s • query_db
llm.complete ████ 1.1s • 400 tok
daita operations stats · daita executions list · daita traces view · daita operations logs · daita agents usage

Ready to see inside your agents?

Start monitoring in minutes. No instrumentation required.