Scan Aggregation
How individual events become meaningful scans.
Overview
A "scan" represents a single AI interaction session - all of the LLM calls and tool invocations that occur between user prompts. Events are grouped by conversation ID to form scans.
Aggregation Rules
Conversation-Based Grouping
Events with the same conversation ID are grouped into a single scan:
User Prompt → LLM Call 1 → Tool Use → LLM Call 2 → Response
└────────────── One Scan ──────────────┘
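The grouping step can be sketched as a simple bucketing by conversation ID. This is an illustrative sketch, not the actual implementation; the event field names (`conversation_id`, `type`) are assumptions:

```python
from collections import defaultdict

# Hypothetical event records; field names are illustrative, not the real schema.
events = [
    {"conversation_id": "abc123", "type": "after_response"},
    {"conversation_id": "abc123", "type": "after_file_edit"},
    {"conversation_id": "def456", "type": "after_response"},
]

# Bucket events by conversation ID: each bucket becomes one scan.
scans = defaultdict(list)
for event in events:
    scans[event["conversation_id"]].append(event)

# Two scans: "abc123" with 2 events, "def456" with 1 event.
```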
Multi-Event Scans
A single scan can contain multiple events of different types:
Scan A (conversation: abc123):
- after_response (tokens: 500)
- after_file_edit
- after_response (tokens: 750)
- after_shell
Total: 1250 tokens, 2 LLM calls, 2 tool calls
Cross-Tool Sessions
Events from different tools always create separate scans, even if timestamps overlap:
Event 1: cursor, conversation: abc → Scan A
Event 2: claude, conversation: xyz → Scan B
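One way to guarantee this separation is to include the tool name in the grouping key, so identical conversation IDs from different tools can never merge. A hedged sketch, with assumed field names:

```python
# Composite grouping key: (tool, conversation_id). Field names are
# illustrative; the actual event schema may differ.
def scan_key(event):
    return (event["tool"], event["conversation_id"])

e1 = {"tool": "cursor", "conversation_id": "abc"}
e2 = {"tool": "claude", "conversation_id": "abc"}

# Even with the same conversation ID, different tools yield different keys,
# so the events land in separate scans.
assert scan_key(e1) != scan_key(e2)
```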
Metrics
Each scan tracks:
- LLM Calls - Count of response events (after_response, after_model, etc.)
- Tool Calls - Count of tool events (after_file_edit, after_shell, etc.)
- Token Usage - Input, output, and thinking tokens aggregated across all events
- Action Counts - Breakdown of edits, reads, shell commands, and failures
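The per-scan metrics above can be derived by one pass over a scan's events. A minimal sketch, assuming a flat `tokens` field per response event; the event-type sets and field names are taken from the examples in this doc, not from a real schema:

```python
from dataclasses import dataclass

# Assumed event-type classification, based on the names used above.
RESPONSE_EVENTS = {"after_response", "after_model"}
TOOL_EVENTS = {"after_file_edit", "after_shell"}

@dataclass
class ScanMetrics:
    llm_calls: int = 0
    tool_calls: int = 0
    tokens: int = 0

def aggregate(events):
    """Fold a scan's events into summary metrics."""
    m = ScanMetrics()
    for e in events:
        if e["type"] in RESPONSE_EVENTS:
            m.llm_calls += 1
            m.tokens += e.get("tokens", 0)
        elif e["type"] in TOOL_EVENTS:
            m.tool_calls += 1
    return m

# The multi-event scan from the example above:
scan_events = [
    {"type": "after_response", "tokens": 500},
    {"type": "after_file_edit"},
    {"type": "after_response", "tokens": 750},
    {"type": "after_shell"},
]
m = aggregate(scan_events)
# → 2 LLM calls, 2 tool calls, 1250 tokens
```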