Anthropic Integration
Lucidic supports automatic tracking of Anthropic Claude completions in your agent sessions, with no extra instrumentation needed.

How It Works

When you initialize the SDK, Lucidic will:

- Instrument the Anthropic client using OpenTelemetry
- Automatically create events for each Claude completion, with no manual event creation needed
- Attach the LLM call to the current active Step (or auto-create one if none exists)
- Support both sync and async operations
What Gets Captured
We automatically capture the following from Anthropic API calls:

- Input: your messages/prompt to Claude
- Model: the Claude model used (e.g. `claude-3-opus-20240229`, `claude-3-5-sonnet-20241022`)
- Output: the Claude response (including streaming)
- Token usage: input and output tokens
- Cost: calculated based on token usage and model pricing
- Timing: duration of the API call
- Images: when using vision capabilities
Why This Matters
LLM calls are a core part of most agent workflows, but without visibility they are impossible to debug or optimize:

- Which call caused the failure?
- Which step was it part of?
- How much did it cost?
- What was the actual response?
Example
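A minimal sketch of basic usage. The Anthropic calls use the standard `anthropic` Python SDK; the `lucidicai` import alias, the `lai.init(...)` / `lai.end_session()` calls, and the `LUCIDIC_API_KEY` environment variable are assumptions for illustration, not verified API.

```python
import os


def claude_with_tracking() -> str:
    # Imports are deferred so this sketch can be read without the
    # packages installed; install with `pip install anthropic`.
    import anthropic
    import lucidicai as lai  # assumed package/alias

    # Assumed initialization call: instruments the Anthropic client
    # via OpenTelemetry so each completion becomes an event.
    lai.init(session_name="anthropic-demo", providers=["anthropic"])

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=256,
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    # The completion above is captured automatically; no manual
    # event creation is needed.
    lai.end_session()  # assumed session-closing call
    return response.content[0].text


# Only run when both keys are present (LUCIDIC_API_KEY is an assumed name).
if os.environ.get("ANTHROPIC_API_KEY") and os.environ.get("LUCIDIC_API_KEY"):
    print(claude_with_tracking())
```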
Streaming Example
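A sketch of the streaming case, under the same assumptions about the `lucidicai` API as above. `client.messages.stream(...)` and `stream.text_stream` are the Anthropic SDK's standard streaming helpers; per the capture list above, the full response and token usage are still recorded once the stream completes.

```python
import os


def stream_claude() -> None:
    import anthropic
    import lucidicai as lai  # assumed package/alias

    lai.init(session_name="anthropic-streaming-demo", providers=["anthropic"])  # assumed
    client = anthropic.Anthropic()

    # The Anthropic streaming helper yields text deltas as they arrive.
    with client.messages.stream(
        model="claude-3-5-sonnet-20241022",
        max_tokens=256,
        messages=[{"role": "user", "content": "Count to five."}],
    ) as stream:
        for text in stream.text_stream:
            print(text, end="", flush=True)

    lai.end_session()  # assumed


if os.environ.get("ANTHROPIC_API_KEY") and os.environ.get("LUCIDIC_API_KEY"):
    stream_claude()
```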
Explicit Step Management
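If you want the LLM call attached to a step you control, rather than an auto-created one, it might look like the following. The step-management calls (`lai.create_step`, `lai.end_step`) and their parameters are hypothetical names used for illustration.

```python
import os


def step_managed_call() -> None:
    import anthropic
    import lucidicai as lai  # assumed package/alias

    lai.init(session_name="anthropic-step-demo", providers=["anthropic"])  # assumed

    # Hypothetical step API: open a step explicitly so the LLM call
    # below attaches to it instead of an auto-created step.
    lai.create_step(state="drafting reply", goal="answer the user")  # assumed signature

    client = anthropic.Anthropic()
    client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=128,
        messages=[{"role": "user", "content": "One-line summary of OpenTelemetry?"}],
    )

    lai.end_step()     # assumed
    lai.end_session()  # assumed


if os.environ.get("ANTHROPIC_API_KEY") and os.environ.get("LUCIDIC_API_KEY"):
    step_managed_call()
```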
Notes

- Requires the `anthropic` Python SDK installed (`pip install anthropic`)
- All Anthropic client methods are instrumented
- Both sync and async methods are supported
- If no step exists when an LLM call is made, Lucidic automatically creates one
- You can still manually use `create_event()` for additional context
- Works with the latest Anthropic Python SDK
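Mixing a manual `create_event()` call with the automatic capture might look like this; the parameter names passed to `create_event()` are assumptions for illustration.

```python
import os


def call_with_extra_context() -> None:
    import anthropic
    import lucidicai as lai  # assumed package/alias

    lai.init(session_name="anthropic-notes-demo", providers=["anthropic"])  # assumed

    client = anthropic.Anthropic()
    client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=64,
        messages=[{"role": "user", "content": "ping"}],
    )

    # Manual event alongside the auto-captured LLM event;
    # these parameter names are hypothetical.
    lai.create_event(description="post-processing", result="parsed reply")

    lai.end_session()  # assumed


if os.environ.get("ANTHROPIC_API_KEY") and os.environ.get("LUCIDIC_API_KEY"):
    call_with_extra_context()
```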