OpenAI Integration
Lucidic makes it easy to automatically track OpenAI completions as part of your agent's behavior, with no code changes required.
How It Works
When you enable the OpenAI provider at initialization (see the sketch after this list), Lucidic will:
- Instrument the OpenAI client using OpenTelemetry
- Automatically log each LLM call as an Event
- Attach the call to the currently active Step
- Automatic Event Creation: every OpenAI API call is automatically captured as an event; no manual `create_event()` or `end_event()` calls are needed
- You get full observability into the prompt, model, cost, result, and more, out of the box
- Works with both sync and async OpenAI client methods
- Even if you forget to create a step, Lucidic will create one automatically when the LLM call happens
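As a rough sketch of what that setup might look like: the `lucidicai` package name, the `lai.init()` signature, and its `providers` argument are assumptions here, not confirmed API; only the OpenAI calls are standard v1.x SDK usage. Check the Lucidic quickstart for the exact initialization call.

```python
import lucidicai as lai  # assumed package name and import alias
from openai import OpenAI

# Assumption: initializing Lucidic with the OpenAI provider is what turns on
# the OpenTelemetry instrumentation described above.
lai.init(session_name="openai-integration-demo", providers=["openai"])

client = OpenAI()  # instrumented automatically; no wrapper or code changes needed

# This call is captured as an Event and attached to the active Step
# (one is auto-created if no step exists yet).
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```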
What Gets Captured
We automatically capture the following from OpenAI API calls:
- Input: your messages/prompt to OpenAI
- Model: the model name (e.g. `gpt-4`, `gpt-3.5-turbo`, `gpt-4o`)
- Output: the OpenAI response (including streaming responses)
- Token usage: input and output tokens
- Cost: calculated based on token usage and model pricing
- Timing: duration of the API call
- Images: when using vision models
Why This Matters
LLM calls are a core part of most agent workflows, but without visibility it's impossible to debug or optimize:
- Which call caused the failure?
- Which step was it part of?
- How much did it cost?
- What was the actual response?
Example
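A minimal sketch of the basic flow. The `lai.*` calls (including `lai.end_session()`) are assumptions about the Lucidic SDK surface; the OpenAI calls are standard v1.x usage.

```python
import lucidicai as lai  # assumed package name and import alias
from openai import OpenAI

lai.init(session_name="support-agent", providers=["openai"])  # assumed signature

client = OpenAI()

# No step is created explicitly here; Lucidic auto-creates one when the call happens,
# and the prompt, model, tokens, cost, and output are recorded as an Event.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what an agent step is in one line."},
    ],
)
print(response.choices[0].message.content)

lai.end_session()  # assumed cleanup call that finalizes the session
```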
Streaming Example
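Streaming responses are captured as well. A sketch, with the same caveat that the Lucidic initialization call is assumed, while `stream=True` is standard OpenAI v1.x usage:

```python
import lucidicai as lai  # assumed package name and import alias
from openai import OpenAI

lai.init(session_name="streaming-demo", providers=["openai"])  # assumed signature

client = OpenAI()

# Assumption: the streamed chunks are recorded as the event's output once the stream completes.
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about observability."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```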
Explicit Step Management
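If you want the LLM call attached to a step you define yourself rather than an auto-created one, a sketch might look like the following. The `lai.create_step()` and `lai.end_step()` names and their parameters are assumptions; consult the SDK reference for the real signatures.

```python
import lucidicai as lai  # assumed package name and import alias
from openai import OpenAI

lai.init(session_name="explicit-steps-demo", providers=["openai"])  # assumed signature

client = OpenAI()

# Hypothetical step parameters; the auto-captured Event attaches to whichever step is active.
lai.create_step(state="Drafting reply", action="Call the model", goal="Produce a draft answer")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Draft a one-line reply to a greeting."}],
)

lai.end_step()  # assumed call to close the step you opened above
```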
Notes
- All OpenAI client methods are instrumented (chat completions, embeddings, etc.)
- Both sync and async methods are supported
- If no step exists when an LLM call is made, Lucidic automatically creates one
- You can always add custom events using `create_event()` for additional context (see the sketch below)
- Works with the latest OpenAI Python SDK (v1.0+)
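For example, a custom event alongside an auto-captured LLM call might look like this. The `lai.create_event()` call and its `description` and `result` parameters are hypothetical illustrations, not confirmed API; check the SDK reference for the real signature.

```python
import lucidicai as lai  # assumed package name and import alias
from openai import OpenAI

lai.init(session_name="custom-events-demo", providers=["openai"])  # assumed signature

client = OpenAI()

# The LLM call below is captured automatically; the custom event adds surrounding context.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Classify this ticket: 'My invoice is wrong.'"}],
)

# Hypothetical parameters for a manually created event.
lai.create_event(
    description="Post-processed model classification",
    result=response.choices[0].message.content.strip().lower(),
)
```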