OpenAI Integration

Lucidic automatically tracks OpenAI completions as part of your agent's behavior; no code changes are required.

How It Works

When you set:
lai.init(..., providers=["openai"])
Lucidic will:
  • Instrument the OpenAI client using OpenTelemetry
  • Automatically log each LLM call as an Event
  • Attach the call to the currently active Step
This means:
  • Automatic Event creation: every OpenAI API call is captured as an Event; no manual create_event() or end_event() calls are needed
  • You get full observability into the prompt, model, cost, result, and more, out of the box
  • Both sync and async OpenAI client methods are supported (see the async sketch after this list)
  • Even if you forget to create a Step, Lucidic creates one automatically when the LLM call happens
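Since async clients are covered as well, here is a minimal async sketch. It assumes only what this section states: that AsyncOpenAI calls are instrumented the same way as sync calls once lai.init(providers=["openai"]) has run.

import asyncio
import lucidicai as lai
from openai import AsyncOpenAI

lai.init(providers=["openai"])

client = AsyncOpenAI()

async def main():
    # Awaited calls are captured as Events just like sync calls
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())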

What Gets Captured

We automatically capture the following from OpenAI API calls:
  • Input: your messages/prompt to OpenAI
  • Model: the model name (e.g. gpt-4, gpt-3.5-turbo, gpt-4o)
  • Output: OpenAI response (including streaming responses)
  • Token usage: input and output tokens
  • Cost: calculated based on token usage and model pricing
  • Timing: duration of the API call
  • Images: image inputs when using vision models (see the sketch below)
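As a sketch of image capture, the snippet below sends a standard OpenAI vision-style request; the image URL is a placeholder you would replace with your own. Per the list above, Lucidic records the image input alongside the text prompt and the response.

import lucidicai as lai
from openai import OpenAI

lai.init(providers=["openai"])

client = OpenAI()

# The image input is captured along with the text prompt and the response
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}},
        ],
    }]
)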

Why This Matters

LLM calls are at the core of most agent workflows, but without visibility into them it is hard to debug or optimize:
  • Which call caused the failure?
  • Which step was it part of?
  • How much did it cost?
  • What was the actual response?
This integration gives you full transparency, with zero boilerplate.

Example

import lucidicai as lai
from openai import OpenAI

# Just initialize - that's it!
lai.init(providers=["openai"])

client = OpenAI()

# Steps and events are automatically created
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Session auto-ends when script exits

Streaming Example

import lucidicai as lai
from openai import OpenAI

lai.init(providers=["openai"])

client = OpenAI()

# Streaming responses are also tracked automatically
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Explicit Step Management

import lucidicai as lai
from openai import OpenAI

lai.init(
    session_name="chatbot_run",
    providers=["openai"]  # API key and agent ID from env vars
)

client = OpenAI()

lai.create_step(state="Research", goal="Generate research questions")

# All LLM calls are automatically tracked as events within this step
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What are key areas in AI safety research?"}]
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What are the main challenges in each area?"}]
)

lai.end_step()

Notes

  • All OpenAI client methods are instrumented (chat completions, embeddings, etc.); see the embeddings sketch after these notes
  • Both sync and async methods are supported
  • If no step exists when an LLM call is made, Lucidic automatically creates one
  • You can always add custom events using create_event() for additional context
  • Works with the latest OpenAI Python SDK (v1.0+)
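For completeness, a minimal embeddings sketch: because all OpenAI client methods are instrumented, an embeddings.create call is logged as an Event the same way a chat completion is. The model name here is just a standard OpenAI embedding model chosen for illustration, not something this integration requires.

import lucidicai as lai
from openai import OpenAI

lai.init(providers=["openai"])

client = OpenAI()

# Embedding calls are logged as Events just like chat completions
embedding = client.embeddings.create(
    model="text-embedding-3-small",
    input="What are key areas in AI safety research?"
)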