Using the Python SDK
Once you’ve installed Lucidic and connected it to the dashboard, you’re ready to start recording your agent’s behavior, step by step. This guide walks you through how to initialize a session, track steps and events, and optionally pull prompts from our Prompt DB.
Step 1: Create a Session
There are 4 things you need to provide to initialize a session:
- session_name: A name for your session (whatever you want to name it)
- api_key: Your Lucidic API key (you should have this copied somewhere safe)
- agent_id: Your Lucidic Agent ID (you can get this from the dashboard)
- provider: The LLM provider you’re using (OpenAI, Anthropic, PydanticAI, or LangChain)
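For example, assuming the SDK is imported as lucidicai (adjust the import and keyword names to match your installed version), initialization is a single call:

```python
import lucidicai as lai  # assumed import name; adjust to match your install

# Minimal sketch: initialize a session with the four values listed above.
lai.init(
    session_name="checkout-agent-run",   # whatever you want to name it
    api_key="YOUR_LUCIDIC_API_KEY",      # or load from your .env (see below)
    agent_id="YOUR_LUCIDIC_AGENT_ID",    # from the dashboard
    provider="openai",                   # OpenAI, Anthropic, PydanticAI, or LangChain
)
```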
We recommend adding LUCIDIC_API_KEY and LUCIDIC_AGENT_ID as environment variables. The way to do this is to create a .env file in the root of your project and add the following. Most of you will already have a .env file (it is where you would keep any API keys) and will just have to add a Lucidic API key and a Lucidic Agent ID.
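For example, the two entries in your .env would look like this (placeholders shown; use your own values):

```
LUCIDIC_API_KEY=<your-lucidic-api-key>
LUCIDIC_AGENT_ID=<your-lucidic-agent-id>
```

With these set, you can typically omit api_key and agent_id from init(); check the SDK reference for exactly how environment variables are picked up.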
Step 2: Create Steps
Use create_step() and end_step() around your step logic. Each step should represent a meaningful action or decision point.
Only one step can be active at a time, so be sure to end a step before creating a new one.
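Here is a minimal sketch of that pattern (same assumed lucidicai import as above):

```python
lai.create_step()  # start a new step before the step logic runs

# ... your agent's logic for this step (navigate, call a tool, reason, etc.) ...

lai.end_step()     # end the current step before creating the next one
```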
Optional Fields You Can Attach to a Step
You can enrich any step with the following optional fields to improve debugging, evaluation, and traceability (see the example below this list):
- state – A short description of the current environment or UI (e.g., page title, visible content, or system state).
- action – What the agent did in this step (e.g., “clicked submit”, “filled out form”).
- goal – What the agent intended to accomplish (e.g., “navigate to checkout”, “extract user name”).
- eval_score – A step-level rating (like "5" or "pass") to assess performance at that point.
- eval_description – An explanation justifying the eval_score, useful for audits or structured reviews.
- screenshot or screenshot_path – Visual context for this step, either as a base64-encoded image or a file path.
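Putting a few of these fields together, a step might be recorded like the sketch below. Only the field names come from the list above; the values are illustrative, and whether a given field is passed to create_step() or end_step() may differ in your SDK version.

```python
lai.create_step(
    state="Checkout page with 2 items in the cart",
    action="clicked submit",
    goal="navigate to checkout",
)

# ... step logic ...

lai.end_step(
    eval_score="pass",
    eval_description="Reached the order confirmation page",
    screenshot_path="screenshots/checkout.png",
)
```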
Step Updates
You can update steps mid-run or retroactively:
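As a rough sketch only: if your SDK version exposes an update helper, a mid-run update might look like the following (the update_step() name is hypothetical; check the SDK reference for the actual call).

```python
# Hypothetical helper name, shown for illustration only.
lai.update_step(
    state="Form submitted, waiting for confirmation",
    eval_score="4",
)
```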
Step 3: Create Events
If your agent has tool calls or LLM usage, track them as events within each step. If you only have LLM calls with a supported provider, you don’t need to create events for them! Lucidic will automatically create events for you.
All of the following fields are optional, but we recommend setting at least description and result (see the example after the list).
- description – A detailed explanation of the event (e.g., “clicked submit”, “completed form entry”), or any relevant input, context, or memory for your agent. This field is often used to capture user actions, LLM prompts, document processing inputs, or data retrieval details. Include any information you consider important for your agent.
- is_successful – A boolean indicating whether the event was successful.
- result – The result or output of the event. For example, if you are doing document processing, this could be the extracted output; if you make an API call, this could be the API response.
- is_finished – A boolean indicating whether the event is finished. This is automatically set to true when the event is ended.
- model – Events are often used with LLMs, so if you have a custom LLM call you can set the model name here.
- cost – If you have a custom API call or LLM call, you can set the cost here.
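As a sketch, a manually tracked event might look like this. The field names match the list above; the create_event()/end_event() names are assumptions, so check the SDK reference for the exact calls.

```python
lai.create_event(
    description="Parsed the uploaded invoice PDF and extracted line items",
)

# ... tool call, document processing, or custom LLM call ...

lai.end_event(
    result="12 line items extracted",
    is_successful=True,
    model="my-custom-model",  # only relevant for custom LLM calls
    cost=0.0042,              # only if you track cost for custom calls
)
```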
LLM calls from supported providers like OpenAI and Anthropic are automatically captured into events when you set the provider in init().
Step 4: (Optional) Pull Prompts from Prompt DB
Use Lucidic’s Prompt DB to version and serve prompts dynamically (see the example below):
- prompt_name: Prompt name
- label: Label to fetch (defaults to production)
- variables: Dictionary of string key/values to interpolate
- cache_ttl: 0 = no cache, -1 = cache forever, n = cache for n seconds
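A sketch of fetching a prompt with these parameters (the get_prompt() name is an assumption; the parameter names are the ones listed above):

```python
prompt = lai.get_prompt(
    prompt_name="summarize_document",
    label="production",                    # defaults to production
    variables={"doc_title": "Q3 report"},  # fills {{doc_title}} in the prompt
    cache_ttl=300,                         # cache the prompt for 300 seconds
)
```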
- {{variable}} placeholders will be replaced using your dictionary
- Missing keys will raise an error
- Unreplaced variables will trigger a warning
- Read more at Prompt DB
Step 5: End the Session
Your session will automatically end when the script exits (via an atexit handler), but you can also end it manually if you would like to add in evals:
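For example (the end_session() name is an assumption; is_successful and session_eval are the evaluation fields explained below):

```python
lai.end_session(
    is_successful=True,  # binary: did the agent complete the task?
    session_eval="4",    # more detailed score, e.g. on a 1-5 scale
)
```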
Custom Evaluation Rubrics
When ending a session, you can provide your own evaluation metrics to better assess your agent’s performance:
- If you don’t provide evaluation metrics, Lucidic will automatically evaluate your session using default rubrics.
- If you provide custom evaluation metrics, Lucidic will use your metrics instead of the default ones.
Custom evaluation metrics allow you to:
- Define task-specific success criteria
- Implement domain-specific evaluation metrics
- Compare agent performance across different versions using consistent metrics
- Track improvements over time with metrics tailored to your use case
Why Both is_successful and session_eval?
We support two types of evaluations so you can keep things simple or add more detail:
- is_successful: A binary flag indicating whether the agent completed the task.
- session_eval: A more detailed score (e.g. 1–5) showing how well the agent did.