Learn how to instrument your agent with Lucidic using the Python SDK.
Start by calling `init()` with the following parameters:

- `session_name`: A name for your session (whatever you want to name it).
- `lucidic_api_key`: Your Lucidic API key (you should have this copied somewhere safe).
- `agent_id`: Your Lucidic agent ID (you can get this from the dashboard).
- `provider`: The LLM provider you're using (OpenAI, Anthropic, PydanticAI, or LangChain).

Instead of passing the keys directly, you can add `LUCIDIC_API_KEY` and `LUCIDIC_AGENT_ID` as environment variables. To do this, create a `.env` file in the root of your project and add the following. Most projects already have a `.env` file (it is where you keep any API keys), so you will just need to add your Lucidic API key and agent ID.
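For example, the `.env` file could look like this (the values are placeholders):

```
LUCIDIC_API_KEY=your-lucidic-api-key
LUCIDIC_AGENT_ID=your-lucidic-agent-id
```

With that in place, a minimal initialization sketch looks like the following. The import name `lucidicai` (aliased as `lai`) is an assumption here; check your installed SDK for the exact module name.

```python
import lucidicai as lai  # assumed import name; adjust to your installed SDK

lai.init(
    session_name="checkout-agent-run",  # whatever you want to name it
    provider="openai",                  # one of the supported providers listed above
    # lucidic_api_key and agent_id can be passed here explicitly,
    # or picked up from LUCIDIC_API_KEY / LUCIDIC_AGENT_ID in your environment
)
```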
Wrap `create_step()` and `end_step()` around your step logic. Each step should represent a meaningful action or decision point. Only one step can be active at a time, so be sure to end a step before creating a new one. Steps accept the following parameters (see the sketch after the list):
- `state` – A short description of the current environment or UI (e.g., page title, visible content, or system state).
- `action` – What the agent did in this step (e.g., “clicked submit”, “filled out form”).
- `goal` – What the agent intended to accomplish (e.g., “navigate to checkout”, “extract user name”).
- `eval_score` – A step-level rating (like `"5"` or `"pass"`) to assess performance at that point.
- `eval_description` – An explanation justifying the `eval_score`, useful for audits or structured reviews.
- `screenshot` or `screenshot_path` – Visual context for this step, either as a base64-encoded image or a file path.
If you only have LLM calls with a supported provider you don’t need to create events for them! Lucidic will automatically create events for you.

All of the event parameters below are optional, but we recommend setting at least `description` and `result`.
- `description` – A detailed explanation of the event (e.g., “clicked submit”, “completed form entry”), or any relevant input, context, or memory for your agent. This field is often used to capture user actions, LLM prompts, document-processing inputs, or data-retrieval details. Include any information you consider important for your agent.
- `is_successful` – A boolean indicating whether the event was successful.
- `result` – The result or output of the event. For example, if you are doing document processing, this could be the result of that processing; if you make an API call, this could be the API response.
- `is_finished` – A boolean indicating whether the event is finished. This is automatically set to true when the event is ended.
- `model` – Events are often used for LLM calls, so if you make a custom LLM call you can set the model name here.
- `cost` – If you make a custom API call or LLM call, you can set its cost here.

LLM calls from supported providers like OpenAI and Anthropic are automatically captured into events when you set the `provider` in `init()`.
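For events you create yourself (for example, a document-processing step or a custom API call), a sketch is below. The helper names `create_event()` and `end_event()` are assumptions; check your SDK for the exact functions:

```python
# function names below are assumptions; verify against your SDK version
lai.create_event(
    description="Parsed the uploaded invoice PDF",  # input/context for the event
)

# ... do the work the event describes ...

lai.end_event(
    is_successful=True,
    result="Extracted 12 line items and the billing address",
    cost=0.004,  # only needed for custom API/LLM calls
)
```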
To fetch a prompt, pass the following parameters:

- `prompt_name`: Prompt name.
- `label`: Label to fetch (defaults to `production`).
- `variables`: Dictionary of string key/values to interpolate.
- `cache_ttl`: `0` = no cache, `-1` = cache forever, `n` = cache for `n` seconds.

`{{variable}}` placeholders in the prompt will be replaced using your dictionary.
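A sketch of fetching a prompt with these parameters; the helper name `get_prompt()` is an assumption, so check your SDK for the exact function:

```python
prompt = lai.get_prompt(  # assumed function name
    prompt_name="support-triage",
    label="production",                  # label to fetch, defaults to production
    variables={"customer_name": "Ada"},  # fills {{customer_name}} in the prompt text
    cache_ttl=300,                       # 0 = no cache, -1 = cache forever, n = n seconds
)
```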
Your session is ended automatically when your program exits (via an `atexit` handler), but you can also end it manually if you would like to add in evals:
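A sketch of ending the session manually with evals, assuming the helper is named `end_session()` (the function name is an assumption; check your SDK):

```python
lai.end_session(  # assumed function name
    is_successful=True,  # binary flag: did the agent complete the task?
    session_eval=4,      # more detailed score, e.g. 1-5
)
```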
What are `is_successful` and `session_eval`?

- `is_successful`: A binary flag — did the agent complete the task?
- `session_eval`: A more detailed score (e.g. 1–5) showing how well the agent did.