Code Setup (Part 2)
Learn how to install LucidicAI and set up Steps, Events, and Sessions.
🧠 Note: Setup has two parts — (1) connecting your agent to the Lucidic Dashboard, and (2) installing LucidicAI and setting up steps, events, and sessions in your code (below).
📦 Install the SDK
Our package is published on PyPI under the name `lucidicai`.
We recommend installing it into a virtual environment or as part of your existing agent stack.
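For example, with pip (inside an activated virtual environment):

```bash
pip install lucidicai
```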
🧪 Minimum Requirements
- Python 3.8+
- Compatible with all major LLM frameworks and agent libraries
- Works in local, server, or cloud environments
🐍 Using the Python SDK
Once you’ve installed Lucidic and connected it to the dashboard, you’re ready to start recording your agent’s behavior, step by step.
This guide walks you through how to initialize a session, track steps and events, and optionally pull prompts from our Prompt DB.
🚀 Step 1: Create a Session
Creating a session requires calling the `init` function on the `lai` object: `lai.init()`. The initialization is parameterized by the following:
| Name | Type | Description |
|---|---|---|
| `session_name` | `str` | A name for your session – whatever you want (e.g. `"Search Wikipedia"`) |
| `provider` | `str` | The LLM provider you're using – `"openai"`, `"anthropic"`, or `"langchain"` |
| `lucidic_api_key` | `str` (optional if set as environment variable) | Your Lucidic API key – you should have this copied somewhere safe |
| `agent_id` | `str` (optional if set as environment variable) | Your Lucidic Agent ID – you can get this from the dashboard |
| `task` | `str` (optional) | The task your agent is performing. Must be specified to show up on the dashboard (e.g. `"Search Wikipedia for Steve Jobs biography"`) |
| `mass_sim_id` | `str` (optional) | The mass simulation ID your agent is part of (e.g. `"8021bc11-20f3-48cb-8717-6ef9017e3663"`) |
| `rubrics` | `list[str]` (optional) | The rubrics you want to use for evaluation (e.g. `["default", "custom_score"]`) |
| `tags` | `list[str]` (optional) | A list of tags to categorize this session. |
1) If you DO NOT have your Lucidic API Key and Lucidic Agent ID in a `.env` file.
You must provide the `session_name`, `lucidic_api_key`, `agent_id`, and `provider` parameters to `lai.init()`.
Full version of init:
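A minimal sketch, assuming the package is imported as `lai` and the key, agent ID, and task values below are placeholders you replace with your own:

```python
import lucidicai as lai

# Start a session, passing credentials explicitly since they are not in a .env file
lai.init(
    session_name="Search Wikipedia",
    provider="openai",
    lucidic_api_key="your-lucidic-api-key",   # placeholder
    agent_id="your-lucidic-agent-id",          # placeholder
    task="Search Wikipedia for Steve Jobs biography",  # optional but recommended
)
```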
We recommend moving the `LUCIDIC_API_KEY` and `LUCIDIC_AGENT_ID` to environment variables. To do this, create a `.env` file in the root of your project and add the following. Most of you will already have a `.env` file (it is where you would keep any API key) and will just have to add a Lucidic API Key and a Lucidic Agent ID.
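For example (the values are placeholders for your own key and agent ID):

```
LUCIDIC_API_KEY=your-lucidic-api-key
LUCIDIC_AGENT_ID=your-lucidic-agent-id
```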
Or you can set the environment variables in your terminal:
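For example, in a bash-style shell (values are placeholders):

```bash
export LUCIDIC_API_KEY=your-lucidic-api-key
export LUCIDIC_AGENT_ID=your-lucidic-agent-id
```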
2) If you DO have your Lucidic API Key and Lucidic Agent ID in a `.env` file.
You will just need the `session_name` and `provider`.
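A minimal sketch, assuming the package is imported as `lai` and `LUCIDIC_API_KEY` / `LUCIDIC_AGENT_ID` are already available as environment variables (e.g. loaded from your `.env`):

```python
import lucidicai as lai

# Credentials are read from the environment, so only the basics are needed here
lai.init(
    session_name="Search Wikipedia",
    provider="openai",
)
```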
🧭 Step 2: Create Steps
Each step should represent a meaningful action or decision point.
Use `lai.create_step()` and `lai.end_step()` around your step logic. `lai.create_step()` creates a step and allows you to attach information to that step. `lai.end_step()` ends the step and optionally allows you to either adjust or add information to the created step.
`lai.create_step()` and `lai.end_step()` are both parameterized by the following. All fields are optional, but we recommend setting as many of them as possible to improve debugging, evaluation, and traceability.
| Name | Type | Description |
|---|---|---|
| `state` | `str` (optional) | A short description of the current environment or UI (e.g., page title, visible content, or system state). |
| `action` | `str` (optional) | What the agent did in this step (e.g., "clicked submit", "filled out form"). |
| `goal` | `str` (optional) | What the agent intended to accomplish (e.g., "navigate to checkout", "extract user name"). |
| `screenshot` | `str` (optional) | Visual context for this step, either as a base64-encoded image or a file path. |
| `eval_score` | `float` (optional) | A step-level rating (e.g. `5`) to assess performance at that point. |
| `eval_description` | `str` (optional) | An explanation justifying the `eval_score`, i.e. why your agent performed well or poorly (e.g. "Agent performed well because it successfully extracted the username"). |
You can pass information into `lai.end_step()` if anything has changed since `lai.create_step()` was called (but this is not necessary; you can just call `lai.end_step()`).
Only one step can be active at a time, so be sure to end a step before creating a new one.
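A minimal sketch of wrapping one step, assuming a session has already been started with `lai.init()`; the state, action, and goal strings are illustrative:

```python
# Open a step before the agent acts
lai.create_step(
    state="Wikipedia home page",
    action="Typing 'Steve Jobs' into the search box",
    goal="Find the Steve Jobs biography article",
)

# ... run your agent logic for this step ...

# Close the step; pass fields here only if something changed or should be added
lai.end_step(eval_score=5, eval_description="Reached the correct article")
```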
⚙️ (Optional) Step Updates
You can update steps mid-run (`lai.update_step()`) or retroactively (`lai.update_previous_step()`).
These functions can only be called INSIDE of a `lai.create_step()` and `lai.end_step()` block.
The parameters are exactly the same as `lai.create_step()`. You would use this if you want to add or adjust any information after a step has been created.
If you want to update the current step:
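A minimal sketch, assuming it is called between `lai.create_step()` and `lai.end_step()`:

```python
# Adjust the current (still-open) step after new information comes in
lai.update_step(
    action="Clicked the first search result",
    goal="Open the Steve Jobs biography article",
)
```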
If you want to update a previous step, you can use `lai.update_previous_step()`. This function includes an `index` parameter that allows you to specify which step to update:
| Name | Type | Description |
|---|---|---|
| `index` | `int` | The index of the step to update (e.g. `-1` is the previous step, `-2` is the step before that, etc.). |
Example: Step 1 → Step 2 → Step 3 (ongoing and current step)
If you want to update Step 2, you would use `lai.update_previous_step(index=-1)`; if you want to update Step 1, you would use `lai.update_previous_step(index=-2)`.
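For instance, a sketch that retroactively adds an evaluation to Step 2 while Step 3 is still open (the field values are illustrative):

```python
# Called inside the Step 3 create_step/end_step block
lai.update_previous_step(
    index=-1,  # Step 2
    eval_score=3,
    eval_description="Step 2 took an unnecessary detour before finding the search box",
)
```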
Note: If you try to update a step that doesn’t exist, it will raise an error.
🎗️ Step 3: Create Events
If your agent has tool calls or LLM usage, track them as events within each step.
If you only have LLM calls with a supported provider, you don't need to create events for them! Lucidic will automatically create events for you. For instance, if you set `provider="openai"` in `lai.init()`, Lucidic will automatically create events for any OpenAI calls.
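As a sketch of what that looks like in practice (assuming the official `openai` Python client and a model name of your choice), no extra Lucidic calls are needed around the LLM request:

```python
from openai import OpenAI

client = OpenAI()

# Because provider="openai" was set in lai.init(), this call is
# automatically recorded as an event on the current step.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any model you normally use
    messages=[{"role": "user", "content": "Summarize the Steve Jobs article."}],
)
```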
`lai.create_event()` and `lai.end_event()` are both parameterized by the following. All of these parameters are optional, but we recommend setting as many as possible to improve debugging, evaluation, and traceability.
| Name | Type | Description |
|---|---|---|
| `description` | `str` (optional) | A detailed explanation of the event (e.g., "clicked submit", "completed form entry"), or any relevant input, context, or memory for your agent. This field is often used to capture user actions, LLM prompts, document processing inputs, or data retrieval details. Include any information you consider important for your agent. |
| `result` | `str` (optional) | The result/output of the event. For example, if you are doing document processing, this could be the result of that processing; if you make an API call, this could be the API response. |
| `model` | `str` (optional) | Events are often used with LLMs, so if you have a custom LLM call you can set the model name here. |
| `cost_added` | `float` (optional) | The cost of the event. |
| `screenshots` | `list[str]` (optional) | A list of screenshots of the event (either a list of base64-encoded images or a list of image URL paths). |
You can pass information into `lai.end_event()` if anything has changed since `lai.create_event()` was called (but this is not necessary; you can just call `lai.end_event()`).
Note: All fields will be overwritten EXCEPT for `screenshots` and `cost_added`, which are appended/added.
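A minimal sketch of a manually tracked event inside a step; the helper function is hypothetical and stands in for your own tool-call logic:

```python
# Record a non-LLM tool call as an event
lai.create_event(description="Fetching article text via the Wikipedia REST API")

article_text = fetch_wikipedia_article("Steve Jobs")  # hypothetical helper of your own

# Close the event and attach its result
lai.end_event(result=f"Retrieved {len(article_text)} characters of article text")
```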
Reminder: LLM calls from supported providers like OpenAI and Anthropic are automatically captured into events when you set the `provider` in `init()`.
🧼 Step 4: End the Session
Your session will automatically end when the script exits (via an `atexit` handler), but you can also end it manually with `lai.end_session()` if you would like to add in evals:
| Name | Type | Description |
|---|---|---|
| `is_successful` | `bool` (optional) | Whether the session was successful or not. |
| `is_successful_reason` | `str` (optional) | A short description of why the session was successful or not. |
| `session_eval` | `float` (optional) | A session-level rating (e.g. `5`) to assess overall performance. |
| `session_eval_reason` | `str` (optional) | An explanation justifying the `session_eval`, i.e. why your agent performed well or poorly. |
All fields are optional — if you don’t provide them, Lucidic will automatically evaluate the session for you.
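A minimal sketch of ending the session manually with evals (the values are illustrative):

```python
lai.end_session(
    is_successful=True,
    is_successful_reason="Found and summarized the Steve Jobs biography",
    session_eval=4.5,
    session_eval_reason="Completed the task but took one extra navigation step",
)
```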
📊 Additional Information: Custom Evaluation Rubrics
When ending a session, you can provide your own evaluation metrics to better assess your agent’s performance:
- If you don’t provide evaluation metrics, Lucidic will automatically evaluate your session using default rubrics.
- If you provide custom evaluation metrics, Lucidic will use your metrics instead of the default ones.
We recommend creating your own rubric for custom tasks to ensure the evaluation aligns with your specific requirements:
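For example, one place to select rubrics is the `rubrics` parameter on `lai.init()` (a sketch; the rubric names below are the placeholder values shown in the init table above):

```python
# Select which rubrics evaluate this session
lai.init(
    session_name="Search Wikipedia",
    provider="openai",
    rubrics=["default", "custom_score"],  # placeholder rubric names
)
```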
Custom rubrics allow you to:
- Define task-specific success criteria
- Implement domain-specific evaluation metrics
- Compare agent performance across different versions using consistent metrics
- Track improvements over time with metrics tailored to your use case
Learn more about creating and using custom rubrics in the Evaluation Rubrics documentation.
🧠 Why Both `is_successful` and `session_eval`?
We support two types of evaluations so you can keep things simple or add more detail:
- `is_successful`: A binary flag — did the agent complete the task?
- `session_eval`: A more detailed score (e.g. 1–5) showing how well the agent did.
If you only want to pass in one value (like a numeric score), you can compute the other yourself:
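A minimal sketch of deriving the binary flag from a numeric score; the threshold is your own choice:

```python
session_eval = 4.0                      # e.g. produced by your own scoring logic
is_successful = session_eval >= 3.0     # your own success threshold

lai.end_session(is_successful=is_successful, session_eval=session_eval)
```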
This gives you full control and lets you keep your logic consistent with your own success criteria.
📥 Step 5: (Optional) Pull Prompts from Prompt DB
Use Lucidic’s Prompt DB so you never have to go to your code to make a prompt change again! See here for more details.
Pull prompts from our prompt database in your code, change the content in the dashboard, and see them automatically update on your next agent run! We support prompt versioning and labels so you can have different versions of a prompt for different use cases (e.g. production, testing, etc.).
| Name | Type | Description |
|---|---|---|
| `prompt_name` | `str` | The name of the prompt to fetch. |
| `label` | `str` (optional) | The label of the prompt to fetch. Defaults to `production`. |
| `variables` | `dict` (optional) | A dictionary of string key/values to interpolate into the prompt. |
| `cache_ttl` | `int` (optional) | The time-to-live for the cached prompt, in seconds. Defaults to `300` seconds (`0` = no cache, `-1` = cache forever, `n` = cache for `n` seconds). |
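A minimal sketch of fetching a prompt, assuming a fetch helper such as `lai.get_prompt()` (the exact function name may differ; check the Prompt DB documentation) and placeholder prompt name and variables:

```python
# Fetch the "production" version of a prompt and fill in its {{variables}}
prompt = lai.get_prompt(                   # function name assumed; see Prompt DB docs
    prompt_name="wikipedia_search",        # placeholder prompt name
    variables={"topic": "Steve Jobs"},     # replaces {{topic}} in the prompt
    label="production",
    cache_ttl=300,
)
```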
Note: `cache_ttl` is only relevant if you are going to change your prompts during an ongoing session. All prompts are fetched at the beginning of a session. The `cache_ttl` determines how long the prompt will be cached (i.e. if you set `cache_ttl=60`, the prompt will be cached for 60 seconds, and if you make a change during this time, it will not be fetched until the `cache_ttl` expires).
Additional notes:
- `{{variable}}` placeholders will be replaced using your dictionary
- Missing keys will raise an error
- Unreplaced variables will trigger a warning
- Read more at Prompt DB