```python
import lucidicai as lai
from langchain_openai import ChatOpenAI

lai.init(providers=["langchain"])

llm = ChatOpenAI(model="gpt-4")

# Streaming responses are also tracked automatically
for chunk in llm.stream("Tell me a story"):
    print(chunk.content, end="")
```
```python
import lucidicai as lai
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

lai.init(
    session_name="langchain_research",
    providers=["langchain"]  # API key and agent ID from env vars
)

llm = ChatOpenAI(model="gpt-4")

lai.create_step(state="Research", goal="Generate research ideas")

# All LLM calls are automatically tracked as events within this step
response = llm.invoke([
    SystemMessage(content="You are a research assistant."),
    HumanMessage(content="What are 5 important areas in climate science?")
])

response = llm.invoke([
    HumanMessage(content="What methodologies are used to study each area?")
])

lai.end_step()
```
For some complex agent setups, you may need to manually attach a handler:
```python
import lucidicai as lai
from lucidicai.telemetry.otel_handlers import OTelLangChainHandler

# Get the handler from the SDK
handler = lai.Client().get_provider("langchain")

# Manually attach to your agent if needed
if hasattr(agent, 'callbacks'):
    agent.callbacks.append(handler)
```
In most cases, automatic instrumentation will work without manual attachment.