OpenLIT SDK - Setup Guide for Python
This guide walks you through connecting the OpenLIT SDK to Middleware, enabling you to observe Python LLM workloads from end to end. The key idea: initialize OpenLIT, pass your Middleware endpoint and headers, then run at least one real LLM call to see data in LLM Observability.
Before you begin
- Have credentials handy: your Middleware UID (for the OTLP endpoint) and Middleware API key (for the Authorization header).
- Two setup modes: pass config as function arguments or set it via environment variables (both shown below).
- Metrics toggle: you can disable metrics with disable_metrics=True or OPENLIT_DISABLE_METRICS=true.
1. Install the SDK
Run this in your terminal to add OpenLIT:
pip install openlit
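To confirm the package installed correctly, you can print its version from Python. This is a quick sanity check, assuming Python 3.8+ (for importlib.metadata):

from importlib.metadata import version

# Report the installed openlit version; raises PackageNotFoundError if the
# install failed or ran against a different environment.
print(version("openlit"))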
2. Initialize the SDK
You can configure OpenLIT via function arguments or environment variables. Pick one consistent approach from the following.
A. Setup using function arguments
In your LLM application, initialize OpenLIT with your tenant endpoint and headers:
import openlit

openlit.init(
    otlp_endpoint="https://<MW_UID>.middleware.io:443",
    application_name="YOUR_APPLICATION_NAME",
    otlp_headers={
        "Authorization": "<MW_API_KEY>",
        "X-Trace-Source": "openlit",
    },
)
B. Setup using environment variables
First, export the required variables so OpenLIT can auto-read them:
export OPENLIT_OTLP_ENDPOINT="https://<MW_UID>.middleware.io:443"
export OPENLIT_APPLICATION_NAME="YOUR_APPLICATION_NAME"
export OPENLIT_OTLP_HEADERS='{"Authorization": "<MW_API_KEY>", "X-Trace-Source": "openlit"}'
Then, initialize OpenLIT in code without parameters (it will use the env vars):
import openlit

openlit.init()
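If exporting shell variables isn't convenient (for example, in a notebook), the same configuration can be set in-process before calling openlit.init(). This is a sketch using the same placeholder values as section A:

import os

# Same configuration as the shell exports above, set before init() runs.
os.environ["OPENLIT_OTLP_ENDPOINT"] = "https://<MW_UID>.middleware.io:443"
os.environ["OPENLIT_APPLICATION_NAME"] = "YOUR_APPLICATION_NAME"
os.environ["OPENLIT_OTLP_HEADERS"] = (
    '{"Authorization": "<MW_API_KEY>", "X-Trace-Source": "openlit"}'
)

import openlit

openlit.init()  # picks up the OPENLIT_* variables set above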
Note: Metrics collection can be disabled either by passing disable_metrics=True to init or by setting OPENLIT_DISABLE_METRICS=true.
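For example, to keep traces flowing but turn metrics off, pass the flag to init. A minimal sketch using the same placeholders as above:

import openlit

# Traces are still exported; metrics collection is skipped.
openlit.init(
    otlp_endpoint="https://<MW_UID>.middleware.io:443",
    application_name="YOUR_APPLICATION_NAME",
    otlp_headers={
        "Authorization": "<MW_API_KEY>",
        "X-Trace-Source": "openlit",
    },
    disable_metrics=True,
)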
3. Use the SDK
Here’s a complete example of OpenLIT monitoring an OpenAI call. Use it as a template for your first request:
from openai import OpenAI
import openlit

openlit.init(
    otlp_endpoint="https://<MW_UID>.middleware.io:443",
    application_name="YOUR_APPLICATION_NAME",
    otlp_headers={
        "Authorization": "<MW_API_KEY>",
        "X-Trace-Source": "openlit",
    },
)

client = OpenAI(
    api_key="YOUR_OPENAI_KEY"
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is LLM Observability?",
        }
    ],
    model="gpt-3.5-turbo",
)
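To also see the model's reply locally, print the response content (standard OpenAI SDK field access):

# Print the assistant's reply from the completion above.
print(chat_completion.choices[0].message.content)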
Viewing your traces and metrics
After initialization and at least one LLM request, open the LLM Observability section in Middleware to see traces and metrics for your Python service.
For advanced options, refer to the OpenLIT Python SDK docs.
Troubleshooting
- No data showing up: Make sure an actual LLM request was executed after openlit.init(...), and that your service can reach https://<MW_UID>.middleware.io:443 (see the reachability check after this list).
- Auth/tenant issues: Re-check the exact Authorization value (your Middleware API key) and the UID in otlp_endpoint.
- Env-var setup not picked up: Confirm the variable names (OPENLIT_OTLP_ENDPOINT, OPENLIT_APPLICATION_NAME, OPENLIT_OTLP_HEADERS), then call openlit.init() with no arguments.
- Metrics missing only: If traces appear but metrics don't, verify you haven't set disable_metrics=True or OPENLIT_DISABLE_METRICS=true unintentionally.
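If you suspect a network problem, a quick TCP reachability check against your tenant endpoint can narrow it down. A sketch; substitute your actual <MW_UID>:

import socket

host = "<MW_UID>.middleware.io"  # replace with your Middleware UID host

# Attempt a plain TCP connection to the OTLP port before debugging the SDK.
try:
    with socket.create_connection((host, 443), timeout=5):
        print(f"{host}:443 is reachable")
except OSError as exc:
    print(f"cannot reach {host}:443: {exc}")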
Need assistance or want to learn more about using OpenLIT with Middleware? Refer to the OpenLIT documentation or contact our support team at [email protected].