OpenTelemetry Python Instrumentation
This guide shows you how to instrument your Python application with OpenTelemetry and send traces to Middleware. The auto-instrumentation approach works with Django, Flask, FastAPI, Falcon, Celery, and most Python libraries out of the box.
Prerequisites
- Python 3.8 or newer
- Middleware account & project: your Middleware UID (https://<uid>.middleware.io) and API key (MW_API_KEY)
- Network access to https://<uid>.middleware.io:443 for OTLP/gRPC or OTLP/HTTP
For a deeper overview of Middleware's OTLP endpoints and headers, see the OpenTelemetry getting started guide.
Quick start (Linux / macOS / VM)
The steps below cover a typical app-on-a-VM (or bare-metal) setup. Kubernetes, Docker, and Windows use the same core commands but different ways of setting environment variables.
1. Set environment variables
Choose a transport. gRPC is the most common default; HTTP/protobuf is useful when gRPC is blocked by proxies or network policy.
Configure OpenTelemetry to send data directly to Middleware using OTLP/gRPC:
export OTEL_SERVICE_NAME="my-python-service"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<MW_UID>.middleware.io:443"
export OTEL_EXPORTER_OTLP_HEADERS="authorization=<MW_API_KEY>"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
# optional: add custom resource attributes
export OTEL_RESOURCE_ATTRIBUTES="mw.resource.type=custom"
# optional: disable metrics if needed
export OTEL_METRICS_EXPORTER="none"

Configure OpenTelemetry to send data directly to Middleware using OTLP/HTTP:
export OTEL_SERVICE_NAME="my-python-service"
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://<MW_UID>.middleware.io:443/v1/traces"
export OTEL_EXPORTER_OTLP_HEADERS="authorization=<MW_API_KEY>"
export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="http/protobuf"
# optional: add custom resource attributes
export OTEL_RESOURCE_ATTRIBUTES="mw.resource.type=custom"
# optional: disable metrics if needed
export OTEL_METRICS_EXPORTER="none"

If you use /v1/traces, you must use an OTLP/HTTP exporter (http/protobuf). For OTLP/gRPC exporters, use the root endpoint (:443) without /v1/*.
Replace:
- <MW_UID> with your Middleware project UID.
- <MW_API_KEY> with your Middleware API key.
- <service-name> with something meaningful (for example, payments-api).
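Values such as OTEL_RESOURCE_ATTRIBUTES and OTEL_EXPORTER_OTLP_HEADERS are comma-separated key=value lists. As a rough sketch of how an SDK splits such a value (the helper name here is illustrative, not an OpenTelemetry API):

```python
def parse_kv_list(raw: str) -> dict:
    """Split a comma-separated key=value list, e.g. an OTEL_* env value."""
    attrs = {}
    for pair in raw.split(","):
        key, sep, value = pair.partition("=")
        if sep:  # skip malformed entries that have no '='
            attrs[key.strip()] = value.strip()
    return attrs

print(parse_kv_list("mw.resource.type=custom,service.version=1.2.0"))
# {'mw.resource.type': 'custom', 'service.version': '1.2.0'}
```

Keys and values are taken verbatim, so make sure there are no stray quotes or spaces around the `=` in your exported variables.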
OpenTelemetry auto-instruments popular HTTP clients and other libraries. If metrics are enabled, every outgoing HTTP call can produce metric data. Starting with OTEL_METRICS_EXPORTER=none keeps volume low while you validate traces. You can switch it to otlp later when you are ready to send metrics.
Optional (recommended when you export logs): enable trace context injection into Python logs:
export OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true
export OTEL_PYTHON_LOG_CORRELATION=true

2. Install OpenTelemetry packages
Install the core OTel distribution and OTLP exporter:
pip install opentelemetry-distro opentelemetry-exporter-otlp

3. Install instrumentation for your dependencies
Detect installed libraries (web frameworks, DB clients, HTTP clients, etc.) and add matching instrumentation packages:
opentelemetry-bootstrap --action=install

Run this after installing your application's dependencies. Only already-installed libraries are instrumented.
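Conceptually, opentelemetry-bootstrap scans the environment's installed distributions and maps each known library to its instrumentation package. A simplified, stdlib-only sketch of that detection step (the mapping shown is a small illustrative subset, not the real bootstrap table):

```python
from importlib.metadata import distributions

# Tiny illustrative subset of the library -> instrumentation-package mapping.
INSTRUMENTATION_MAP = {
    "flask": "opentelemetry-instrumentation-flask",
    "requests": "opentelemetry-instrumentation-requests",
    "celery": "opentelemetry-instrumentation-celery",
}

# Collect the names of every distribution installed in this environment.
installed = {(dist.metadata["Name"] or "").lower() for dist in distributions()}

# Only libraries you actually have get a matching instrumentation package.
to_install = sorted(pkg for lib, pkg in INSTRUMENTATION_MAP.items() if lib in installed)
print(to_install)
```

This is why the order matters: if you add a new dependency later, re-run opentelemetry-bootstrap so its instrumentation gets picked up.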
4. Run your application with auto-instrumentation
Wrap your normal run command with the OTel launcher:
opentelemetry-instrument <your_run_command>

Examples:

opentelemetry-instrument python app.py
opentelemetry-instrument gunicorn -k uvicorn.workers.UvicornWorker main:app
Framework-specific notes are in Framework run commands.
Other deployment targets
The OpenTelemetry behaviour is the same everywhere; only how you declare environment variables changes.
Add the same environment variables to your Deployment (or StatefulSet) manifest:
env:
  - name: OTEL_SERVICE_NAME
    value: "my-python-service"
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "https://<MW_UID>.middleware.io:443"
  - name: OTEL_EXPORTER_OTLP_HEADERS
    value: "authorization=<MW_API_KEY>"
  - name: OTEL_EXPORTER_OTLP_PROTOCOL
    value: "grpc"
  - name: OTEL_TRACES_EXPORTER
    value: "otlp"
  - name: OTEL_METRICS_EXPORTER
    value: "none"

Build your image with the OTel packages installed (see "Install OpenTelemetry packages" above) and run your app with:
opentelemetry-instrument <your_run_command>

For cluster-wide control, you can instead send data to a Kubernetes OpenTelemetry Collector and have the Collector export to Middleware (see Optional: OpenTelemetry Collector).
The OpenTelemetry Operator can inject Python auto-instrumentation into your pods without changing your application image.
Install the Operator
Follow the upstream OpenTelemetry Operator installation docs.

Create an Instrumentation resource for Python
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: python-instrumentation
spec:
  exporter:
    # Usually a cluster-local Collector; see "Optional: OpenTelemetry Collector" below.
    endpoint: http://otel-collector:4318
  propagators:
    - tracecontext
    - baggage
  env:
    - name: OTEL_METRICS_EXPORTER
      value: "none"
  python:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python:latest

Annotate your Deployment
metadata:
  annotations:
    instrumentation.opentelemetry.io/inject-python: "true"
    instrumentation.opentelemetry.io/otel-python-platform: "glibc" # or "musl" for Alpine

The Collector configured in "Optional: OpenTelemetry Collector" should then forward data to Middleware using OTLP.
Use the PowerShell syntax for environment variables:
$env:OTEL_RESOURCE_ATTRIBUTES = "service.name=<service-name>"
$env:OTEL_EXPORTER_OTLP_ENDPOINT = "https://<MW_UID>.middleware.io:443"
$env:OTEL_EXPORTER_OTLP_HEADERS = "authorization=<MW_API_KEY>"
$env:OTEL_EXPORTER_OTLP_PROTOCOL = "grpc"
$env:OTEL_TRACES_EXPORTER = "otlp"
$env:OTEL_METRICS_EXPORTER = "none"

Install packages and run your application exactly as in the "Quick start" section above:
pip install opentelemetry-distro opentelemetry-exporter-otlp
opentelemetry-bootstrap --action=install
opentelemetry-instrument <your_run_command>

You can bake configuration into your Dockerfile:
ENV OTEL_RESOURCE_ATTRIBUTES=service.name=<service-name>
ENV OTEL_EXPORTER_OTLP_ENDPOINT=https://<MW_UID>.middleware.io:443
ENV OTEL_EXPORTER_OTLP_HEADERS=authorization=<MW_API_KEY>
ENV OTEL_EXPORTER_OTLP_PROTOCOL=grpc
ENV OTEL_TRACES_EXPORTER=otlp
ENV OTEL_METRICS_EXPORTER=none

Or pass them at runtime:
docker run \
-e OTEL_RESOURCE_ATTRIBUTES="service.name=<service-name>" \
-e OTEL_EXPORTER_OTLP_ENDPOINT="https://<MW_UID>.middleware.io:443" \
-e OTEL_EXPORTER_OTLP_HEADERS="authorization=<MW_API_KEY>" \
-e OTEL_EXPORTER_OTLP_PROTOCOL="grpc" \
-e OTEL_TRACES_EXPORTER="otlp" \
-e OTEL_METRICS_EXPORTER="none" \
your-image:latest

Inside the container, ensure you start your app with:

opentelemetry-instrument <your_run_command>

Framework run commands
Choose your framework below to see the run command. The setup steps above are the same for all frameworks. Auto-instrumentation works by starting your process through opentelemetry-instrument.
Prerequisites: Set the DJANGO_SETTINGS_MODULE environment variable:
export DJANGO_SETTINGS_MODULE=myproject.settings

Run command:

opentelemetry-instrument python manage.py runserver --noreload

Always use --noreload with Django. The auto-reload mechanism spawns child processes that break OpenTelemetry instrumentation.
For Docker users:
CMD ["opentelemetry-instrument", "python", "manage.py", "runserver", "0.0.0.0:8000", "--noreload"]

Run command:

opentelemetry-instrument flask run --no-reload

Or if running directly:

opentelemetry-instrument python app.py

Always use --no-reload with Flask. The reloader spawns a child process that breaks OpenTelemetry instrumentation. Also avoid FLASK_ENV=development, as it enables the reloader.
For Docker users:
CMD ["opentelemetry-instrument", "flask", "run", "--host=0.0.0.0", "--no-reload"]

Or:

CMD ["opentelemetry-instrument", "python", "app.py"]

Run command:

opentelemetry-instrument uvicorn main:app --host 0.0.0.0 --port 8000

Do not use --reload with Uvicorn when instrumenting. The reload mode spawns new processes that break instrumentation.
Uvicorn's --workers flag is not supported with opentelemetry-instrument. Use Gunicorn with Uvicorn workers instead: gunicorn -k uvicorn.workers.UvicornWorker main:app
For Docker users:
CMD ["opentelemetry-instrument", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

Run command (with Gunicorn):

opentelemetry-instrument gunicorn app:api --bind 0.0.0.0:8000

Or with Waitress:

opentelemetry-instrument waitress-serve --port=8000 app:api

For Docker users:

CMD ["opentelemetry-instrument", "gunicorn", "app:api", "--bind", "0.0.0.0:8000"]

Gunicorn generally works without extra configuration:

opentelemetry-instrument gunicorn myproject.wsgi:application

If you use --preload, the app loads before workers fork and spans can get stuck. Add a post-fork hook (see OpenTelemetry fork-process-model) or disable preload.
Run command:
opentelemetry-instrument celery -A tasks worker --loglevel=info

Replace tasks with your Celery app module name.
For Docker users:
CMD ["opentelemetry-instrument", "celery", "-A", "tasks", "worker", "--loglevel=info"]

Celery instrumentation captures task execution spans, including task name, arguments, and status. Both the worker and the code that enqueues tasks should be instrumented for full trace propagation.
Celery with prefork workers (advanced): Celery uses the prefork worker model by default. The OpenTelemetry SDK is not fork-safe, so you must initialize it in each worker process using the worker_process_init signal. Add this to your Celery app file:
from celery.signals import worker_process_init
from opentelemetry.instrumentation.celery import CeleryInstrumentor
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
@worker_process_init.connect(weak=False)
def init_celery_tracing(*args, **kwargs):
    # Runs in each worker process after the prefork, so the SDK is
    # initialized post-fork rather than inherited from the parent.
    CeleryInstrumentor().instrument()
    resource = Resource.create({})
    trace.set_tracer_provider(TracerProvider(resource=resource))
    span_processor = BatchSpanProcessor(OTLPSpanExporter())
    trace.get_tracer_provider().add_span_processor(span_processor)

This ensures each worker process creates its own tracer instance. Only required for workers, not for the code that enqueues tasks.
Hypercorn and Unicorn are not supported by OpenTelemetry auto-instrumentation due to fork-safety limitations.
Why they don't work: The OpenTelemetry SDK components (BatchSpanProcessor, PeriodicExportingMetricReader, BatchLogProcessor) spawn background threads and use locks, which are not fork-safe (see Python issue #6721). Most ASGI servers work around this using register_at_fork hooks to reinitialize after forking. However, Hypercorn and Unicorn use the spawn method to start worker processes, which doesn't invoke these hooks—making the workaround ineffective.
Recommended alternative: Use Gunicorn with Uvicorn workers instead:
opentelemetry-instrument gunicorn -k uvicorn.workers.UvicornWorker main:app --bind 0.0.0.0:8000

For Docker users:

CMD ["opentelemetry-instrument", "gunicorn", "-k", "uvicorn.workers.UvicornWorker", "main:app", "--bind", "0.0.0.0:8000"]

This gives you the same ASGI capabilities with proper OpenTelemetry support. See Hypercorn issue #215 for updates on native Hypercorn support.
uWSGI loads the app before forking by default. Enable lazy app loading so each worker initialises OpenTelemetry after fork.
In uwsgi.ini:
lazy-apps = true

Then start uWSGI via:

opentelemetry-instrument uwsgi --ini uwsgi.ini

Troubleshooting
Check environment variables are set:
echo $OTEL_EXPORTER_OTLP_ENDPOINT
echo $OTEL_SERVICE_NAME
echo $OTEL_EXPORTER_OTLP_HEADERS

If your service is running and the env vars are correct, use the console exporter to confirm spans are being created.
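For a quick automated check, a stdlib-only helper script like the following (hypothetical, not part of any SDK) flags which of the key variables are missing in the current environment:

```python
import os

# The variables the quick-start section asks you to export.
REQUIRED = [
    "OTEL_SERVICE_NAME",
    "OTEL_EXPORTER_OTLP_ENDPOINT",
    "OTEL_EXPORTER_OTLP_HEADERS",
]

missing = [name for name in REQUIRED if not os.environ.get(name)]
for name in REQUIRED:
    print(f"{name}: {'MISSING' if name in missing else 'set'}")
if missing:
    print("Set the variables above before starting your app.")
```

Run it with the same shell (and user) that launches your application, since environment variables are per-process.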
To verify that spans are being created locally, temporarily export to the console:
OTEL_TRACES_EXPORTER=console opentelemetry-instrument <your_run_command>

If you see JSON span output in your terminal but traces don't appear in Middleware, the issue is with export configuration (endpoint, auth, or network). If no output appears, the instrumentation isn't capturing your requests.
Application servers that create worker processes (or use hot reload) need extra care.
Why do multi-worker servers drop spans? The OpenTelemetry SDK isn't fork-safe. Application servers that spawn multiple worker processes require special handling:
- Uvicorn with the --workers flag is not supported. Use Gunicorn with Uvicorn workers instead: opentelemetry-instrument gunicorn -k uvicorn.workers.UvicornWorker main:app
- Hypercorn/Unicorn are not supported due to fork-safety issues. See the Hypercorn/Unicorn section for details and workarounds.
- Gunicorn with --preload or uWSGI needs extra configuration. See Running with Gunicorn or uWSGI below.
Hot reload breaks instrumentation: Don't run your app in reloader/hot-reload mode. For Flask, avoid FLASK_ENV=development. For Django, use --noreload. For Uvicorn/FastAPI, don't use --reload.
If grpcio installation fails, use the HTTP exporter instead:
pip install opentelemetry-exporter-otlp-proto-http

Then set OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf and use the per-signal HTTP endpoints (e.g. OTEL_EXPORTER_OTLP_TRACES_ENDPOINT) as described in the OpenTelemetry getting started guide.
Auto-instrumentation detects and instruments common database libraries. Ensure you've run opentelemetry-bootstrap --action=install after installing your database drivers.
PostgreSQL note: psycopg2 is preferred over psycopg2-binary for auto-instrumentation. If you must use psycopg2-binary, you may need to use manual instrumentation with Psycopg2Instrumentor().instrument(skip_dep_check=True).
Common database instrumentations installed by bootstrap:
- PostgreSQL: opentelemetry-instrumentation-psycopg2
- MySQL: opentelemetry-instrumentation-pymysql or opentelemetry-instrumentation-mysql
- MongoDB: opentelemetry-instrumentation-pymongo
- Redis: opentelemetry-instrumentation-redis
- SQLAlchemy: opentelemetry-instrumentation-sqlalchemy
Check supported versions for compatibility with your library versions.
Running with Gunicorn or uWSGI
Gunicorn works out of the box. No extra setup needed unless you use the --preload flag. If you do use --preload, see the post_fork hook example.
uWSGI requires one of these options:
- Add lazy-apps = true to your uWSGI config (recommended, simplest fix)
- Or implement a post_fork hook (see example)
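For reference, a Gunicorn post_fork hook typically looks like the sketch below, adapted from the OpenTelemetry fork-process-model docs. It re-initializes the SDK inside each worker after the fork; treat the exporter and resource settings as placeholders to adjust for your environment:

```python
# gunicorn.conf.py (sketch) -- re-initialize the SDK in each forked worker
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)
    provider = TracerProvider(resource=Resource.create({}))
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    trace.set_tracer_provider(provider)
```

The exporter still reads the OTEL_EXPORTER_OTLP_* environment variables, so no endpoint or header values need to be hard-coded here.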
OpenTelemetry's span exporter uses a background thread. When a server forks worker processes, this thread doesn't copy over correctly, causing spans to get stuck.
Gunicorn loads your app in each worker after forking (so the thread starts fresh). With --preload, it loads before forking, causing the same issue.
uWSGI loads your app before forking by default. Setting lazy-apps = true makes it load after forking instead.
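The failure mode can be demonstrated with nothing but the standard library: a background thread started before fork() does not survive in the child, which is why the SDK must be re-initialized there. In this POSIX-only sketch, the worker thread stands in for the SDK's exporter thread, and os.register_at_fork plays the role of the re-init hooks the servers rely on:

```python
import os
import threading

# Stand-in for the SDK's background exporter thread.
worker = threading.Thread(target=lambda: threading.Event().wait(), daemon=True)
worker.start()

reinit_ran = []
# Servers that survive forking rely on hooks like this to restart the SDK.
os.register_at_fork(after_in_child=lambda: reinit_ran.append(True))

pid = os.fork()
if pid == 0:
    # Child: the parent's thread is gone, but the after-fork hook has fired.
    ok = (not worker.is_alive()) and bool(reinit_ran)
    os._exit(0 if ok else 1)
else:
    _, status = os.waitpid(pid, 0)
    print("child check passed:", os.WEXITSTATUS(status) == 0)
```

Servers that start workers with spawn instead of fork (as Hypercorn does) never trigger register_at_fork hooks, which is the root of the incompatibility described above.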
Optional: OpenTelemetry Collector
Instead of sending data straight from your app to Middleware, you can route everything through an OpenTelemetry Collector:
- Applications -> Collector using OTLP.
- Collector -> Middleware using OTLP/gRPC or OTLP/HTTP to https://<uid>.middleware.io.
Benefits:
- Central place for sampling, filtering, and redaction.
- Easier rollouts when you change exporters or destinations.
- Ability to mirror data to multiple backends if needed.
You can find end-to-end Collector examples in the OpenTelemetry getting started guide.
More configuration and troubleshooting
More configuration
For full agent configuration—CLI options, environment variable mapping, and Python-specific options (excluded URLs, request attributes, logging, disabling instrumentations)—see the official OpenTelemetry Python agent configuration docs.
Troubleshooting
For common issues such as package installation, Flask debug/reloader behavior, pre-fork servers (Gunicorn, Uvicorn workers), and gRPC connectivity, see Troubleshooting Python automatic instrumentation on the OpenTelemetry site.
Next steps and references
- Add custom spans and business attributes: Python manual instrumentation
- Manual instrumentation (custom spans/attributes): OpenTelemetry Python docs at opentelemetry.io.
- Auto-instrumentation library list: See the upstream instrumentation packages list in the OpenTelemetry Python contrib repo at github.com.
- Prefer Middleware SDK features (profiling, Middleware options, Host Agent): use the main Python guide.
Need assistance or want to learn more about Middleware? Contact our support team at [email protected] or join our Slack channel.