Traceloop for Next.js - Setup Guide

This guide walks you through wiring the Traceloop SDK to Middleware, allowing you to observe your Next.js LLM workloads end to end. The core idea is simple: initialize Traceloop before your provider/framework imports so calls are captured reliably.

Before you begin

  • Have your credentials ready: Your Middleware UID (for apiEndpoint) and Middleware API key (for the Authorization header).
  • Pick the right path for your app: Follow “With page router” or “With app router” exactly as shown below, based on how your project is structured.
  • Import order matters: Initialize Traceloop before importing providers/frameworks (e.g., openai, LangChain, LlamaIndex) so hooks attach.
  • Local checks: The examples use disableBatch: true to flush spans quickly in development; keep this to dev environments only.
  • Modules to instrument: When you want a framework auto-instrumented, import the entire module (e.g., * as LlamaIndex) and list it under instrumentModules.
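The disableBatch note above can be wired to the environment so dev-only flushing never ships to production. The sketch below uses an illustrative helper name (buildTraceloopOptions is not part of the SDK); the resulting fields would simply be passed to traceloop.initialize():

```typescript
// Illustrative helper (not part of the Traceloop SDK): compute init options
// so spans flush immediately in development but are batched in production.
function buildTraceloopOptions(nodeEnv: string | undefined) {
  return {
    appName: "YOUR_APPLICATION_NAME",
    // Flush each span right away in dev; batch for efficiency in prod.
    disableBatch: nodeEnv !== "production",
  };
}

console.log(buildTraceloopOptions("development").disableBatch); // true
console.log(buildTraceloopOptions("production").disableBatch); // false
```

This keeps a single initialization code path across environments instead of editing the flag by hand before each deploy.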

1. Install the SDK

Run this in your project to add the Traceloop SDK dependency:

npm install @traceloop/node-server-sdk

2. Initialize the SDK

With the page router

Use this approach when you’re registering instrumentation centrally and loading your Node-side setup on startup.

1. Create instrumentation.ts

Create this file at your project root (outside app/ or pages/) to register Node-runtime instrumentation early:

export async function register() {
  if (process.env.NEXT_RUNTIME === "nodejs") {
    await import("./instrumentation.node.ts");
  }
}

2. Create instrumentation.node.ts

Add your Middleware API key and tenant details where indicated, then initialize Traceloop and list the modules you want instrumented:

import * as traceloop from "@traceloop/node-server-sdk";
import OpenAI from "openai";
// Make sure to import the entire module you want to instrument, like this:
// import * as LlamaIndex from "llamaindex";

traceloop.initialize({
  appName: "YOUR_APPLICATION_NAME",
  apiEndpoint: "https://<MW_UID>.middleware.io:443",
  headers: {
    Authorization: "<MW_API_KEY>",
    "X-Trace-Source": "traceloop",
  },
  resourceAttributes: { key: "value" },
  disableBatch: true,
  instrumentModules: {
    openAI: OpenAI,
    // Add any other modules you'd like to instrument here
    // for example:
    // llamaIndex: LlamaIndex,
  },
});

On Next.js versions prior to 15, enable the experimental instrumentation hook so this file runs during build/start (on Next.js 15 and later, instrumentation.ts is supported by default):

/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    instrumentationHook: true,
  },
};

module.exports = nextConfig;

With the app router

For App Router setups, add the build-time dependencies and webpack rules, then initialize Traceloop in each API route you want to observe.

First, install the required build dependencies exactly as shown:

npm install --save-dev node-loader
npm i [email protected]

1. Edit your next.config.js and add this webpack configuration

This loads native .node add-ons and silences server-side OpenTelemetry warnings:

const nextConfig = {
  webpack: (config, { isServer }) => {
    config.module.rules.push({
      test: /\.node$/,
      loader: "node-loader",
    });
    if (isServer) {
      config.ignoreWarnings = [{ module: /opentelemetry/ }];
    }
    return config;
  },
};

module.exports = nextConfig;

2. On every app API route you want to instrument, add this at the top

Initialize Traceloop before provider/framework imports so requests are captured end-to-end:

import * as traceloop from "@traceloop/node-server-sdk";
import OpenAI from "openai";
// Make sure to import the entire module you want to instrument, like this:
// import * as LlamaIndex from "llamaindex";

traceloop.initialize({
  appName: "YOUR_APPLICATION_NAME",
  apiEndpoint: "https://<MW_UID>.middleware.io:443",
  headers: {
    Authorization: "<MW_API_KEY>",
    "X-Trace-Source": "traceloop",
  },
  resourceAttributes: { "mw.source": "llm" },
  disableBatch: true,
  instrumentModules: {
    openAI: OpenAI,
    // Add any other modules you'd like to instrument here
    // for example:
    // llamaIndex: LlamaIndex,
  },
});
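For context, a complete route handler might look like the following sketch. The route path (app/api/chat/route.ts), model name, and request shape are illustrative assumptions, not prescribed by the SDK; the placeholders match the ones above:

```typescript
// app/api/chat/route.ts (hypothetical path)
// Initialize Traceloop first, then import and use the provider as usual.
import * as traceloop from "@traceloop/node-server-sdk";
import OpenAI from "openai";

traceloop.initialize({
  appName: "YOUR_APPLICATION_NAME",
  apiEndpoint: "https://<MW_UID>.middleware.io:443",
  headers: {
    Authorization: "<MW_API_KEY>",
    "X-Trace-Source": "traceloop",
  },
  disableBatch: true,
  instrumentModules: { openAI: OpenAI },
});

const openai = new OpenAI();

export async function POST(req: Request) {
  const { prompt } = await req.json();
  // This call is captured because OpenAI was listed in instrumentModules.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model name
    messages: [{ role: "user", content: prompt }],
  });
  return Response.json({ answer: completion.choices[0].message.content });
}
```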

Annotate your workflows (Optional)

When your logic spans multiple steps, add a light annotation so traces read cleanly in the UI:

import { withWorkflow } from "@traceloop/node-server-sdk";

async function suggestAnswers(question: string) {
  return await withWorkflow({ name: "suggestAnswers" }, () => {
    // ... your multi-step logic here
  });
}

If you’re using frameworks like Haystack, LangChain, or LlamaIndex, Traceloop instruments them automatically, and no manual annotations are needed.

Viewing your traces

After initialization and at least one LLM request, open the LLM Observability section in Middleware to see spans and traces across your LLM calls and any dependent services (e.g., vector databases). For deeper options and patterns, refer to the Traceloop Next.js documentation.

Troubleshooting

  • Nothing shows up in the UI: Confirm at least one request executed after startup, and ensure initialization happens before provider/framework imports on routes you’re observing.
  • Instrumentation didn’t attach: Make sure you imported the entire framework module (e.g., * as LlamaIndex) and included it under instrumentModules.
  • Next.js v12 and below: Verify experimental.instrumentationHook: true is present in next.config.js so instrumentation.ts/Node setup actually runs.
  • Build issues with native add-ons: Ensure the node-loader rule exists in webpack and that you’ve installed both node-loader and [email protected] exactly as shown.
  • No spans in dev: Keep disableBatch: true during local testing to flush spans quickly, then revert for production.
  • Auth or endpoint errors: Double-check the Authorization header and apiEndpoint format: https://<MW_UID>.middleware.io:443.
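For the last point, a quick format check can rule out typos before you dig into auth issues. The helper below is purely illustrative (not part of any SDK) and only validates the https://<MW_UID>.middleware.io:443 shape used throughout this guide:

```typescript
// Illustrative check for the endpoint format used above:
// https://<MW_UID>.middleware.io:443 (https scheme and :443 are required).
function isValidMwEndpoint(url: string): boolean {
  return /^https:\/\/[A-Za-z0-9-]+\.middleware\.io:443$/.test(url);
}

console.log(isValidMwEndpoint("https://myuid.middleware.io:443")); // true
console.log(isValidMwEndpoint("http://myuid.middleware.io")); // false
```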

Need assistance or want to learn more about using Traceloop with Middleware? Contact our support team at [email protected].