AWS Lambda

Traces · Metrics · App Logs · Custom Logs · Profiling

Why does Lambda need a special setup?

  • Early exits & freezing: Lambda can return and freeze the container quickly, so you need a configuration that flushes spans immediately.
  • Layers by language: Auto-instrumentation is delivered via AWS-managed ADOT layers (or community OTel layers) and must be combined with a Collector layer. Order matters.
  • X-Ray: Enabling Active Tracing gives better service maps and context propagation.
Diagram: AWS Lambda APM layer architecture and data flow

1. Prerequisites

  • AWS account with permissions to manage Lambda, layers, and environment variables.
  • Your Middleware account key.
  • (Optional) Enable Active tracing in Lambda if you also want AWS X-Ray correlation.

Note on ARNs: The OpenTelemetry community layers are regional and published by AWS account 184161586896. Always copy the latest ARNs for your Region and architecture (amd64/arm64) from the official opentelemetry-lambda Releases page.

Example template:

arn:aws:lambda:<region>:184161586896:layer:opentelemetry-nodejs-0_16_0:1
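
If you manage the function from the AWS CLI instead of the console, the layers described in the steps below can also be attached in a single call. A minimal sketch, assuming the AWS CLI (the function name is a placeholder, the ARNs shown are the example Collector and Node.js layers from this page, and --layers replaces the function's entire layer list):

aws lambda update-function-configuration \
  --function-name my-function \
  --layers \
    arn:aws:lambda:<region>:184161586896:layer:opentelemetry-collector-<amd64|arm64>-0_11_0:1 \
    arn:aws:lambda:<region>:184161586896:layer:opentelemetry-nodejs-0_16_0:1
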
Node.js (JavaScript)

1 Copy the ARN for the OpenTelemetry (OTel) Collector Lambda Layer:

  • Navigate to the OpenTelemetry Collector Lambda Repository and locate the appropriate layer for your environment.
  • Copy the Amazon Resource Name (ARN) and replace the <region> and <amd64|arm64> placeholders with the region and CPU architecture of your Lambda.
    arn:aws:lambda:<region>:184161586896:layer:opentelemetry-collector-<amd64|arm64>-0_11_0:1

2 Copy the ARN for the OTel Auto Instrumentation Layer:

  • Navigate to the OpenTelemetry Lambda Repository and locate the appropriate layer for your environment.
  • Copy the ARN and replace the <region> placeholder with the region your Lambda runs in.
    arn:aws:lambda:<region>:184161586896:layer:opentelemetry-nodejs-0_9_0:4

3 In the AWS Console, navigate to Lambda → Functions → your specific Lambda function and add a layer for each ARN above.

The Layers above must be added in order - Collector, then Auto Instrumentation - or they will not work.

4 Review the added layer and ensure it matches the ARN. Finally, click Add to complete the process.

5 Set these environment variables:

If you are using *.mjs / *.ts, you may need to set NODE_OPTIONS=--import ./lambda-config.[mjs/ts] instead of --require.

AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-handler
NODE_OPTIONS=--require ./lambda-config.js
OTEL_SERVICE_NAME=your-service-name
OTEL_EXPORTER_OTLP_ENDPOINT=https://<MW_UID>.middleware.io:443
OTEL_RESOURCE_ATTRIBUTES=mw.account_key=<MW_API_KEY>
OTEL_LAMBDA_DISABLE_AWS_CONTEXT_PROPAGATION=true
OTEL_PROPAGATORS=tracecontext

6 Create a file named lambda-config.js in your Lambda function's root directory with the following content:

If you are using *.mjs / *.ts, you may need to load the file with NODE_OPTIONS=--import instead of --require; an ESM sketch follows the CommonJS example below.

const { SimpleSpanProcessor } = require("@opentelemetry/sdk-trace-base");
const {
  OTLPTraceExporter,
} = require("@opentelemetry/exporter-trace-otlp-proto");

// The OTel Lambda layer calls this hook (if defined) when it builds the tracer provider.
// SimpleSpanProcessor exports each span immediately, which suits short-lived invocations.
global.configureTracerProvider = (tracerProvider) => {
  const spanProcessor = new SimpleSpanProcessor(new OTLPTraceExporter());
  tracerProvider.addSpanProcessor(spanProcessor);
};
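
If your project uses ESM (*.mjs), a minimal sketch of the equivalent bootstrap file, assumed here to be named lambda-config.mjs and loaded with NODE_OPTIONS=--import ./lambda-config.mjs:

// lambda-config.mjs (ESM variant of the CommonJS file above)
import { SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";

// Same hook as the CommonJS version: the layer calls it while building the tracer provider
globalThis.configureTracerProvider = (tracerProvider) => {
  tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new OTLPTraceExporter()));
};
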

(Optional) Custom spans in code

To capture logic specific to your business, you can use the OpenTelemetry API as follows:

const { trace } = require("@opentelemetry/api");
const { logs, SeverityNumber } = require("@opentelemetry/api-logs");

// Returns a tracer from the global tracer provider
// Use this to create spans
const tracer = trace.getTracer("<MODULE-OR-FILE-NAME>", "1.0.0");

/* Example simple logger */
const logger = {
  info: (message, attributes = {}) => {
    // Create a log record of type INFO
    const logRecord = {
      timestamp: Date.now(),
      severityText: "INFO",
      severityNumber: SeverityNumber.INFO,
      body: message,
      attributes: {
        "deployment.environment": process.env.ENVIRONMENT || "development",
        ...attributes,
      },
    };
    // Emit the log
    logs.getLogger("<MODULE-OR-FILE-NAME>").emit(logRecord);
    // Console log in tandem
    console.log(message);
  },
};

// Lambda handler
exports.handler = async (event, context) => {
  // Start a span with the name "lambdaHandler"
  const span = tracer.startSpan("lambdaHandler");

  try {
    // Add a span attribute
    span.setAttribute("event", JSON.stringify(event));

    // Your Lambda function logic here
    const result = "Hello from Lambda!";

    // Use the logger created above to emit logs
    // Logs will be automatically associated with the spans they are
    // emitted in
    logger.info("dummy log from aws lambda");
    logger.info("dummy log from aws lambda with custom attributes", {
      foo: "bar",
    });
    span.addEvent("Lambda execution completed");

    return {
      statusCode: 200,
      body: JSON.stringify({ message: result }),
    };
  } catch (error) {
    span.recordException(error);
    return {
      statusCode: 500,
      body: JSON.stringify({ error: error.message }),
    };
  } finally {
    span.end();
  }
};

Python

1 Copy the ARN for the OpenTelemetry (OTel) Collector Lambda Layer:

  • Navigate to the OpenTelemetry Collector Lambda Repository and locate the appropriate layer for your environment (a list of Python layers is published there).
  • Copy the Amazon Resource Name (ARN) and replace the <region> and <amd64|arm64> placeholders with the region and CPU architecture of your Lambda.
    arn:aws:lambda:<region>:184161586896:layer:opentelemetry-collector-<amd64|arm64>-0_13_0:1
  • Create a new file named collector.yaml with the following contents, along with your Lambda function code:
    receivers:
      telemetryapi: {}
      otlp:
        protocols:
          grpc:
            endpoint: "localhost:9319"
          http:
            endpoint: "localhost:9320"

    exporters:
      debug:
        verbosity: detailed
      otlp:
        endpoint: 'https://your-initial-uid.middleware.io:443'
        headers:
          authorization: your-initial-token

    processors:
      decouple: {}
      resource:
        attributes:
        - action: upsert
          key: service.name
          value: my-service-name

    service:
      pipelines:
        traces:
          receivers:
          - telemetryapi
          - otlp
          processors:
          - resource
          - decouple
          exporters:
          - otlp
        logs:
          receivers:
          - telemetryapi
          - otlp
          processors:
          - resource
          - decouple
          exporters:
          - otlp

2 Copy the ARN for the OTel Auto Instrumentation Layer:

  • Navigate to the OpenTelemetry Lambda Repository and locate the appropriate layer for your environment.
  • Copy the ARN and replace the <region> placeholder with the region your Lambda runs in.
    arn:aws:lambda:<region>:184161586896:layer:opentelemetry-python-0_12_0:1
  • In the AWS Console, navigate to Lambda → Functions → your specific Lambda function and add a layer for each ARN above.

    The Layers above must be added in order - Collector, then Auto Instrumentation - or they will not work.

  • Review the added layer and ensure it matches the ARN. Finally, click Add to complete the process.
  • Set the following environment variables in your Lambda function configuration:
    AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-instrument
    OPENTELEMETRY_COLLECTOR_CONFIG_URI=/var/task/collector.yaml
    OPENTELEMETRY_EXTENSION_LOG_LEVEL=info
    OTEL_BSP_SCHEDULE_DELAY=500
    OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:9320
    OTEL_LAMBDA_DISABLE_AWS_CONTEXT_PROPAGATION=true
    OTEL_PROPAGATORS=tracecontext
  • Update Log format to JSON for your Lambda function by navigating to Lambda → Functions → your specific Lambda function → Configuration → Monitoring and operations tools. Click on Edit and set log format to JSON.
  • Use the standard logging module to emit log messages at the desired levels (the OTel layer captures them automatically):
    import json
    import logging

    def lambda_handler(event, context):
        # Get a logger and set the desired log level
        logger1 = logging.getLogger("logger1")
        logger1.setLevel(logging.DEBUG)
        logger1.debug("This is a debug message from aws lambda")
        logger1.info("This is an info message from aws lambda")

        # Note: set this to False or else logs will show up twice in Middleware
        logger1.propagate = False
        logging.shutdown()
        return {
            'statusCode': 200,
            'body': json.dumps('Hello from Lambda!')
        }

logger1.propagate must be set to False in your code. Otherwise, logs will show up twice in your Middleware account.
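
(Optional) As in the Node.js steps, you can also create custom spans from Python with the OpenTelemetry API. A minimal sketch (the tracer and span names are illustrative; the layer supplies the global tracer provider):

import json

from opentelemetry import trace

# Returns a tracer from the global tracer provider set up by the layer
tracer = trace.get_tracer("my-module")

def lambda_handler(event, context):
    # Wrap your business logic in a custom span
    with tracer.start_as_current_span("lambda_handler") as span:
        # Add attributes and events as needed
        span.set_attribute("event", json.dumps(event, default=str))
        span.add_event("Lambda execution completed")
        return {
            'statusCode': 200,
            'body': json.dumps('Hello from Lambda!')
        }
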

Java

1 Copy the ARN for the OTel Java Agent Lambda Layer:

  • Navigate to the OpenTelemetry Lambda Repository and locate the appropriate Java agent layer for your environment (a list of Java layers is published there).
  • Copy the Amazon Resource Name (ARN) and replace the <region> placeholder with the region your Lambda runs in.
    arn:aws:lambda:<region>:184161586896:layer:opentelemetry-javaagent-0_9_0:1

2 Set the following environment variables in your Lambda function configuration:

AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-handler
OTEL_EXPORTER_OTLP_ENDPOINT=https://<MW_UID>.middleware.io:443
OTEL_LAMBDA_DISABLE_AWS_CONTEXT_PROPAGATION=true
OTEL_PROPAGATORS=tracecontext
OTEL_RESOURCE_ATTRIBUTES=mw.account_key=<YOUR_MW_ACCOUNT_KEY>
OTEL_SERVICE_NAME={JAVA-LAMBDA-SERVICE}

3 Add Custom Instrumentation (Optional)

Your Lambda function is automatically instrumented. If you need to add custom spans or attributes, you can use the OpenTelemetry API:

Add the following as dependencies:

  • Gradle:
    plugins {
        id 'java'
    }

    repositories {
        mavenCentral()
    }

    dependencies {
        implementation 'io.opentelemetry:opentelemetry-api:1.28.0'
        implementation 'com.amazonaws:aws-lambda-java-core:1.2.1'
    }
  • Maven:
    <dependencies>
        <!-- OpenTelemetry Dependencies -->
        <dependency>
            <groupId>io.opentelemetry</groupId>
            <artifactId>opentelemetry-api</artifactId>
            <version>1.28.0</version>
        </dependency>

        <!-- AWS Lambda Dependencies -->
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-core</artifactId>
            <version>1.2.1</version>
        </dependency>
    </dependencies>

These dependencies add the OpenTelemetry API (for optional manual spans in your handler) and the AWS Lambda core interfaces (RequestHandler, Context). The ADOT Java agent layer handles auto-instrumentation and exporting, so you don’t need extra exporter SDKs here. Keep the opentelemetry-api version compatible with your Lambda runtime and the ADOT agent layer in use.
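
With those dependencies in place, a minimal handler sketch using the OpenTelemetry API (the class, tracer, and span names below are illustrative; the agent layer supplies the global tracer provider):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;

public class Handler implements RequestHandler<String, String> {
    // The OTel Java agent layer registers the global tracer provider at startup
    private static final Tracer tracer =
        GlobalOpenTelemetry.getTracer("my-lambda", "1.0.0");

    @Override
    public String handleRequest(String input, Context context) {
        // Start a custom span around your business logic
        Span span = tracer.spanBuilder("handleRequest").startSpan();
        try {
            span.setAttribute("input.length", input == null ? 0 : input.length());
            span.addEvent("Lambda execution completed");
            return "Hello from Lambda!";
        } finally {
            span.end();
        }
    }
}
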

Go

The goal is to instrument your Go Lambda with the official OTel Go wrapper and send traces directly to Middleware over OTLP/HTTP (no Lambda layer is required for Go).

1 Install / import the required packages

Add these imports at the top of your Lambda project’s entry point:

import (
  "context"
  "log"

  "github.com/aws/aws-lambda-go/lambda"
  "go.opentelemetry.io/contrib/instrumentation/github.com/aws/aws-lambda-go/otellambda"
  "go.opentelemetry.io/otel"
  "go.opentelemetry.io/otel/attribute"
  "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
  "go.opentelemetry.io/otel/sdk/resource"
  "go.opentelemetry.io/otel/sdk/trace"
)

Here,

  • otellambda.InstrumentHandler is the official wrapper for Go Lambdas.
  • The HTTP OTLP exporter (otlptracehttp) is the correct exporter for sending spans via OTLP/HTTP.

2 Configure your Middleware account and tracer provider

Create a helper that builds a tracer provider and points the exporter directly to Middleware’s OTLP endpoint.

Important: with the HTTP exporter, use WithEndpointURL and include the full /v1/traces URL (or use the env var OTEL_EXPORTER_OTLP_TRACES_ENDPOINT).

func initTracerProvider(ctx context.Context) (*trace.TracerProvider, error) {
  // Export spans directly to Middleware (OTLP/HTTP).
  // WithEndpointURL expects a full URL; include /v1/traces.
  exp, err := otlptracehttp.New(ctx,
    otlptracehttp.WithEndpointURL("https://ruplp.middleware.io/v1/traces"),
  )
  if err != nil {
    return nil, err
  }

  // Add useful resource attributes (seen in Middleware APM service list/filters)
  res := resource.NewSchemaless(
    attribute.String("service.name", "{GO-LAMBDA-SERVICE}"),
    attribute.String("mw.account_key", "<YOUR_MW_ACCOUNT_KEY>"),
    attribute.String("library.language", "go"),
  )

  tp := trace.NewTracerProvider(
    trace.WithBatcher(exp),
    trace.WithResource(res),
  )
  otel.SetTracerProvider(tp)
  return tp, nil
}

Here,

  • WithEndpointURL (or env vars) is the recommended way to set an HTTPS URL for the HTTP exporter. Using WithEndpoint("host:port") is meant for host:port only and does not take a scheme/path.
  • Alternative (env only): set OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://ruplp.middleware.io/v1/traces and call otlptracehttp.New(ctx) with no options. The exporter reads the env var.

3 Add your handler (custom attributes optional)

type MyEvent struct {
  Name string `json:"name"`
  Age  int    `json:"age"`
}

type MyResponse struct {
  Message string `json:"message"`
}

func HandleLambdaEvent(ctx context.Context, event *MyEvent) (*MyResponse, error) {
  tracer := otel.Tracer("lambda")
  _, span := tracer.Start(ctx, "HandleLambdaEvent")
  defer span.End()

  // Add any domain attributes you want to see in traces
  span.SetAttributes(
    attribute.String("user.name", event.Name),
    attribute.Int("user.age", event.Age),
  )

  return &MyResponse{Message: "ok"}, nil
}

4 Wire the wrapper in main and ensure flush

func main() {
  ctx := context.Background()
  tp, err := initTracerProvider(ctx)
  if err != nil {
    log.Fatalf("failed to init tracer provider: %v", err)
  }
  defer tp.Shutdown(ctx) // flush on shutdown

  // Wrap your handler so spans are started/ended around each invocation.
  // WithFlusher ensures batches flush before the runtime freezes.
  lambda.Start(otellambda.InstrumentHandler(
    HandleLambdaEvent,
    otellambda.WithFlusher(tp),
  ))
}

InstrumentHandler is the supported wrapper; WithFlusher is designed for Lambda’s short-lived execution model and ensures spans are exported before the runtime freezes.

.NET

1 First, ensure that you have .NET 6 or later on your build machine. Verify with:

dotnet --version

2 Add the Middleware agent package to your project:

dotnet add package MW.APM

3 Add the following to your Program.cs. This wires MW.APM using the .NET 6 minimal hosting pattern and sets up console logging for quick validation.

var builder = WebApplication.CreateBuilder(args);

var configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    .AddEnvironmentVariables()
    .Build();

builder.Services.ConfigureMWInstrumentation(configuration);

builder.Logging.AddConfiguration(configuration.GetSection("Logging"));
builder.Logging.AddConsole();
builder.Logging.SetMinimumLevel(LogLevel.Debug);

// Build the app
var app = builder.Build();

// Initialize MW.APM logger AFTER app is built
Logger.Init(app.Services.GetRequiredService<ILoggerFactory>());

// Define endpoints / handlers as needed
// app.MapGet("/", () => "OK");

app.Run();

4 Add your account key, target URL, and service name to appsettings.json:

{
  "MW": {
    "ApiKey": "<YOUR_MW_ACCOUNT_KEY>",
    "TargetURL": "<MW_UID>.middleware.io:443",
    "ServiceName": "{SERVICE NAME}"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  }
}

Verify Installation

After deploying and invoking your function, check your Middleware account for traces by navigating to APM → Services and finding your function's name in the services list.


X-Ray Tracing

Ensure X-Ray tracing is enabled for your Lambda function to get the full benefit of the instrumentation.


AWS Lambda environment variables (quick reference):

Variable | Purpose | Example value
AWS_LAMBDA_EXEC_WRAPPER | Starts auto-instrumentation before your handler | Node/Java: /opt/otel-handler • Python: /opt/otel-instrument
OTEL_SERVICE_NAME | Service name shown in Middleware | {NODEJS-LAMBDA-SERVICE} / {PYTHON-LAMBDA-SERVICE} / {JAVA-LAMBDA-SERVICE}
OTEL_PROPAGATORS | Context propagation format | tracecontext
OTEL_RESOURCE_ATTRIBUTES | Resource metadata (incl. Middleware key) | mw.account_key=<YOUR_MW_ACCOUNT_KEY>
OTEL_LAMBDA_DISABLE_AWS_CONTEXT_PROPAGATION | Disable AWS/X-Ray propagation if desired | true
NODE_OPTIONS | Preload bootstrap file (Node) | --require ./lambda-config.js
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED | Auto-capture Python logs | true
OTEL_LOGS_EXPORTER | Export logs via OTLP | otlp
OTEL_BSP_SCHEDULE_DELAY | Span batcher flush cadence (ms) | 500
OTEL_EXPORTER_OTLP_ENDPOINT | Direct export base endpoint | https://ruplp.middleware.io
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT | Direct export (traces) full URL (HTTP) | https://ruplp.middleware.io/v1/traces
OTEL_EXPORTER_OTLP_METRICS_ENDPOINT | Direct export (metrics) full URL (HTTP) | https://ruplp.middleware.io/v1/metrics
OTEL_EXPORTER_OTLP_LOGS_ENDPOINT | Direct export (logs) full URL (HTTP) | https://ruplp.middleware.io/v1/logs
OPENTELEMETRY_COLLECTOR_CONFIG_URI | Where the Collector layer reads its config | /var/task/collector.yaml (or s3://…, https://…)
OPENTELEMETRY_EXTENSION_LOG_LEVEL | Collector extension log level | info / debug

Troubleshooting

1) “I don’t see any traces in Middleware”

Likely causes

  • The Lambda wrapper isn’t running (Node/Java/Python only).
  • You’re exporting to the wrong endpoint or missing the /v1/traces path for OTLP/HTTP (common in Go).
  • The Collector layer (if used) isn’t reading your config.
  • The function has no internet egress (VPC without NAT).

What to check

  • Wrappers: In Node, Java, Python, confirm the wrapper env var is set exactly:
    • Node/Java: AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-handler
    • Python: AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-instrument
  • Endpoints: For OTLP/HTTP, prefer the per-signal env var with the full path (or exporter option). Example:
    • OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://ruplp.middleware.io/v1/traces (Go/HTTP exporters need the /v1/traces suffix).
  • Collector (if using the Lambda Collector layer): package a config and point the extension at it:
    • Add collector.yaml to your deployment.
    • Set OPENTELEMETRY_COLLECTOR_CONFIG_URI=/var/task/collector.yaml.
    • In the file, export to Middleware via OTLP/HTTP.
  • Networking/Egress: If your function runs in a VPC, ensure it has internet access (NAT Gateway / routing) so it can reach https://ruplp.middleware.io.
  • Logs: Open CloudWatch Logs for the function to see wrapper/exporter/extension messages (Monitor → View CloudWatch logs).
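
To confirm which layers and environment variables are actually on the deployed function, one option is the AWS CLI (a minimal sketch; the function name is a placeholder):

aws lambda get-function-configuration \
  --function-name my-function \
  --query '{Layers: Layers[].Arn, Env: Environment.Variables}'
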

2) “Traces appear sporadically or late”

Likely causes

  • The BatchSpanProcessor is batching too long for short Lambda runs.
  • The runtime is freezing before spans flush.

What to try

  • Reduce the batcher delay for short invocations:
    • OTEL_BSP_SCHEDULE_DELAY=500 (ms) (tune with BSP env vars).
  • Go: wrap your handler with the Lambda wrapper and force a flush at the end:
    • lambda.Start(otellambda.InstrumentHandler(Handle, otellambda.WithFlusher(tp))).

3) “Node.js: the wrapper runs, but my bootstrap file doesn’t load”

Likely cause

  • Using NODE_OPTIONS=--require ./lambda-config.js with an ESM (module) project.

Fix

  • For ESM projects, use --import (or a loader) instead of --require, which is CommonJS-only; for example:
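
A minimal sketch, assuming the bootstrap file is named lambda-config.mjs and written with ESM import syntax (see the ESM example in the Node.js steps above):

NODE_OPTIONS=--import ./lambda-config.mjs
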

4) “Python: logs aren’t showing up”

Likely cause

  • Auto-logging isn’t enabled for the Python OTel SDK.

Fix

  • Add:
    • OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true
    • OTEL_LOGS_EXPORTER=otlp (if exporting logs). These are standard OTel SDK knobs for Python logging and exporters.

5) “Using the Collector layer: I still see data going to X-Ray, not Middleware”

Likely causes

  • The extension is not reading your config file, or your exporter block is missing/incorrect.

Fix

  • Ensure OPENTELEMETRY_COLLECTOR_CONFIG_URI points to a file you ship (e.g., /var/task/collector.yaml).
  • In that file, define an OTLP/HTTP exporter with endpoint: https://ruplp.middleware.io and a traces pipeline that uses it.
  • Check the extension logs in CloudWatch to confirm it loaded your file.

6) “Java/Node/Python: wrapper set, still no traces”

Likely causes

  • Wrong layer ARN (Region/architecture mismatch) or outdated layer.
  • OTEL_SERVICE_NAME/mw.account_key not set.

Fix

  • Re-add the correct community OTel Lambda layer ARN for your Region/arch from the latest releases (publisher 184161586896).
  • Set OTEL_SERVICE_NAME and include your Middleware key via OTEL_RESOURCE_ATTRIBUTES=mw.account_key=<KEY>.

7) “Go: exporter configured, but nothing arrives”

Likely causes

  • Using OTEL_EXPORTER_OTLP_ENDPOINT without a per-signal endpoint/path for HTTP.
  • Missing TLS/URL settings, or not batching/flushing.

Fix

  • Prefer OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://ruplp.middleware.io/v1/traces or in code use otlptracehttp.WithEndpointURL("https://ruplp.middleware.io/v1/traces").
  • Keep WithFlusher.

8) “Function times out or gets killed before export”

Likely causes

  • Tight timeout/memory + long batch interval.
  • Exporter retries blocked by no egress.

Fix

  • Increase Lambda timeout slightly and reduce BSP delay (e.g., OTEL_BSP_SCHEDULE_DELAY=500).
  • Verify VPC egress (NAT).

9) “Traces are broken or not linked across functions”

Likely cause

  • Mixed propagation formats (AWS vs W3C).

Fix

  • Set OTEL_PROPAGATORS=tracecontext across functions to keep W3C propagation consistent. (If you need AWS/X-Ray interop, don’t disable AWS propagation.)

Need assistance or want to learn more about Middleware? Contact our support team at [email protected] or join our Slack channel.