Kubernetes Auto-instrumentation

The OpenTelemetry (OTel) Kubernetes Operator supports injecting and configuring auto-instrumentation libraries for .NET, Java, Node.js, Python, and Go services.

Middleware Application Monitoring is built on the OpenTelemetry Operator for Kubernetes and can be used to auto-instrument your applications implemented in supported programming languages.

Prerequisites

1 Kubernetes Version

Kubernetes version 1.21.0 or above is required. Check your cluster version with the following command:
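The original command is not shown here; a standard way to check the server version is:

```shell
# Prints both client and server versions; the "Server Version"
# line must report v1.21.0 or above
kubectl version
```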

2 Middleware Kubernetes Agent

Install the Middleware Kubernetes Agent using these instructions. Verify that the agent is installed with the following command:
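The original command is not shown here; a reasonable check, assuming the agent runs in the mw-agent-ns namespace used throughout this guide, is:

```shell
# List the Middleware agent Pods; they should be in the Running state
kubectl get pods -n mw-agent-ns
```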

Install OTel Kubernetes Operator

The OTel Kubernetes Operator should be installed after installing Middleware's Kubernetes Agent.

In most cases, you will need to install cert-manager along with the Operator, unless cert-manager is already installed or you have an alternative method of generating certificates in your Kubernetes cluster.

Bash

With cert-manager
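Middleware's install script is not reproduced here. As an illustrative sketch using the upstream release manifests (the cert-manager version below is an assumption; pick one appropriate for your cluster):

```shell
# Install cert-manager first (pinned version is illustrative)
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml

# Wait for cert-manager to become ready before installing the Operator
kubectl wait --for=condition=Available deployment --all -n cert-manager --timeout=120s

# Install the OTel Kubernetes Operator from the upstream release manifest
kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
```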

Without cert-manager

Helm

With cert-manager
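The exact Helm invocation from Middleware is not shown here; a sketch using the upstream charts (chart names, namespaces, and the `installCRDs` flag reflect the upstream defaults, not Middleware-specific values):

```shell
# Add the upstream chart repositories
helm repo add jetstack https://charts.jetstack.io
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

# Install cert-manager with its CRDs
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# Install the OTel Kubernetes Operator
helm install opentelemetry-operator open-telemetry/opentelemetry-operator \
  --namespace opentelemetry-operator-system --create-namespace
```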

Without cert-manager
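Without cert-manager, the upstream chart can generate its own webhook certificates. A sketch, assuming the upstream chart and its `admissionWebhooks` values:

```shell
# Install the Operator with self-generated admission webhook certificates
helm install opentelemetry-operator open-telemetry/opentelemetry-operator \
  --namespace opentelemetry-operator-system --create-namespace \
  --set admissionWebhooks.certManager.enabled=false \
  --set admissionWebhooks.autoGenerateCert.enabled=true
```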

Instrument Kubernetes Applications

Default Configuration

In order to auto-instrument your applications, the OTel Kubernetes Operator needs to know which Kubernetes Pods to instrument and which auto-instrumentation configuration, called an Instrumentation Custom Resource (CR), to use for those Pods.

The OTel Kubernetes Operator installation steps described in the Installation section also install the default Instrumentation CR, called mw-autoinstrumentation, in the mw-agent-ns namespace.

To confirm that the Instrumentation CR is installed, issue the following command:
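The original command is not shown here; the CR can be listed with:

```shell
# List the Instrumentation CR in the mw-agent-ns namespace
kubectl get instrumentation mw-autoinstrumentation -n mw-agent-ns

# the short resource name "otelinst" also works
kubectl get otelinst -n mw-agent-ns
```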

The Instrumentation CR looks like the following:
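The CR itself is not reproduced here. A representative sketch, consistent with the endpoint description below (the service name mw-service is an assumption, not taken from this guide):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: mw-autoinstrumentation
  namespace: mw-agent-ns
spec:
  exporter:
    # gRPC endpoint of the Middleware Kubernetes Agent service;
    # the "mw-service" name is illustrative
    endpoint: http://mw-service.mw-agent-ns.svc.cluster.local:9319
  propagators:
    - tracecontext
    - baggage
  python:
    env:
      # Python auto-instrumentation exports over HTTP, hence port 9320
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://mw-service.mw-agent-ns.svc.cluster.local:9320
```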

In the above Instrumentation CR, OTEL_EXPORTER_OTLP_ENDPOINT is set to the Middleware Kubernetes Agent service endpoint, using TCP port 9319 for gRPC or TCP port 9320 for HTTP.

This Instrumentation CR is referenced by annotations on Pod specifications. The annotations depend on the programming language of the process running inside the Pod.

The following annotations SHOULD NOT be set in the metadata section of the Deployment, DaemonSet, or StatefulSet definitions themselves, but rather in the metadata section of the Pod template within those objects.

Below is the list of supported programming languages and the annotations used for auto-instrumentation.

Java

To enable auto-instrumentation for a Java application, add the following annotation to your Pod specification:
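The annotation itself is not shown here; the OTel Operator's Java injection annotation, pointed at the default Instrumentation CR, is:

```yaml
# Pod template metadata (e.g. spec.template.metadata in a Deployment)
annotations:
  instrumentation.opentelemetry.io/inject-java: "mw-agent-ns/mw-autoinstrumentation"
```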

Node.js

To enable auto-instrumentation for a Node.js application, add the following annotation to your Pod specification:
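For Node.js, the corresponding Operator annotation is:

```yaml
# Pod template metadata (e.g. spec.template.metadata in a Deployment)
annotations:
  instrumentation.opentelemetry.io/inject-nodejs: "mw-agent-ns/mw-autoinstrumentation"
```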

Python

To enable auto-instrumentation for a Python application, add the following annotation to your Pod specification:
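For Python, the corresponding Operator annotation is:

```yaml
# Pod template metadata (e.g. spec.template.metadata in a Deployment)
annotations:
  instrumentation.opentelemetry.io/inject-python: "mw-agent-ns/mw-autoinstrumentation"
```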

.NET

To enable auto-instrumentation for a .NET application and specify the runtime identifier (RID), use the following annotations:

For Linux glibc based images (default):
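For glibc-based images, the Operator's .NET injection annotation together with the runtime identifier annotation looks like:

```yaml
# Pod template metadata (e.g. spec.template.metadata in a Deployment)
annotations:
  instrumentation.opentelemetry.io/inject-dotnet: "mw-agent-ns/mw-autoinstrumentation"
  instrumentation.opentelemetry.io/otel-dotnet-auto-runtime: "linux-x64"
```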

For Linux musl based images:
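For musl-based images (such as Alpine), the runtime identifier changes to linux-musl-x64:

```yaml
# Pod template metadata (e.g. spec.template.metadata in a Deployment)
annotations:
  instrumentation.opentelemetry.io/inject-dotnet: "mw-agent-ns/mw-autoinstrumentation"
  instrumentation.opentelemetry.io/otel-dotnet-auto-runtime: "linux-musl-x64"
```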

Go

To enable auto-instrumentation for a Go application, you need to set the path to the executable and ensure the container has elevated permissions. Use the following annotations in the Pod specifications:
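For Go, the Operator needs both the injection annotation and the path to the target executable (the path below is a placeholder you must replace):

```yaml
# Pod template metadata (e.g. spec.template.metadata in a Deployment)
annotations:
  instrumentation.opentelemetry.io/inject-go: "mw-agent-ns/mw-autoinstrumentation"
  # Absolute path of your application binary inside the container (placeholder)
  instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/path/to/your/executable"
```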

Additionally, set the required security context for the container in the Pod specification (spec.containers) as shown below:
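A sketch of the elevated security context the Go eBPF-based agent needs (the container name and image are hypothetical):

```yaml
spec:
  containers:
    - name: my-go-app        # hypothetical container name
      image: my-go-app:latest
      securityContext:
        # Go auto-instrumentation requires elevated permissions
        privileged: true
        runAsUser: 0
```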

Resource Attributes

If you want to add resource attributes to the traces generated by auto-instrumentation, you can add annotations in the following format to your Pod specification.
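The annotation format is not shown here; the Operator maps annotations with the resource.opentelemetry.io/ prefix to resource attributes:

```yaml
# Pod template metadata; adds the resource attribute your-key=your-value
annotations:
  resource.opentelemetry.io/your-key: "your-value"
```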

Here, your-key and your-value are the key and value of the resource attribute you want to add. You can add multiple resource attributes by adding multiple such annotations.

Advanced Configuration

The annotations described above for all the supported programming languages use mw-agent-ns/mw-autoinstrumentation as the annotation value. This is because the OTel Kubernetes Operator installed using Middleware's installation instructions creates the mw-autoinstrumentation Instrumentation CR in the mw-agent-ns namespace.

You can also create your own Instrumentation CRs in any namespace and reference them in the auto-instrumentation annotations.

Below are all the possible values for the auto-instrumentation annotation:

  • "true": Inject an Instrumentation CR from the current namespace. It is expected that you will have only one Instrumentation CR defined in the current namespace. The behavior could be unpredictable if you have multiple Instrumentation CRs defined in the current namespace.
  • "my-instrumentation": Inject a specific Instrumentation CR with name my-instrumentation from the current namespace.
  • "my-other-namespace/my-instrumentation": Inject a specific Instrumentation CR with name my-instrumentation from namespace my-other-namespace. This is the option used in all the examples above, with Instrumentation CR name mw-autoinstrumentation and namespace mw-agent-ns.
  • "false": Do not inject an Instrumentation CR. This is useful for temporarily stopping instrumentation.

When using Pod-based workloads, such as Deployments or StatefulSets, make sure to add the annotation to the Pod template section (spec.template.metadata.annotations) and not to the Deployment or StatefulSet metadata section (metadata.annotations).
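The correct placement can be sketched in a minimal Deployment (names and image are hypothetical; the Java annotation stands in for any supported language):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # hypothetical
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:           # correct location: Pod template, not Deployment metadata
        instrumentation.opentelemetry.io/inject-java: "mw-agent-ns/mw-autoinstrumentation"
    spec:
      containers:
        - name: my-app
          image: my-app:latest
```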

Explore your data on Middleware

Once auto-instrumentation is complete and the relevant Pods have restarted, data should start flowing to Middleware for any new requests made to your applications. You can explore APM trace data in the APM section of your Middleware account.

Uninstall

Bash

Uninstall cert-manager and OTel Kubernetes Operator
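The original commands are not shown here. If the components were installed from the upstream release manifests (as sketched in the Installation section; the cert-manager version is an assumption), they can be removed with:

```shell
# Remove the OTel Kubernetes Operator
kubectl delete -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml

# Remove cert-manager (use the same version you installed)
kubectl delete -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml
```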

Uninstall only OTel Kubernetes Operator

If you wish to keep cert-manager and only uninstall the OTel Kubernetes Operator, issue the command below:
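Assuming the Operator was installed from the upstream release manifest, as sketched earlier:

```shell
# Remove only the OTel Kubernetes Operator; cert-manager is left in place
kubectl delete -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
```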

Helm

Uninstall cert-manager and OTel Kubernetes Operator

This will remove the OTel Kubernetes Operator AND cert-manager (including any cert-manager instance you installed previously) from your cluster. Ensure that this is what you want to do.
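Assuming the release names and namespaces from the upstream-chart sketch in the Installation section:

```shell
# Remove the OTel Kubernetes Operator release
helm uninstall opentelemetry-operator -n opentelemetry-operator-system

# Remove the cert-manager release
helm uninstall cert-manager -n cert-manager
```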

Uninstall Only OTel Kubernetes Operator

If you wish to keep cert-manager and only uninstall the OTel Kubernetes Operator, issue the command below:
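Assuming the release name and namespace from the upstream-chart sketch:

```shell
# Remove only the OTel Kubernetes Operator release; cert-manager is left in place
helm uninstall opentelemetry-operator -n opentelemetry-operator-system
```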

Troubleshooting

Google Kubernetes Engine

If you are using a GKE private cluster, you will need to add a firewall rule that allows the GKE control plane CIDR block to access TCP port 9443 on worker nodes.

Use the command below to find the GKE control plane CIDR block:
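The original command is not shown here; the control plane CIDR of a private cluster can be read from the cluster description:

```shell
# Prints the control plane (master) CIDR block of a private GKE cluster
gcloud container clusters describe <CLUSTER_NAME> \
  --region <REGION> \
  --format="value(privateClusterConfig.masterIpv4CidrBlock)"
```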

  • Replace <CLUSTER_NAME> with the name of your cluster.
  • Replace <REGION> with the region of your cluster.

For example,
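With hypothetical values for the cluster name and region:

```shell
# Cluster name and region below are illustrative
gcloud container clusters describe my-private-cluster \
  --region us-central1 \
  --format="value(privateClusterConfig.masterIpv4CidrBlock)"
```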

Then you can add a firewall rule allowing ingress from this IP range to TCP port 9443 using the command below:
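The original command is not shown here; a sketch (the rule name is arbitrary, and the placeholders are filled in as described below):

```shell
# Allow the GKE control plane to reach the Operator webhook on worker nodes
gcloud compute firewall-rules create allow-otel-operator-webhook \
  --direction INGRESS \
  --allow tcp:9443 \
  --source-ranges <GKE_CONTROL_PLANE_CIDR> \
  --target-tags <GKE_CONTROL_PLANE_TAG>
```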

<GKE_CONTROL_PLANE_CIDR> and <GKE_CONTROL_PLANE_TAG> can be found by following the steps in the GKE firewall docs.

More information can be found in the Official GCP Documentation. See the GKE documentation on adding rules and the Kubernetes issue for more detail.

Additional Resources