Kubernetes Auto-instrumentation
The OpenTelemetry (OTel) Kubernetes Operator supports injecting and configuring auto-instrumentation libraries for .NET, Java, Node.js, Python, and Go services.
Middleware Application Monitoring is built on the OpenTelemetry Operator for Kubernetes and can be used to auto-instrument applications written in any of the supported languages.
Prerequisites
1 Kubernetes Version
Kubernetes version 1.21.0 or above is required. Check your version with the following command:
Shell
kubectl version
2 Middleware Kubernetes Agent
Install the Middleware Kubernetes agent using these instructions. Verify that the agent pods are running with the following command:
Shell
kubectl get pods -n mw-agent-ns
Install OTel Kubernetes Operator
The OTel Kubernetes Operator should be installed after the Middleware Kubernetes Agent.
In most cases, you will need to install cert-manager along with the Operator, unless you already have cert-manager installed or have an alternative method of generating certificates in your Kubernetes cluster.
Bash
With cert-manager
MW_INSTALL_CERT_MANAGER=true MW_API_KEY="<MW_API_KEY>" bash -c "$(curl -L https://install.middleware.io/scripts/mw-kube-auto-instrumentation-install.sh)"
Without cert-manager
MW_API_KEY="<MW_API_KEY>" bash -c "$(curl -L https://install.middleware.io/scripts/mw-kube-auto-instrumentation-install.sh)"
Helm
With cert-manager
helm repo add jetstack https://charts.jetstack.io --force-update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.14.5 --set installCRDs=true
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install opentelemetry-operator open-telemetry/opentelemetry-operator --namespace opentelemetry-operator-system --create-namespace --values https://install.middleware.io/manifests/autoinstrumentation/helm-values.yaml
kubectl apply -f https://install.middleware.io/manifests/autoinstrumentation/mw-otel-auto-instrumentation.yaml
Without cert-manager
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install opentelemetry-operator open-telemetry/opentelemetry-operator --namespace opentelemetry-operator-system --create-namespace --values https://install.middleware.io/manifests/autoinstrumentation/helm-values.yaml
kubectl apply -f https://install.middleware.io/manifests/autoinstrumentation/mw-otel-auto-instrumentation.yaml
Instrument Kubernetes Applications
Default Configuration
In order to auto-instrument your applications, the OTel Kubernetes Operator needs to know which Kubernetes Pods to instrument and which automatic instrumentation configuration, called Instrumentation Custom Resource (CR), to use for those Pods.
The OTel Kubernetes Operator installation steps described in the Installation section also install the default Instrumentation CR, called mw-autoinstrumentation, in the mw-agent-ns namespace.
To confirm that the Instrumentation CR is installed, issue the following command:
kubectl get otelinst mw-autoinstrumentation -n mw-agent-ns
The Instrumentation CR looks like the following:
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: mw-autoinstrumentation
  namespace: mw-agent-ns
spec:
  exporter:
    endpoint: http://mw-service.mw-agent-ns:9319
  propagators:
    - tracecontext
In the above Instrumentation CR, OTEL_EXPORTER_OTLP_ENDPOINT is set to the Middleware Kubernetes Agent service endpoint. The agent accepts OTLP data on TCP port 9319 (gRPC) or TCP port 9320 (HTTP).
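For example, to point the exporter at the OTLP/HTTP port instead of gRPC, a minimal sketch of the same Instrumentation CR could look like the following (this assumes the default mw-service name and mw-agent-ns namespace; whether a given language SDK sends gRPC or HTTP to this endpoint depends on its OTLP exporter defaults):

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: mw-autoinstrumentation
  namespace: mw-agent-ns
spec:
  exporter:
    # 9320 is the Middleware agent's OTLP/HTTP port; 9319 is gRPC
    endpoint: http://mw-service.mw-agent-ns:9320
  propagators:
    - tracecontext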
This Instrumentation CR is referenced in annotations added to Pod specifications. The annotation to use depends on the programming language of the process running inside the Pod.
The following annotations SHOULD NOT be configured in the metadata section of the Deployment, DaemonSet, or StatefulSet definitions themselves, but rather in the metadata section of the Pod specifications within these objects.
Below is the list of supported programming languages and the annotations used for auto-instrumentation.
Java
To enable auto-instrumentation for a Java application, add the following annotation to your Pod specification:
instrumentation.opentelemetry.io/inject-java: "mw-agent-ns/mw-autoinstrumentation"
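For illustration, here is a minimal sketch of a Deployment with the annotation in the correct place; the application name, image, and port are hypothetical placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-java-app                  # hypothetical application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-java-app
  template:
    metadata:
      # The annotation belongs on the Pod template, not on the Deployment metadata
      annotations:
        instrumentation.opentelemetry.io/inject-java: "mw-agent-ns/mw-autoinstrumentation"
      labels:
        app: my-java-app
    spec:
      containers:
        - name: my-java-app
          image: my-registry/my-java-app:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080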
Node.js
To enable auto-instrumentation for a Node.js application, add the following annotation to your Pod specification:
instrumentation.opentelemetry.io/inject-nodejs: "mw-agent-ns/mw-autoinstrumentation"
Python
To enable auto-instrumentation for a Python application, add the following annotation to your Pod specification:
instrumentation.opentelemetry.io/inject-python: "mw-agent-ns/mw-autoinstrumentation"
.NET
To enable auto-instrumentation for a .NET application and specify the runtime identifier (RID), use the following annotations:
For Linux glibc-based images (default):
instrumentation.opentelemetry.io/inject-dotnet: "mw-agent-ns/mw-autoinstrumentation"
instrumentation.opentelemetry.io/otel-dotnet-auto-runtime: "linux-x64" # Optional as it's the default value
For Linux musl-based images:
instrumentation.opentelemetry.io/inject-dotnet: "mw-agent-ns/mw-autoinstrumentation"
instrumentation.opentelemetry.io/otel-dotnet-auto-runtime: "linux-musl-x64"
Go
To enable auto-instrumentation for a Go application, you need to set the path to the executable and ensure the container has elevated permissions. Use the following annotations in the Pod specifications:
instrumentation.opentelemetry.io/inject-go: "mw-agent-ns/mw-autoinstrumentation"
instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/path/to/container/executable"
Additionally, set the required security context for the container in the Pod specification (spec.containers) as shown below:
securityContext:
  privileged: true
  runAsUser: 0
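Putting this together, a minimal sketch of a Pod specification for a Go application could look like the following; the Pod name, image, and executable path are hypothetical placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: my-go-app                    # hypothetical Pod name
  annotations:
    instrumentation.opentelemetry.io/inject-go: "mw-agent-ns/mw-autoinstrumentation"
    instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/app/server"   # hypothetical executable path
spec:
  containers:
    - name: my-go-app
      image: my-registry/my-go-app:1.0.0   # hypothetical image
      securityContext:
        # Elevated permissions are required by the Go auto-instrumentation agent
        privileged: true
        runAsUser: 0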
Resource Attributes
If you want to add resource attributes to the traces generated by auto-instrumentation, you can add annotations in the following format to your Pod specification.
resource.opentelemetry.io/your-key: "your-value"
Here, your-key and your-value are the key and value of the resource attribute you want to add. You can add multiple resource attributes by adding multiple such annotations.
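For example, to attach a service namespace and a deployment environment to every trace, you could add annotations like the following to the Pod specification; the attribute keys and values here are hypothetical:

metadata:
  annotations:
    resource.opentelemetry.io/service.namespace: "checkout"        # hypothetical value
    resource.opentelemetry.io/deployment.environment: "staging"    # hypothetical value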
Advanced Configuration
The annotations described above for all the supported programming languages used mw-agent-ns/mw-autoinstrumentation as the value for the annotation. This is because the OTel Kubernetes Operator installed using Middleware's installation instructions creates the mw-autoinstrumentation Instrumentation CR in the mw-agent-ns namespace.
You can also create your own Instrumentation CR in any namespace and reference it in the auto-instrumentation annotations.
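For instance, a minimal sketch of a custom Instrumentation CR in a hypothetical my-other-namespace namespace, still exporting to the Middleware agent service, could look like this:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation          # hypothetical CR name
  namespace: my-other-namespace     # hypothetical namespace
spec:
  exporter:
    endpoint: http://mw-service.mw-agent-ns:9319
  propagators:
    - tracecontext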
Below are all the possible values for the auto-instrumentation annotation:
- "true": Inject the Instrumentation CR from the current namespace. It is expected that you will have only one Instrumentation CR defined in the current namespace; the behavior could be unpredictable if there are multiple.
- "my-instrumentation": Inject the Instrumentation CR named my-instrumentation from the current namespace.
- "my-other-namespace/my-instrumentation": Inject the Instrumentation CR named my-instrumentation from the namespace my-other-namespace. This is the form used in all the examples above, with Instrumentation CR name mw-autoinstrumentation and namespace mw-agent-ns.
- "false": Do not inject an Instrumentation CR. This is useful for temporarily disabling instrumentation.
When using Pod-based workloads such as Deployment or StatefulSet, make sure to add the annotation to the Pod template section (spec.template.metadata.annotations) and not to the Deployment or StatefulSet metadata section (metadata.annotations).
Explore your data on Middleware
Once auto-instrumentation is in place and the relevant Pods have restarted, data should start flowing to Middleware for any new requests made to your applications. You can explore APM trace data in the APM section of your Middleware account.
Uninstall
Bash
Uninstall cert-manager and OTel Kubernetes Operator
MW_UNINSTALL_CERT_MANAGER=true MW_API_KEY="<MW_API_KEY>" bash -c "$(curl -L https://install.middleware.io/scripts/mw-kube-auto-instrumentation-uninstall.sh)"
Uninstall only OTel Kubernetes Operator
If you wish to keep cert-manager and only uninstall the OTel Kubernetes Operator, issue the command below:
MW_API_KEY="<MW_API_KEY>" bash -c "$(curl -L https://install.middleware.io/scripts/mw-kube-auto-instrumentation-uninstall.sh)"
Helm
Uninstall cert-manager and OTel Kubernetes Operator
This will remove the OTel Kubernetes Operator AND cert-manager (including a cert-manager instance you installed previously) from your cluster. Ensure that this is what you want to do.
helm uninstall opentelemetry-operator --namespace opentelemetry-operator-system
helm uninstall cert-manager --namespace cert-manager
Uninstall Only OTel Kubernetes Operator
If you wish to keep cert-manager and only uninstall the OTel Kubernetes Operator, issue the command below:
helm uninstall opentelemetry-operator --namespace opentelemetry-operator-system
Troubleshooting
Google Kubernetes Engine
If you are using a GKE private cluster, you will need to add a firewall rule that allows the GKE control plane CIDR block access to TCP port 9443 on worker nodes.
Use the command below to find the GKE control plane CIDR block:
gcloud container clusters describe <CLUSTER_NAME> \
  --region <REGION> \
  --format="value(privateClusterConfig.masterIpv4CidrBlock)"
- Replace <CLUSTER_NAME> with the name of your cluster.
- Replace <REGION> with the region of your cluster.
For example,
gcloud container clusters describe demo-cluster --region us-central1-c --format="value(privateClusterConfig.masterIpv4CidrBlock)"
# Example output: 172.16.0.0/28
Then add a firewall rule that allows ingress from this IP range to TCP port 9443 using the command below:
gcloud compute firewall-rules create cert-manager-9443 \
  --source-ranges <GKE_CONTROL_PLANE_CIDR> \
  --target-tags <GKE_CONTROL_PLANE_TAG> \
  --allow TCP:9443
<GKE_CONTROL_PLANE_CIDR> and <GKE_CONTROL_PLANE_TAG> can be found by following the steps in the GKE firewall docs.
More information can be found in the official GCP documentation. See the GKE documentation on adding firewall rules and the related Kubernetes issue for more detail.