Installing the Datadog Agent in Middleware (Dual Shipping)
Middleware supports the ingestion of APM traces, metrics, and logs from the Datadog Agent via its Dual Shipping feature.
This feature is only supported for Datadog's Linux, Kubernetes, and Amazon ECS (EC2) Agents.
Prerequisites
1. Traces: Datadog Agent version 6.7.0 or above
2. HTTP Logs: Datadog Agent version 6.13 or above
3. Metrics: Datadog Agent version 6.17 or above
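To confirm which Agent version is currently installed on a Linux host, you can run the Agent's version subcommand (part of the standard Agent CLI):
sudo datadog-agent version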
Linux Agent
Add Trace Configuration
Add the following additional_endpoints within the apm_config section in your datadog.yaml file:
If you already have an apm_config section in your datadog.yaml file, add the additional endpoint to that same section. The Datadog Agent does not support multiple apm_config sections.
The Linux Datadog Agent is configured by default in the following YAML file: /etc/datadog-agent/datadog.yaml
apm_config:
enabled: true
additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"Add Log Configuration
Add Log Configuration
Add the following additional_endpoints within the logs_config section in your datadog.yaml file:
Unlike the APM trace endpoint, the Host field takes only your hostname (<uid>.middleware.io) without the protocol scheme (https://).
logs_config:
use_http: true
additional_endpoints:
- api_key: "<MW_API_KEY>"
Host: "<uid>.middleware.io"
Port: 443
is_reliable: true
Add Metric Configuration
Add the following top-level additional_endpoints section to your datadog.yaml file:
additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"Unlike traces and logs which are under apm_config and logs_config sections, the additional_endpoints section for metrics is at the top level (same level as apm_config and logs_config).
To get visibility into the processes running on your infrastructure, enable live process collection under the process_config section and add additional_endpoints as shown below:
process_config:
process_collection:
enabled: true
additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"Restart Datadog Agent
Run the following command in your terminal to restart the Datadog Agent:
sudo systemctl restart datadog-agent
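After the restart, you can confirm the Agent is running and healthy with its status subcommand (part of the standard Agent CLI; the exact output sections vary by Agent version):
sudo datadog-agent status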
Kubernetes Agent
The Datadog Kubernetes Agent supports Helm and Operator installation methods. Middleware can ingest traces, metrics, and logs from the Datadog Agent installed using either method.
Helm chart
If you have installed the Datadog Agent using their Helm Chart, you must add the following section in your datadog-values.yaml file that you created in Step 3 of their guide:
You can remove either the logs_config or the apm_config section from the file below, depending on which data you would like to send to Middleware.
agents:
useConfigMap: true
customAgentConfig:
logs_config:
container_collect_all: true
use_http: true
additional_endpoints:
- api_key: "<MW_API_KEY>"
Host: "<uid>.middleware.io"
Port: 443
is_reliable: true
apm_config:
additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"
# collects metrics
additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"
# collects kubernetes object data
orchestrator_explorer:
orchestrator_additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"
# collects process and container data
process_config:
additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"Below is an example of the full datadog-values.yaml file that will send logs,metrics and APM traces to the Middleware platform:
datadog:
site: "us5.datadoghq.com"
clusterName: <your cluster name>
env:
- name: DD_CHECKS_TAG_CARDINALITY
value: "orchestrator"
- name: DD_HOSTNAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
kubelet:
tlsVerify: false
apiKeyExistingSecret: datadog-secret
processAgent:
enabled: true
processCollection: true
cluster-agent:
enabled: true
orchestratorExplorer:
enabled: true
kubeStateMetricsCore:
enabled: true
metricsProvider:
enabled: true
#collectApiServicesMetrics: false
containerImageCollection:
enabled: true
clusterChecks:
enabled: true
useClusterChecksRunners: true
targetSystem: linux
logs:
enabled: true
containerCollectAll: true
# Essential configuration for Kubernetes and container data correlation
# This configuration enables the Agent to collect metrics from the kubelet and container integrations and
# associate them with the correct Kubernetes cluster.
confd:
kubelet.yaml: |-
ad_identifiers:
- _kubelet
instances:
- tags:
- dd_cluster_name:<your cluster name>
container.yaml: |-
ad_identifiers:
- _container
instances:
- tags:
- dd_cluster_name:<your cluster name>
agents:
env:
- name: DD_HOSTNAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
useConfigMap: true
customAgentConfig:
logs_config:
container_collect_all: true
use_http: true
additional_endpoints:
- api_key: "<MW_API_KEY>"
Host: "<uid>.middleware.io"
Port: 443
is_reliable: true
apm_config:
additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"
# collects metrics
additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"
# collects kubernetes object data
orchestrator_explorer:
orchestrator_additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"
# collects process and container data
process_config:
additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"
clusterAgent:
useConfigMap: true
enabled: true
# Kubernetes State related metrics
datadog_cluster_yaml:
orchestrator_explorer:
orchestrator_additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"
env:
- name: DD_ADDITIONAL_ENDPOINTS
value: '{"https://<uid>.middleware.io": ["<MW_API_KEY>"]}'
- name: DD_HOSTNAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
For cloud providers such as Amazon EKS and Azure AKS, you must override the DD_HOSTNAME environment variable to ensure consistent and stable host naming. If it is not overridden, host-level metrics can become fragmented.
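Once your datadog-values.yaml is updated, apply it by upgrading the Helm release. The sketch below assumes a release named datadog-agent installed from the datadog/datadog chart; substitute your own release name and namespace:
helm upgrade --install datadog-agent datadog/datadog -f datadog-values.yaml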
Kubernetes Operator
If you have installed the Datadog Agent using their Kubernetes Operator, you must add the following section to the datadog-agent.yaml file that you created in Step 3 of their guide.
The specification below only enables logs and traces. To enable metrics, traces, and logs, refer to the complete YAML configuration further below.
spec:
...
override:
nodeAgent:
customConfigurations:
datadog.yaml:
configData: |
logs_enabled: true
apm_config:
enabled: true
additional_endpoints:
- "https://<uid>.middleware.io":
- "<MW_API_KEY>"
logs_config:
use_http: true
container_collect_all: true
additional_endpoints:
- api_key: "<MW_API_KEY>"
Host: "<uid>.middleware.io"
Port: 443
is_reliable: true
Below is an example of the full datadog-agent.yaml file that will send metrics, logs, and APM traces to the Middleware platform:
kind: DatadogAgent
apiVersion: datadoghq.com/v2alpha1
metadata:
name: datadog
spec:
global:
clusterName: "your-cluster-name"
site: us5.datadoghq.com
credentials:
apiSecret:
secretName: datadog-secret
keyName: api-key
kubelet:
tlsVerify: false
features:
orchestratorExplorer:
enabled: true
scrubContainers: true
kubeStateMetricsCore:
enabled: true
processDiscovery:
enabled: true
oomKill:
enabled: true
liveContainerCollection:
enabled: true
eventCollection:
collectKubernetesEvents: true
logCollection:
enabled: true
containerCollectAll: true
liveProcessCollection:
enabled: true
apm:
enabled: true
hostPortConfig:
enabled: true
override:
clusterAgent:
env:
- name: DD_CLUSTER_NAME # Add explicit cluster name env var
value: "your-cluster-name"
- name: DD_HOSTNAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: DD_ADDITIONAL_ENDPOINTS
value: '{"https://<uid>.middleware.io": ["<MW_API_KEY>"]}'
- name: DD_ORCHESTRATOR_EXPLORER_ORCHESTRATOR_ADDITIONAL_ENDPOINTS
value: '{"https://<uid>.middleware.io": ["<MW_API_KEY>"]}'
nodeAgent:
env:
- name: DD_ORCHESTRATOR_EXPLORER_ORCHESTRATOR_ADDITIONAL_ENDPOINTS
value: '{"https://<uid>.middleware.io": ["<MW_API_KEY>"]}'
- name: DD_HOSTNAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
customConfigurations:
datadog.yaml:
configData: |-
additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"
logs_config:
container_collect_all: true
use_http: true
additional_endpoints:
- api_key: "<MW_API_KEY>"
Host: "<uid>.middleware.io"
Port: 443
is_reliable: true
apm_config:
additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"
process_config:
additional_endpoints:
"https://<uid>.middleware.io":
- "<MW_API_KEY>"
extraConfd:
configDataMap:
kubelet.yaml: |-
ad_identifiers:
- _kubelet
instances:
- tags:
- dd_cluster_name:your-cluster-name
container.yaml: |-
ad_identifiers:
- _container
instances:
- tags:
- dd_cluster_name:your-cluster-name
Middleware only supports the datadoghq.com/v2alpha1 apiVersion for the DatadogAgent resource.
To find out the resource version currently in use by the Datadog Kubernetes Operator, issue the command below:
kubectl api-resources --api-group=datadoghq.com
The output of the above command should look like the following:
NAME            SHORTNAMES   APIVERSION               NAMESPACED   KIND
datadogagents   dd           datadoghq.com/v2alpha1   true         DatadogAgent
...
...
The APIVERSION column should show datadoghq.com/v2alpha1 for the Datadog Agent.
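After editing datadog-agent.yaml, reapply the DatadogAgent resource so the Operator rolls out the new configuration (a minimal sketch; adjust the file name and add a namespace flag if your installation requires one):
kubectl apply -f datadog-agent.yaml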
Amazon ECS on EC2 Cluster
Middleware can ingest logs and APM traces from the Datadog Agent running on ECS clusters that use EC2 instances.
For more information on ingesting logs and APM traces using Amazon ECS with Datadog, navigate here.
Add Trace Config
Add the following JSON snippet to your ECS task definition. The environment variables below are required to dual ship APM traces from the Datadog Agent.
The DD_APM_ENABLED and DD_APM_ADDITIONAL_ENDPOINTS environment variables are in addition to existing environment variables (e.g. DD_API_KEY, DD_SITE, etc.) that may already be defined in your Datadog Agent's task definition.
"environment": [
...
{
"name": "DD_APM_ENABLED",
"value": "true"
},
{
"name": "DD_APM_ADDITIONAL_ENDPOINTS",
"value": "{\"https://<uid>.middleware.io\": [\"<MW_API_KEY>\"]}"
}
]
Add Log Config
Add the following JSON snippet to your ECS task definition. The environment variables below are required to dual ship logs from the Datadog Agent.
The four environment variables below are in addition to existing environment variables (e.g. DD_API_KEY, DD_SITE, etc.) that may already be defined in your Datadog Agent's task definition.
"environment": [
{
"name": "DD_LOGS_CONFIG_USE_HTTP",
"value": "true"
},
{
"name": "DD_LOGS_ENABLED",
"value": "true"
},
{
"name": "DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL",
"value": "true"
},
{
"name": "DD_LOGS_CONFIG_ADDITIONAL_ENDPOINTS",
"value": "[{\"api_key\": \"<MW_API_KEY>\", \"Host\": \"<uid>.middleware.io\", \"Port\": 443, \"is_reliable\": true}]"
}
]
Update Agent Service
Once you have updated the ECS task definition for the Datadog Agent, update the relevant Datadog Agent Service to redeploy the agent with your new configuration. APM traces and logs will start flowing into your Middleware account.
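As a sketch of that step using the AWS CLI (the JSON file name, cluster, service, and task definition names below are placeholders, not values from this guide), register the revised task definition and then force a new deployment of the Agent service:
aws ecs register-task-definition --cli-input-json file://datadog-agent-task.json
aws ecs update-service --cluster <your-cluster> --service <datadog-agent-service> --task-definition <datadog-agent-task> --force-new-deployment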
FAQ
How do I stop sending APM traces and logs to Datadog and only send them to Middleware?
If you want to stop sending APM traces and logs to Datadog, you can change the api_key (or the environment variable DD_API_KEY) in the /etc/datadog-agent/datadog.yaml file to something invalid.
The Datadog Agent does not work if you comment out your api_key or set it to an empty value.
Below is an example of setting your api_key and site to invalid values:
api_key: usingmiddleware
site: example.com
Why can't I see my APM traces and logs on Middleware?
- Execute the following command to ensure the Datadog Linux Agent is in an active state:
sudo systemctl status datadog-agent
- Verify that your Middleware API key and target are correct. The apm_config target is https://<uid>.middleware.io, whereas the logs_config Host field value is <uid>.middleware.io (without https://).
- Make sure you only have one apm_config and one logs_config section in the following configuration file: /etc/datadog-agent/datadog.yaml
- Check your Datadog Kubernetes Agent. Ensure the datadog-agent and cluster-agent pods are operational, with a liveness and readiness check for both pods (see the example commands after this list).
- Check your Kubernetes Helm chart installation by reviewing datadog-values.yaml and ensure that the agents section is at the same level as the datadog section. The agents section should NOT be inside the datadog section.
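A minimal way to perform the pod checks mentioned above (the pod name and namespace are placeholders; adjust them to match your installation):
kubectl get pods --all-namespaces | grep datadog
kubectl describe pod <datadog-agent-pod-name> -n <namespace>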
Need assistance or want to learn more about Middleware? Contact our support team at [email protected] or join our Slack channel.