Custom Metrics

Send your own business or infrastructure metrics to Middleware and visualize them alongside built-in integrations. You can:

  • Post OTLP/HTTP (JSON) from the command line (cURL), or
  • Emit metrics from your application using the OpenTelemetry Python SDK (OTLP gRPC).

Use resource attributes to decide where data is stored: either into an existing dataset (e.g., Host, Kubernetes) or into the Custom Metrics dataset.

Prerequisites

  • Your Middleware workspace URL (e.g., https://<YOUR_WORKSPACE>.middleware.io).
  • A Middleware API key with permission to ingest metrics.
  • Outbound network access from the sender to your workspace URL.

Method 1: cURL

What this does

This method sends OTLP/HTTP JSON to POST /v1/metrics. The payload contains:

  • A resource (where the series belongs)
  • One or more metrics (name, description, unit, type)
  • Data points (value + attributes/dimensions + timestamp)

Timestamps use time_unix_nano: nanoseconds since Unix epoch.
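In Python, for example, `time.time_ns()` returns exactly this value:

```python
import time

# time_unix_nano is nanoseconds since the Unix epoch; in Python this is
# what time.time_ns() returns (a 19-digit integer at current dates).
ts = time.time_ns()
print(ts)
```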

Step-by-step

  1. Set your workspace URL and API key (use env vars or a secret manager in production).
  2. Prepare the JSON payload describing your metric(s).
  3. POST to your workspace’s /v1/metrics endpoint.

Example:

API_KEY="<YOUR_API_KEY>"
MW_ENDPOINT="https://<YOUR_WORKSPACE>.middleware.io:443"

curl -X POST "$MW_ENDPOINT/v1/metrics" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "Authorization: $API_KEY" \
  -d @- << 'EOF'
{
  "resource_metrics": [
    {
      "resource": {
        "attributes": [
          {
            "key": "mw.resource_type",
            "value": { "string_value": "custom" }
          }
        ]
      },
      "scope_metrics": [
        {
          "metrics": [
            {
              "name": "swap-usage",
              "description": "SWAP usage",
              "unit": "Bytes",
              "gauge": {
                "data_points": [
                  {
                    "attributes": [
                      {
                        "key": "device",
                        "value": { "string_value": "nvme0n1p4" }
                      }
                    ],
                    "time_unix_nano": 1758473263000000000,
                    "asInt": 4000500678
                  }
                ]
              }
            }
          ]
        }
      ]
    }
  ]
}
EOF

Why these fields matter

  • mw.resource_type=custom: Stores data in the Custom Metrics dataset (see mapping options below).
  • name / description / unit: Improves discoverability and correct charting (e.g., Bytes, ms, 1).
  • gauge with asInt/asDouble: Represents a point-in-time measurement (use sum for counters, histogram for distributions).
  • attributes (e.g., device): Dimensions you can group/filter by in dashboards and alerts.
  • time_unix_nano: The exact time of the measurement (nanoseconds).
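As a sketch of the `sum` variant mentioned above, here is a cumulative, monotonic counter data point built as a Python dict in the same snake_case shape as the gauge payload. The metric name `requests-total`, the `route` attribute, and the values are illustrative, not Middleware-defined:

```python
import json

# Sketch: a counter ("sum") metric in OTLP/HTTP JSON shape.
metric = {
    "name": "requests-total",
    "description": "Total requests served",
    "unit": "1",
    "sum": {
        "aggregation_temporality": 2,  # 2 = cumulative (since process start)
        "is_monotonic": True,          # counters only go up
        "data_points": [
            {
                "attributes": [
                    {"key": "route", "value": {"string_value": "/api"}}
                ],
                "time_unix_nano": 1758473263000000000,
                "asInt": 1042,
            }
        ],
    },
}
print(json.dumps(metric, indent=2))
```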

Method 2: OpenTelemetry Python SDK

What this does

Your app uses OpenTelemetry to create instruments (counters, histograms, etc.). A periodic reader exports metrics to Middleware over OTLP gRPC, including any attached attributes (dimensions).

Install Required Packages:

pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp

Use the following template to send custom metrics:

from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
import time

# Configure the OTLP exporter to send metrics to Middleware
exporter = OTLPMetricExporter(
    endpoint="https://<YOUR_WORKSPACE>.middleware.io",
    headers={"authorization": "<YOUR_API_KEY>"},
)

metric_reader = PeriodicExportingMetricReader(exporter)
metrics.set_meter_provider(MeterProvider(metric_readers=[metric_reader]))

# Get a meter
meter = metrics.get_meter(__name__)

# Define metrics
counter = meter.create_counter(
    name="custom_counter",
    description="Counts something custom",
    unit="1",
)

histogram = meter.create_histogram(
    name="custom_histogram",
    description="Records histogram data",
    unit="ms",
)

# Record metrics
while True:
    counter.add(1, attributes={"environment": "production", "region": "us-east-1"})
    histogram.record(100, attributes={"operation": "database_query"})
    time.sleep(5)

Here:

  • Endpoint: for OTLP gRPC use the workspace base URL (no /v1/metrics path).
  • Headers: include your API key as authorization.
  • Attributes: add stable dimensions you’ll filter/group by later (environment, region, service, etc.).
  • Export cadence: the PeriodicExportingMetricReader batches and sends on an interval; keep the process running.

Ingest Into Existing Resources

If you want your custom data to live under an existing Middleware dataset, include the required resource attribute from the table below.

Example: to attach a metric to a host, add a host.id resource attribute in the request body.

Type             Resource Attribute Required   Data Will Be Stored To This Dataset
host             host.id                       Host Metrics
k8s.node         k8s.node.uid                  K8s Node Metrics
k8s.pod          k8s.pod.uid                   K8s Pod Metrics
k8s.deployment   k8s.deployment.uid            K8s Deployment Metrics
k8s.daemonset    k8s.daemonset.uid             ~
k8s.replicaset   k8s.replicaset.uid            ~
k8s.statefulset  k8s.statefulset.uid           ~
k8s.namespace    k8s.namespace.uid             ~
service          service.name                  ~
os               os.type                       ~
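As a sketch, a small helper can build the OTLP resource block for a target dataset using the attribute keys from the table above. The function name, the subset of mappings shown, and the id values (e.g., "i-0abc123") are illustrative:

```python
import json

# Sketch: build the OTLP "resource" block that routes a series into a
# given dataset. Keys follow the mapping table; values are placeholders.
def resource_for(dataset: str, value: str) -> dict:
    keys = {
        "host": "host.id",
        "k8s.pod": "k8s.pod.uid",
        "service": "service.name",
        "custom": "mw.resource_type",
    }
    return {
        "attributes": [
            {"key": keys[dataset], "value": {"string_value": value}}
        ]
    }

print(json.dumps(resource_for("host", "i-0abc123")))
```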

Ingest custom data

If your data doesn’t fit the existing types, send it to the Custom Metrics dataset:

mw.resource_type: custom

Any series with this resource attribute will appear under Custom Metrics.

Explore Data & Build Graphs

  1. Open Dashboards → add a new widget.
  2. Select the dataset: either Custom Metrics or the specific dataset you targeted (e.g., Host Metrics).
  3. Choose your metric (e.g., swap-usage, custom_counter, custom_histogram).
  4. Use attributes (device, environment, region, etc.) to filter or group your series.
  5. Save the widget and compose your dashboard.

Set up Alerts

  1. Create an alert and select the dataset/metric you’re sending.
  2. Define the condition (threshold/anomaly), evaluation window, and recipients.
  3. Use attribute filters to scope alerts (e.g., only environment=production).

Troubleshooting & Best Practices

  • Auth errors / no data: verify the Authorization header and the workspace URL.
  • Wrong dataset: double-check the resource attribute (e.g., mw.resource_type=custom vs host.id).
  • Timestamps off: make sure time_unix_nano is in nanoseconds and your sender’s clock is correct.
  • Dimension drift: keep attribute keys consistent (avoid mixing env and environment).
  • Secrets: don’t hard-code API keys; prefer environment variables or a secret manager.
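To guard against dimension drift, you can normalize attribute keys before recording. This is a sketch; the alias map and function name are illustrative:

```python
# Sketch: map stray attribute-key spellings onto your canonical names
# before recording, so "env" and "environment" don't become two series.
ALIASES = {"env": "environment", "region_name": "region"}

def normalize(attributes: dict) -> dict:
    return {ALIASES.get(k, k): v for k, v in attributes.items()}

print(normalize({"env": "production", "region": "us-east-1"}))
# -> {'environment': 'production', 'region': 'us-east-1'}
```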

OTLP/HTTP JSON field reference (at a glance)

Concept        Where to put it                 Example
Dataset tag    resource.attributes[]           mw.resource_type=custom, host.id=abc123
Metric name    metrics[].name                  swap-usage, custom_counter, latency
Description    metrics[].description           "SWAP usage"
Unit           metrics[].unit                  Bytes, ms, 1
Type           gauge / sum / histogram         match to your data shape
Value          asInt / asDouble                4000500678
Dimensions     data_points[].attributes[]      device=nvme0n1p4, environment=production
Timestamp      data_points[].time_unix_nano    1758473263000000000
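The field reference above can double as a pre-flight check before you POST. The following sketch (a hypothetical `check` helper, not a Middleware API) flags payloads that are missing a dataset tag, a metric name, or a data type:

```python
# Sketch: sanity-check an OTLP/HTTP JSON payload against the field
# reference above before sending it.
def check(payload: dict) -> list:
    problems = []
    for rm in payload.get("resource_metrics", []):
        if not rm.get("resource", {}).get("attributes"):
            problems.append("resource has no attributes (no dataset tag)")
        for sm in rm.get("scope_metrics", []):
            for m in sm.get("metrics", []):
                if not m.get("name"):
                    problems.append("metric missing name")
                if not any(t in m for t in ("gauge", "sum", "histogram")):
                    problems.append("metric missing a data type (gauge/sum/histogram)")
    return problems

print(check({"resource_metrics": [
    {"resource": {}, "scope_metrics": [{"metrics": [{"name": "swap-usage", "gauge": {}}]}]}
]}))
# -> ['resource has no attributes (no dataset tag)']
```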

Need assistance or want to learn more about Middleware? Contact our support team at [email protected] or join our Slack channel.