Custom Metrics
Send your own business or infrastructure metrics to Middleware and visualize them alongside built-in integrations. You can:
- Post OTLP/HTTP (JSON) from the command line (cURL), or
- Emit metrics from your application using the OpenTelemetry Python SDK (OTLP gRPC).
Use resource attributes to decide where data is stored: either into an existing dataset (e.g., Host, Kubernetes) or into the Custom Metrics dataset.
Prerequisites
- Your Middleware workspace URL (e.g., `https://<YOUR_WORKSPACE>.middleware.io`).
- A Middleware API key with permission to ingest metrics.
- Outbound network access from the sender to your workspace URL.
Methods
Method 1: cURL (OTLP/HTTP JSON)
What this does
This method sends OTLP/HTTP JSON to POST /v1/metrics. The payload contains:
- A resource (where the series belongs)
- One or more metrics (name, description, unit, type)
- Data points (value + attributes/dimensions + timestamp)
Timestamps use `time_unix_nano`: nanoseconds since the Unix epoch.
Step-by-step
- Set your workspace URL and API key (use env vars or a secret manager in production).
- Prepare the JSON payload describing your metric(s).
- POST to your workspace's `/v1/metrics` endpoint.
Example:
1API_KEY="<YOUR_API_KEY>"
2MW_ENDPOINT="https://<YOUR_WORKSPACE>.middleware.io:443"
3
4curl -X POST "$MW_ENDPOINT/v1/metrics" \
5 -H "Accept: application/json" \
6 -H "Content-Type: application/json" \
7 -H "Authorization: $API_KEY" \
8 -d @- << 'EOF'
9{
10 "resource_metrics": [
11 {
12 "resource": {
13 "attributes": [
14 {
15 "key": "mw.resource_type",
16 "value": { "string_value": "custom" }
17 }
18 ]
19 },
20 "scope_metrics": [
21 {
22 "metrics": [
23 {
24 "name": "swap-usage",
25 "description": "SWAP usage",
26 "unit": "Bytes",
27 "gauge": {
28 "data_points": [
29 {
30 "attributes": [
31 {
32 "key": "device",
33 "value": { "string_value": "nvme0n1p4" }
34 }
35 ],
36 "time_unix_nano": 1758473263000000000,
37 "asInt": 4000500678
38 }
39 ]
40 }
41 }
42 ]
43 }
44 ]
45 }
46 ]
47}
48EOF
Why these fields matter
- `mw.resource_type: custom`: stores data in the Custom Metrics dataset (see mapping options below).
- `name` / `description` / `unit`: improve discoverability and correct charting (e.g., `Bytes`, `ms`, `1`).
- `gauge` with `asInt`/`asDouble`: represents a point-in-time measurement (use `sum` for counters, `histogram` for distributions).
- `attributes` (e.g., `device`): dimensions you can group and filter by in dashboards and alerts.
- `time_unix_nano`: the exact time of the measurement, in nanoseconds since the Unix epoch.
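The cURL request above can also be assembled programmatically. As a minimal stdlib-only sketch (the workspace URL, API key, and metric values are placeholders), this builds the same gauge payload with the current time and posts it with `urllib`:

```python
import json
import time
import urllib.request

def build_gauge_payload(name, unit, value, attributes):
    """Build an OTLP/HTTP JSON body for a single gauge data point,
    tagged for the Custom Metrics dataset via mw.resource_type=custom."""
    return {
        "resource_metrics": [{
            "resource": {
                "attributes": [
                    {"key": "mw.resource_type", "value": {"string_value": "custom"}}
                ]
            },
            "scope_metrics": [{
                "metrics": [{
                    "name": name,
                    "description": name,
                    "unit": unit,
                    "gauge": {
                        "data_points": [{
                            "attributes": [
                                {"key": k, "value": {"string_value": v}}
                                for k, v in attributes.items()
                            ],
                            # current wall-clock time in nanoseconds since epoch
                            "time_unix_nano": time.time_ns(),
                            "asInt": value,
                        }]
                    },
                }]
            }],
        }]
    }

def send(endpoint, api_key, payload):
    """POST the payload to <endpoint>/v1/metrics."""
    req = urllib.request.Request(
        f"{endpoint}/v1/metrics",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "Authorization": api_key},
    )
    return urllib.request.urlopen(req)

payload = build_gauge_payload("swap-usage", "Bytes", 4000500678, {"device": "nvme0n1p4"})
# send("https://<YOUR_WORKSPACE>.middleware.io:443", "<YOUR_API_KEY>", payload)
```

The `send` call is left commented out so the sketch runs without network access; uncomment it with your real workspace URL and API key.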
Method 2: OpenTelemetry Python SDK (OTLP gRPC)
What this does
Your app uses OpenTelemetry to create instruments (counters, histograms, etc.). A periodic reader exports metrics to Middleware over OTLP gRPC, including any attached attributes (dimensions).
Install the required packages:

```shell
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
```
Use the template below to send custom metrics:

```python
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
import time

# Configure the OTLP exporter to send metrics to Middleware
exporter = OTLPMetricExporter(
    endpoint="https://<YOUR_WORKSPACE>.middleware.io",
    headers={"authorization": "<YOUR_API_KEY>"},
)

metric_reader = PeriodicExportingMetricReader(exporter)
metrics.set_meter_provider(MeterProvider(metric_readers=[metric_reader]))

# Get a meter
meter = metrics.get_meter(__name__)

# Define metrics
counter = meter.create_counter(
    name="custom_counter",
    description="Counts something custom",
    unit="1",
)

histogram = meter.create_histogram(
    name="custom_histogram",
    description="Records histogram data",
    unit="ms",
)

# Record metrics
while True:
    counter.add(1, attributes={"environment": "production", "region": "us-east-1"})
    histogram.record(100, attributes={"operation": "database_query"})
    time.sleep(5)
```
Here:
- Endpoint: for OTLP gRPC, use the workspace base URL (no `/v1/metrics` path).
- Headers: include your API key as `authorization`.
- Attributes: add stable dimensions you'll filter and group by later (`environment`, `region`, `service`, etc.).
- Export cadence: the `PeriodicExportingMetricReader` batches and sends on an interval; keep the process running.
Ingest Into Existing Resources
If you want your custom data to live under an existing Middleware dataset, include the required resource attribute from the table below.
Example: to attach a metric to a host, add `host.id` in the request body.
| Type | Resource Attribute Required | Data Will Be Stored To This Dataset |
|---|---|---|
| host | host.id | Host Metrics |
| k8s.node | k8s.node.uid | K8s Node Metrics |
| k8s.pod | k8s.pod.uid | K8s Pod Metrics |
| k8s.deployment | k8s.deployment.uid | K8s Deployment Metrics |
| k8s.daemonset | k8s.daemonset.uid | ~ |
| k8s.replicaset | k8s.replicaset.uid | ~ |
| k8s.statefulset | k8s.statefulset.uid | ~ |
| k8s.namespace | k8s.namespace.uid | ~ |
| service | service.name | ~ |
| os | os.type | ~ |
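As a sketch, the OTLP/HTTP JSON payload from the cURL method can target the Host Metrics dataset by swapping the resource attribute. The `HOST_ID` value below is hypothetical; use the `host.id` reported by your actual host:

```python
import time

# Hypothetical host identifier -- replace with the host.id your host reports.
HOST_ID = "i-0abc123def4567890"

payload = {
    "resource_metrics": [{
        "resource": {
            "attributes": [
                # host.id (instead of mw.resource_type=custom) routes the
                # series into the Host Metrics dataset
                {"key": "host.id", "value": {"string_value": HOST_ID}}
            ]
        },
        "scope_metrics": [{
            "metrics": [{
                "name": "swap-usage",
                "unit": "Bytes",
                "gauge": {
                    "data_points": [{
                        "time_unix_nano": time.time_ns(),
                        "asInt": 4000500678,
                    }]
                },
            }]
        }],
    }]
}
```

Everything else in the request (endpoint, headers, metric fields) stays the same as in the cURL example.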
Ingest custom data
If your data doesn't fit the existing types, send it to the Custom Metrics dataset by setting this resource attribute:

```
mw.resource_type: custom
```

Any series with this resource attribute will appear under Custom Metrics.
Explore Data & Build Graphs
- Open Dashboards → add a new widget.
- Select the dataset: either Custom Metrics or the specific dataset you targeted (e.g., Host Metrics).
- Choose your metric (e.g., `swap-usage`, `custom_counter`, `custom_histogram`).
- Use attributes (`device`, `environment`, `region`, etc.) to filter or group your series.
- Save the widget and compose your dashboard.
Set up Alerts
- Create an alert and select the dataset/metric you’re sending.
- Define the condition (threshold/anomaly), evaluation window, and recipients.
- Use attribute filters to scope alerts (e.g., only `environment=production`).
Troubleshooting & Best Practices
- Auth errors / no data: verify the Authorization header and the workspace URL.
- Wrong dataset: double-check the resource attribute (e.g., mw.resource_type=custom vs host.id).
- Timestamps off: make sure time_unix_nano is in nanoseconds and your sender’s clock is correct.
- Dimension drift: keep attribute keys consistent (avoid mixing env and environment).
- Secrets: don’t hard-code API keys; prefer environment variables or a secret manager.
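A quick check for the "timestamps off" pitfall: nanosecond epoch timestamps are currently 19 digits, while seconds are 10 and milliseconds 13. This digit-count heuristic (a sketch, not part of any Middleware API) normalizes an epoch timestamp to nanoseconds:

```python
def to_unix_nano(ts):
    """Best-effort normalization of an epoch timestamp to nanoseconds.
    Heuristic: seconds ~10 digits, milliseconds ~13, nanoseconds ~19."""
    digits = len(str(int(ts)))
    if digits <= 10:   # seconds
        return int(ts * 1_000_000_000)
    if digits <= 13:   # milliseconds
        return int(ts * 1_000_000)
    return int(ts)     # assume already nanoseconds

print(to_unix_nano(1758473263))  # 1758473263000000000
```

In practice, prefer emitting `time.time_ns()` directly at the source so no conversion is needed.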
OTLP/HTTP JSON field reference (at a glance)
| Concept | Where to put it | Example |
|---|---|---|
| Dataset tag | resource.attributes[] | mw.resource_type=custom, host.id=abc123 |
| Metric name | metrics[].name | swap-usage, custom_counter, latency |
| Description | metrics[].description | "SWAP usage" |
| Unit | metrics[].unit | Bytes, ms, 1 |
| Type | gauge / sum / histogram | match to your data shape |
| Value | asInt / asDouble | 4000500678 |
| Dimensions | data_points[].attributes[] | device=nvme0n1p4, environment=production |
| Timestamp | data_points[].time_unix_nano | 1758473263000000000 |
Need assistance or want to learn more about Middleware? Contact our support team at [email protected] or join our Slack channel.