Browser Data Models in RUM

Real User Monitoring (RUM) emits a small set of event types that describe what the browser did and what the user experienced:

  • Session: A container that groups all events belonging to a user visit.
  • View: A page view or SPA route change; the anchor for page‑level metrics and Core Web Vitals.
  • Resource: Any networked resource fetched by the page (XHR/Fetch, images, CSS/JS, fonts, etc.).
  • Error: Uncaught JS errors and handled errors you report.
  • Long Task: Main‑thread blocks (>50 ms) that degrade responsiveness.
  • Action: User interactions such as clicks (and custom actions where applicable).

Each event carries common attributes (e.g., service.name, env, device/OS) and type‑specific attributes (e.g., view.url, resource.status_code). Use the View as your primary pivot: most investigations start at the view, then drill into resources, long tasks, and errors for that view.

How to read these tables

  • Naming: Dot‑notation (e.g., view.url) groups related attributes. Keys are case‑sensitive.
  • Units:
    • Durations are in milliseconds unless noted.
    • Sizes are in bytes.
    • Booleans are true|false.
  • Timestamps: Epoch milliseconds (browser local clock) unless noted.
  • Cardinality:
    • Low → good for grouping and faceting (e.g., env, service.name).
    • High → good for filtering only (e.g., session.id, view.id, URLs).
  • Consistency: A session can contain multiple views; each view can emit resources, actions, long tasks, and errors.

Tip: Start every query by narrowing to a specific env and service.name, then filter a time window, then add conditions on view.url or resource.status_code.

Sessions

A session represents a contiguous user visit. It starts when the SDK loads and ends after 15 minutes of inactivity (idle timeout) or after a maximum of ~4 hours, whichever comes first. A session stitches together all views, resources, errors, and actions.
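The timeout rules above can be sketched as a small pure function. This is a simplified model of the expiry logic, not the SDK's actual implementation:

```javascript
// Simplified model of session-expiry rules: a session ends after
// 15 minutes of inactivity or ~4 hours total, whichever comes first.
// All times are epoch milliseconds.
const IDLE_TIMEOUT_MS = 15 * 60 * 1000;
const MAX_SESSION_MS = 4 * 60 * 60 * 1000;

function isSessionExpired(session, nowMs) {
  const idleFor = nowMs - session.lastActivityMs;
  const age = nowMs - session.startedMs;
  return idleFor > IDLE_TIMEOUT_MS || age > MAX_SESSION_MS;
}
```

When either condition trips, the SDK starts a new session with a fresh session.id.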

Use sessions to:

  • Correlate multiple views performed by the same user in one visit.
  • Analyse the full journey behind a crash or a slow page.
  • Review the replay (if recording is enabled) alongside metrics.

Event Types

Event Type | Description
Session | A user session begins when a user starts browsing the web application. It contains high-level information about the user (browser, device, geo-location) and aggregates all RUM events collected during the user journey under a unique session.id attribute. Note: the session resets after 15 minutes of inactivity.
View | A view event is generated each time a user visits a page of the web application. The page is captured in the root.url attribute.
Resource | A resource event is generated for images, XHR, Fetch, CSS, or JS libraries loaded on a webpage. It includes detailed loading timing information.
Long Task | A long task event is generated for any task in the browser that blocks the main thread for more than 50 ms.
Error | RUM collects every frontend error emitted by the browser.
Click | RUM click events track user interactions (clicks) during a user journey.

Example (filters): env = "prod" AND session.id = "..." → Inspect all views/resources/errors for that visit.

Views (Page loads & SPA route changes)

A view is emitted for traditional navigations and soft navigations (SPA route changes via History API/hash). Views are the anchor for page‑level performance and UX metrics:

  • Loading time: Time until the page becomes idle after navigation.
  • Core Web Vitals: LCP, CLS, INP attributed to the active view.
  • Time spent: Active time on the view (until route change or tab close).

View Metric

Metric | Type | Description
view_count | number | Count of views loaded

View Attributes

Attribute | Type | Description
browser.trace | string | Defaults to true for RUM
rum_origin/origin | string | String containing the Unicode serialisation of the origin of the represented URL. Example: https://middleware.io/
root.url | string | The path part of the URL
env | string | Defaults to prod; can be configured during setup

Example (slow views): view.loading_time > 2500 AND env = "prod" → Show slow views; facet by view.url and device.model.name.

Core Web Vitals (attached to Views)

Core Web Vitals reflect real UX on key dimensions:

  • LCP (Largest Contentful Paint): Loading smoothness of primary content.
  • CLS (Cumulative Layout Shift): Visual stability; layout shifts without input.
  • INP (Interaction to Next Paint): Overall interaction latency, replacing FID.

Interpretation:

  • Good (typical targets): LCP ≤ 2.5 s, CLS ≤ 0.1, INP ≤ 200 ms.
  • Track distributions by view.url and app.version to catch regressions early.

This page does not define additional CWV table fields. Use the View section for CWV‑related metrics surfaced alongside views (LCP/CLS/INP are attached to views in dashboards).

Example: INP > 200 AND view.url CONTAINS "/checkout" → Investigate interactions on Checkout; open related Session Replay for context.

Resources (Network requests & assets)

Resources capture the work your page asks the network to do: XHR/Fetch calls, images, fonts, CSS, JS, and more. For each resource we record method, URL, status, timing breakdowns, and sizes (when available).

Use resources to:

  • Spot slow or failing API calls (group by resource.name or endpoint).
  • Find oversized assets (large images, CSS/JS bundles) hurting LCP.
  • See which third‑party hosts dominate time on page.

Resource Timing Metrics

Metric | Type | Description
resource_duration_median | number (ms) | Median time spent loading the resource
resource_duration_p90 | number (ms) | P90 time spent loading the resource
resource_first_byte_duration_median | number (ms) | Median time spent waiting for the first byte of the response
resource_first_byte_duration_p90 | number (ms) | P90 time spent waiting for the first byte of the response
resource_dns_duration_median | number (ms) | Median time spent resolving the DNS request
resource_dns_duration_p90 | number (ms) | P90 time spent resolving the DNS request
resource_download_duration_p90 | number (ms) | P90 time spent downloading the resource
resource_connect_duration_p90 | number (ms) | P90 time spent establishing a connection to the server

Resource Attributes

Attribute | Type | Description
resource.duration | number | Entire time spent loading the resource
resource.size | number (bytes) | Resource size
resource.nextHopProtocol | string | The network protocol used to fetch the resource. Example: http/1.1 or h2
resource.connect.duration | number (ms) | Time spent establishing a connection to the server (connectEnd - connectStart)
resource.ssl.duration | number (ms) | Time spent on the TLS handshake (connectEnd - secureConnectionStart). Does not appear if the request was not over HTTPS
resource.dns.duration | number (ms) | Time spent resolving the DNS name of the last request (domainLookupEnd - domainLookupStart)
resource.redirect.duration | number (ms) | Time spent on subsequent HTTP requests (redirectEnd - redirectStart)
resource.first_byte.duration | number (ms) | Time spent waiting for the first byte of the response (responseStart - requestStart)
resource.download.duration | number (ms) | Time spent downloading the resource (responseEnd - responseStart)
resource.type | string | The type of resource being collected (e.g., css, javascript, media, XHR, or image)
resource.status_code | number | The response status code (available for Fetch/XHR resources only)
resource.url | string | The resource URL
resource.url_host | string | The host part of the URL
resource.url_path | string | The path part of the URL
resource.url_scheme | string | The protocol name of the URL (HTTP or HTTPS)
resource.provider.type | string | The resource provider type (for example, first-party, cdn, ad, or analytics)
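The duration attributes above are differences of standard Resource Timing marks. A sketch of how they derive from a PerformanceResourceTiming-like entry (illustrative, not the SDK's internal code):

```javascript
// Derive the resource.* timing attributes from a PerformanceResourceTiming-
// like entry. Each duration is the difference of two timing marks, per the
// formulas in the attribute table.
function resourceTimings(entry) {
  return {
    "resource.dns.duration": entry.domainLookupEnd - entry.domainLookupStart,
    "resource.connect.duration": entry.connectEnd - entry.connectStart,
    // TLS handshake; secureConnectionStart is 0 for non-HTTPS requests.
    "resource.ssl.duration":
      entry.secureConnectionStart > 0
        ? entry.connectEnd - entry.secureConnectionStart
        : undefined,
    "resource.redirect.duration": entry.redirectEnd - entry.redirectStart,
    "resource.first_byte.duration": entry.responseStart - entry.requestStart,
    "resource.download.duration": entry.responseEnd - entry.responseStart,
    "resource.duration": entry.duration,
  };
}
```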

Example (failures): resource.type = "xhr" AND resource.status_code >= 500 → List failing backend calls; facet by trace.id (if propagation is enabled) and service.name.

Errors (Runtime & reported)

Errors include uncaught JS exceptions plus errors you report manually (e.g., handled promise rejections). Each error links back to the view and session and carries message, type, stack trace, and (where possible) source file + line.
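A sketch of how a caught JavaScript Error maps onto the error attributes in the table below (the attribute names come from the table; the mapping function itself is illustrative):

```javascript
// Map a JavaScript Error to error.* attributes. `source` distinguishes
// how the error was captured, e.g. "uncaughtException" or "consoleError".
function toErrorAttributes(err, source) {
  return {
    "event.type": "error",
    type: source,
    "error.name": err.name,
    "error.message": err.message,
    "error.stack": err.stack,
  };
}
```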

Use errors to:

  • Triage noisy client exceptions by release and environment.
  • Validate sourcemaps (minified stack traces indicate a mapping issue).
  • Link to Session Replay to see exactly what preceded the error.

Error Metrics

Metric | Type | Description
error_count | number | Sum of error counts
error_rate | number | Percentage of errors

Error Attributes

Attribute | Type | Description
event.type | string | Event type; set to error
type | string | The error type; can be consoleError or uncaughtException
error.name | string | The error name (or error code in some cases)
error.message | string | A concise, human-readable, one-line message explaining the event
error.stack | string | The stack trace or complementary information about the error

Example (release regression): error.type = "TypeError" AND app.version = "1.4.2" → Compare error volume vs previous version.

Long Tasks (Main‑thread blocks)

A long task is any main‑thread activity > 50 ms. Long tasks delay input processing and harm interactivity (and often inflate INP). They commonly originate from heavy JS execution, layout thrashing, or synchronous XHR.
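Given a list of task durations, the long-task metrics below (count, average, P75) can be computed as follows. The nearest-rank percentile here is a simplification; the backend's aggregation may interpolate differently:

```javascript
// Compute long-task metrics from task durations in milliseconds.
// Only tasks over 50 ms count as long tasks.
function longTaskStats(durationsMs) {
  const long = durationsMs.filter((d) => d > 50).sort((a, b) => a - b);
  if (long.length === 0) return { count: 0, avg: 0, p75: 0 };
  const avg = long.reduce((s, d) => s + d, 0) / long.length;
  // Nearest-rank P75 over the sorted long-task durations.
  const p75 = long[Math.ceil(0.75 * long.length) - 1];
  return { count: long.length, avg, p75 };
}
```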

Use long tasks to:

  • Identify scripts causing jank (facet by long_task.name or resource.name).
  • Correlate spikes with views, replays, and releases.

Long Task Timing Metrics

Metric | Type | Description
longtask_count | number | Count of long tasks that occurred
avg_longtask_duration | number | Average long task duration
long_task_duration_p75 | number | P75 long task duration

Long Task Attributes

Attribute | Type | Description
longtask.duration | number | Duration of the long task
event.type | string | Event type; set to longtask

Example: longtask.duration > 120 → Investigate outliers; check which view.url and script path they align with.

Actions (User interactions)

Actions represent key interactions (e.g., click/tap) and optionally custom actions your code emits. They provide the glue between user intent and technical signals (resources, long tasks, errors) that follow.

Use actions to:

  • Attribute slowness to specific clicks (e.g., “Place order” triggers heavy work).
  • Inspect interaction targets and selectors to find fragile UI.

Action Metrics

Metric | Type | Description
action_count | number | Sum of actions (e.g., click, load, error)

Action Attributes

Attribute | Type | Description
event.type | string | Event type; set to click
target_element | string | The tag name of the element that was interacted with (e.g., button, img, div)
target_xpath | string | The XPath of the target element
component | string | Name of the component. Example: user-interaction

Example: action.type = "click" AND action.target.name = "Add to cart" → Trace the resulting resources and errors.

Common attributes (applies to most events)

These are attached broadly so you can slice and compare across dimensions:

  • service.name, project.name, env: Logical ownership & environment.
  • app.version: Release tag used for rollouts and regression analysis.
  • session.id, view.id: For precise scoping.
  • Device & OS: device.model.*, os.*, browser.* for platform variance.

Core

Attribute | Type | Description
session.id | string | Unique session ID. Valid for a maximum of 4 hours; a new session ID is generated after 15 minutes of inactivity
project.name | string | Browser application name
service.name | string | A service denotes a set of pages built by a team that offers specific functionality in your browser application

Operating System

Attribute | Type | Description
os | string | The OS name as reported by the device (User-Agent HTTP header)
navigator.userAgent | string | The user agent string for the current browser

Geo-Location

Attribute | Type | Description
cf-ipcountry | string | Name of the country
cf-ipcontinent | string | Continent code (e.g., AS, NA, SA)
cf-ipcity | string | The name of the city (for example, Paris or New York)
cf-iplatitude | string | Latitude
cf-iplongitude | string | Longitude
cf-region | string | Region name
cf-region-code | string | Region code
cf-timezone | string | Timezone. Example: Asia/Kolkata
cf-postal-code | string | Postal code for the region

Tip: Keep service.name stable over time; use app.version for release grouping rather than encoding versions into service names.

Trace correlation (RUM ↔ APM)

If you enabled trace propagation in the browser, RUM adds trace headers to matching hosts. When your backend accepts these headers, Middleware correlates resource events to backend traces, enabling a full path from click → request → service span → database.
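The "matching hosts" part of propagation is essentially an allowlist check on the request URL. A sketch of that matching logic, assuming an allowlist of host strings or RegExps (this is illustrative, not the SDK's actual configuration API):

```javascript
// Decide whether a request URL should receive trace headers, given a
// propagation allowlist whose entries are exact host strings or RegExps.
function shouldPropagateTrace(url, allowedHosts) {
  const host = new URL(url).host;
  return allowedHosts.some((h) =>
    h instanceof RegExp ? h.test(host) : h === host
  );
}
```

Keeping the allowlist tight avoids leaking trace headers to third-party hosts that may reject requests with unexpected headers (CORS failures).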

Use correlation to:

  • Jump from a failing XHR in RUM to the exact backend trace.
  • Understand which services affect a slow view.

Trace identifiers are shown alongside resources in the UI when trace propagation is enabled; no additional fields are defined on this page.

Example: Filter a failing resource and select Open in Trace from the event panel.

Privacy & PII (reminder)

  • Client‑side masking: Inputs and selected DOM nodes can be masked/excluded before data leaves the browser (see Session Recording Privacy).
  • Headers & URLs: Use ignoreHeaders and ignoreUrls to avoid capturing sensitive values.
  • Data minimisation: Prefer IDs over raw user data; use hashing if a join key is required.

These controls do not change the data model—only what values are allowed to populate each field.

Practical queries (copy ideas)

  • Find slowest pages: view.loading_time > 2500 → group by view.url.
  • Track CLS regressions: CLS > 0.1 → group by app.version and view.url.
  • Investigate 5xx backends: resource.type = "xhr" AND resource.status_code >= 500 → open traces.
  • Jank hotspots: longtask.duration > 50 → facet by resource.name (script).
  • Error spikes after deploy: app.version = "X.Y.Z" AND event.type = "error" → compare to previous.

Glossary

  • Idle time / Loading time: Time from navigation/route change until no network or long tasks occur for a short window.
  • Soft navigation: Route changes without full page reload (SPA via History API/hash).
  • Largest Contentful Paint (LCP): Render time of the largest above‑the‑fold content block.
  • Cumulative Layout Shift (CLS): Sum of unexpected layout shifts.
  • Interaction to Next Paint (INP): End‑to‑end latency for user interactions.
  • Long Task: Any main‑thread task > 50 ms.

Frustration Data

Frustration signals help you identify your application’s highest points of user friction by surfacing moments when users exhibit frustration.

Frustration Metric

Metric | Type | Description
frustration_count | number | Count of frustration signals (e.g., rage clicks + dead clicks)

Frustration Attributes

Attribute | Type | Description
frustration.type | string | The type of frustration; can be rage_click or dead_click
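A common heuristic behind rage-click detection is several clicks on the same target within a short window. A sketch with illustrative thresholds (the product's exact definition may differ):

```javascript
// Count rage-click bursts: `minClicks` clicks on the same target
// within `windowMs`. `clicks` is [{ tsMs, target }] sorted by tsMs.
function countRageBursts(clicks, { windowMs = 1000, minClicks = 3 } = {}) {
  const byTarget = new Map();
  for (const c of clicks) {
    if (!byTarget.has(c.target)) byTarget.set(c.target, []);
    byTarget.get(c.target).push(c.tsMs);
  }
  let bursts = 0;
  for (const ts of byTarget.values()) {
    // Sliding window over this target's click timestamps.
    let start = 0;
    for (let end = 0; end < ts.length; end++) {
      while (ts[end] - ts[start] > windowMs) start++;
      if (end - start + 1 === minClicks) bursts++;
    }
  }
  return bursts;
}
```

Dead clicks are the complementary signal: clicks that produce no observable page response.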

Need assistance or want to learn more about Middleware? Contact our support team at [email protected] or join our Slack channel.