DQL gives you direct access to the user events and built-in metrics collected by the New RUM Experience. Use it when you need to go beyond what Experience Vitals and Error Inspector show out of the box—for example, to slice Core Web Vitals by a custom dimension, track page load trends across deployments, or investigate slow requests across all your frontends.
Run the queries below in Notebooks for ad-hoc exploration and dashboards, or convert any of them into a custom metric via OpenPipeline.
The New RUM Experience captures Core Web Vitals in both pages and views. Google recommends analyzing them at the 75th percentile of field data to assess whether your pages meet Good, Needs Improvement, or Poor thresholds. The queries below use page summaries, which align with Google's page-based specification and the built-in metrics.
LCP measures how long it takes for the largest visible content element—such as a hero image, heading, or text block—to render. Rising values can indicate that users are waiting longer for content to appear.
timeseries LCP = percentile(dt.frontend.web.page.largest_contentful_paint, 75)
//, filter: frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
To identify which pages have the highest LCP, use the following query.
fetch user.events
| filter characteristics.has_page_summary
//| filter frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
| filter isNotNull(web_vitals.largest_contentful_paint)
| summarize LCP = percentile(web_vitals.largest_contentful_paint, 75), by: {page.name}
| sort LCP desc
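If you also want to map each page's p75 onto Google's rating buckets (Good ≤ 2.5 s, Needs Improvement ≤ 4 s, Poor above that), you can extend the per-page query with a conditional field. This is a sketch that assumes web_vitals.largest_contentful_paint is reported in milliseconds; adjust the thresholds if your data uses a different unit.

```
fetch user.events
| filter characteristics.has_page_summary
| filter isNotNull(web_vitals.largest_contentful_paint)
| summarize LCP = percentile(web_vitals.largest_contentful_paint, 75), by: {page.name}
// Thresholds assume milliseconds: 2500 ms = Good, 4000 ms = Needs Improvement
| fieldsAdd rating = if(LCP <= 2500, "Good", else: if(LCP <= 4000, "Needs Improvement", else: "Poor"))
| sort LCP desc
```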
INP measures the delay between a user interaction—such as a click, tap, or keypress—and the next visual update. High values can indicate that the UI feels unresponsive, even if the initial page load was fast.
timeseries INP = percentile(dt.frontend.web.page.interaction_to_next_paint, 75)
//, filter: frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
To identify which pages have the worst interaction responsiveness, use the following query.
fetch user.events
| filter characteristics.has_page_summary
//| filter frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
| filter isNotNull(web_vitals.interaction_to_next_paint)
| summarize INP = percentile(web_vitals.interaction_to_next_paint, 75), by: {page.name}
| sort INP desc
CLS quantifies how much the page layout shifts unexpectedly during its lifetime. High values mean elements are moving as the page loads or updates, which can lead to accidental clicks or a disorienting reading experience.
timeseries CLS = percentile(dt.frontend.web.page.cumulative_layout_shift, 75)
//, filter: frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
| fieldsAdd CLS = CLS[] * 0.0001
The CLS metric is stored as a long value scaled by 10,000. Multiplying by 0.0001 converts it back to the standard 0–1 score.
To identify which pages have the worst visual stability, use the following query.
fetch user.events
| filter characteristics.has_page_summary
//| filter frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
| filter isNotNull(web_vitals.cumulative_layout_shift)
| summarize CLS = percentile(web_vitals.cumulative_layout_shift, 75), by: {page.name}
| sort CLS desc
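If the web_vitals.cumulative_layout_shift event field uses the same 10,000x scaling as the built-in metric (an assumption worth verifying against your own data), you can convert the per-page result back to the standard 0–1 score in the same query:

```
fetch user.events
| filter characteristics.has_page_summary
| filter isNotNull(web_vitals.cumulative_layout_shift)
| summarize CLS = percentile(web_vitals.cumulative_layout_shift, 75), by: {page.name}
// Assumes the event field shares the metric's 10,000x scaling
| fieldsAdd CLS_score = CLS * 0.0001
| sort CLS_score desc
```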
The New RUM Experience captures page load timings from the W3C Navigation Timing API as built-in metrics and as fields on navigation events. Use the queries below to monitor overall page load duration and server responsiveness across your frontends.
Load event end measures the time from navigation start to the completion of the browser's load event. Tracking values over time helps you detect regressions caused by new deployments, third-party scripts, or infrastructure changes.
timeseries load_event_end = percentile(dt.frontend.web.navigation.load_event_end, 75)
//, filter: frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
To identify which pages have the highest load time, use the following query.
fetch user.events
| filter characteristics.has_w3c_navigation_timings
//| filter frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
| summarize load_event_end = percentile(performance.load_event_end, 75), by: {page.name}
| sort load_event_end desc
TTFB reflects how quickly your server responds. High TTFB often indicates server-side or CDN issues unrelated to your frontend code.
timeseries TTFB = percentile(dt.frontend.web.navigation.time_to_first_byte, 75)
//, filter: frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
To identify which pages have the highest TTFB, use the following query.
fetch user.events
| filter characteristics.has_w3c_navigation_timings
//| filter frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
| filter isNotNull(web_vitals.time_to_first_byte)
| summarize TTFB = percentile(web_vitals.time_to_first_byte, 75), by: {page.name}
| sort TTFB desc
Requests captured in user events include XHR and Fetch calls made during page load and user interactions. The following queries help you identify slow or failing third-party and first-party requests.
Slow XHR and Fetch requests can degrade perceived performance even after the page has loaded. Tracking request durations over time helps you detect backend or third-party slowdowns before they impact user experience.
timeseries request_duration = percentile(dt.frontend.request.duration, 75)
//, filter: frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
To identify which URLs are slowest, use the following query.
fetch user.events
| filter characteristics.has_request
//| filter frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
| filter isNotNull(url.full)
| summarize {
    request_count = count(),
    duration_p75 = percentile(duration, 75),
    error_count = countIf(http.response.status_code < 100 OR http.response.status_code >= 400)
  }, by: {url.full}
| sort duration_p75 desc
| limit 20
The query below creates a timeseries of request counts broken down by status code class, which is useful for tracking error rate trends over time.
timeseries {
    requests_2xx = sum(dt.frontend.request.count, filter: http.response.status_code_class == "2xx"),
    requests_4xx = sum(dt.frontend.request.count, filter: http.response.status_code_class == "4xx"),
    requests_5xx = sum(dt.frontend.request.count, filter: http.response.status_code_class == "5xx")
  }
//, filter: frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
To identify which URLs are failing, use the following query.
fetch user.events
| filter characteristics.has_request
//| filter frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
| filter http.response.status_code < 100 OR http.response.status_code >= 400 // status codes below 100 indicate failed or aborted requests
| summarize count = count(), by: {url.full, http.response.status_code}
| sort count desc
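To express failures as a rate rather than raw counts, you can divide a failing series by the total. This sketch uses the [] operator to apply the arithmetic element-wise to each timeseries bucket, and treats only 5xx responses as failures—broaden the filter if you also count 4xx responses as errors.

```
timeseries {
    total = sum(dt.frontend.request.count),
    failed = sum(dt.frontend.request.count, filter: http.response.status_code_class == "5xx")
  }
// Element-wise division across timeseries buckets; counts only 5xx as failures
| fieldsAdd error_rate_pct = failed[] / total[] * 100
```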
The New RUM Experience captures several error types on the web: JavaScript exceptions, failed requests (4xx and 5xx responses), and CSP violations. The queries below help you track error trends and identify the most impactful errors to fix first.
Tracking error volumes by type helps you spot sudden spikes from broken deployments, removed API endpoints, or newly introduced regressions.
timeseries errors = sum(dt.frontend.error.count), by: {error.type}
//, filter: frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
JavaScript exceptions can indicate bugs in your frontend's code or incompatibilities with specific browsers. Tracking them over time helps you correlate exception spikes with deployments or browser updates.
fetch user.events
| filter characteristics.has_exception
| filter dt.rum.agent.type == "javascript"
//| filter frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
| makeTimeseries count = count()
To identify which exceptions occur most often and on which pages, use the following query.
fetch user.events
| filter characteristics.has_exception
| filter dt.rum.agent.type == "javascript"
//| filter frontend.name == "FRONTEND-NAME" // Optional: filter to a specific frontend
| summarize count = count(), by: {exception.message, page.name}
| sort count desc
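To check whether an exception is specific to one browser rather than one page, group by a browser dimension instead. The browser.name field below is an assumption for illustration—substitute whichever browser attribute your events actually carry.

```
fetch user.events
| filter characteristics.has_exception
| filter dt.rum.agent.type == "javascript"
// browser.name is a hypothetical field name; use your events' browser attribute
| summarize count = count(), by: {exception.message, browser.name}
| sort count desc
```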