Reported error count (by OS, app version) [custom]
The number of all reported errors.
Session count (by OS, app version) [custom]
The number of captured user sessions.
Session count (by OS, app version, crash replay feature status) [mobile]
The number of captured user sessions.
Session count (by OS, app version, full replay feature status) [mobile]
The number of captured user sessions.
Reported error count (by OS, app version) [mobile]
The number of all reported errors.
User action rate - affected by JavaScript errors (by key user action, user type) [web]
The percentage of key user actions with detected JavaScript errors.
Apdex (by key user action) [web]
The average Apdex rating for key user actions.
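Apdex condenses action durations into one satisfaction score. As a hedged sketch of the standard Apdex formula, (satisfied + tolerating/2) / total, with hypothetical thresholds (not product defaults):

```python
def apdex(durations_ms, satisfied_ms=500, tolerating_ms=3000):
    """Standard Apdex: (satisfied + tolerating / 2) / total samples.
    The two thresholds are hypothetical examples, not product defaults."""
    if not durations_ms:
        return None
    satisfied = sum(1 for d in durations_ms if d <= satisfied_ms)
    tolerating = sum(1 for d in durations_ms if satisfied_ms < d <= tolerating_ms)
    return (satisfied + tolerating / 2) / len(durations_ms)

# 6 satisfied, 2 tolerating, 2 frustrated actions
print(apdex([100] * 6 + [1000] * 2 + [5000] * 2))  # 0.7
```

Only the two cutoffs vary per configuration; the aggregation itself stays the same.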
Action count - custom action (by key user action, browser) [web]
The number of custom actions that are marked as key user actions.
Action count - load action (by key user action, browser) [web]
The number of load actions that are marked as key user actions.
Action count - XHR action (by key user action, browser) [web]
The number of XHR actions that are marked as key user actions.
Cumulative Layout Shift - load action (by key user action, user type) [web]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions that are marked as key user actions.
Cumulative Layout Shift - load action (by key user action, geolocation, user type) [web]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions that are marked as key user actions.
Cumulative Layout Shift - load action (by key user action, browser) [web]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions that are marked as key user actions.
DOM interactive - load action (by key user action, browser) [web]
The time taken until a page's status is set to "interactive" and it's ready to receive user input. Calculated for load actions that are marked as key user actions.
Action duration - custom action (by key user action, browser) [web]
The duration of custom actions that are marked as key user actions.
Action duration - load action (by key user action, browser) [web]
The duration of load actions that are marked as key user actions.
Action duration - XHR action (by key user action, browser) [web]
The duration of XHR actions that are marked as key user actions.
Time to first byte - load action (by key user action, browser) [web]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for load actions that are marked as key user actions.
Time to first byte - XHR action (by key user action, browser) [web]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for XHR actions that are marked as key user actions.
First Input Delay - load action (by key user action, user type) [web]
The time from the first interaction with a page to when the user agent can respond to that interaction. Calculated for load actions that are marked as key user actions.
First Input Delay - load action (by key user action, geolocation, user type) [web]
The time from the first interaction with a page to when the user agent can respond to that interaction. Calculated for load actions that are marked as key user actions.
First Input Delay - load action (by key user action, browser) [web]
The time from the first interaction with a page to when the user agent can respond to that interaction. Calculated for load actions that are marked as key user actions.
Largest Contentful Paint - load action (by key user action, user type) [web]
The time taken to render the largest element in the viewport. Calculated for load actions that are marked as key user actions.
Largest Contentful Paint - load action (by key user action, geolocation, user type) [web]
The time taken to render the largest element in the viewport. Calculated for load actions that are marked as key user actions.
Largest Contentful Paint - load action (by key user action, browser) [web]
The time taken to render the largest element in the viewport. Calculated for load actions that are marked as key user actions.
Load event end - load action (by key user action, browser) [web]
The time taken to complete the load event of a page. Calculated for load actions that are marked as key user actions.
Load event start - load action (by key user action, browser) [web]
The time taken to begin the load event of a page. Calculated for load actions that are marked as key user actions.
Network contribution - load action (by key user action, user type) [web]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for load actions that are marked as key user actions.
Network contribution - XHR action (by key user action, user type) [web]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for XHR actions that are marked as key user actions.
Response end - load action (by key user action, browser) [web]
The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Also known as "HTML downloaded". Calculated for load actions that are marked as key user actions.
Response end - XHR action (by key user action, browser) [web]
The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for XHR actions that are marked as key user actions.
Server contribution - load action (by key user action, user type) [web]
The time spent on server-side processing for a page. Calculated for load actions that are marked as key user actions.
Server contribution - XHR action (by key user action, user type) [web]
The time spent on server-side processing for a page. Calculated for XHR actions that are marked as key user actions.
Speed index - load action (by key user action, browser) [web]
The score measuring how quickly the visible parts of a page are rendered. Calculated for load actions that are marked as key user actions.
Visually complete - load action (by key user action, browser) [web]
The time taken to fully render content in the viewport. Calculated for load actions that are marked as key user actions.
Visually complete - XHR action (by key user action, browser) [web]
The time taken to fully render content in the viewport. Calculated for XHR actions that are marked as key user actions.
Error count (by key user action, user type, error type, error origin) [web]
The number of detected errors that occurred during key user actions.
User action count with errors (by key user action, user type) [web]
The number of key user actions with detected errors.
JavaScript error count during user actions (by key user action, user type) [web]
The number of detected JavaScript errors that occurred during key user actions.
JavaScript error count without user actions (by key user action, user type) [web]
The number of detected standalone JavaScript errors (occurred between key user actions).
User action rate - affected by errors (by key user action, user type) [web]
The percentage of key user actions with detected errors.
Action count - custom action (by browser) [web]
The number of custom actions.
Action count - load action (by browser) [web]
The number of load actions.
Action count - XHR action (by browser) [web]
The number of XHR actions.
Action count (by Apdex category) [web]
The number of user actions.
Action with key performance metric count (by action type, geolocation, user type) [web]
The number of user actions that have a key performance metric and mapped geolocation.
Action duration - custom action (by browser) [web]
The duration of custom actions.
Action duration - load action (by browser) [web]
The duration of load actions.
Action duration - XHR action (by browser) [web]
The duration of XHR actions.
Actions per session average (by users, user type) [web]
The average number of user actions per user session.
Session count - estimated active sessions (by users, user type) [web]
The estimated number of active user sessions. An active session is one in which a user has been confirmed to still be active at a given time. For this high-cardinality metric, the HyperLogLog algorithm is used to approximate the session count.
User count - estimated active users (by users, user type) [web]
The estimated number of unique active users. For this high-cardinality metric, the HyperLogLog algorithm is used to approximate the user count.
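HyperLogLog approximates a distinct count from hashed IDs in fixed memory instead of storing every user ID. A minimal illustrative sketch of the algorithm (not the product's implementation):

```python
import hashlib
import math

class HyperLogLog:
    """Minimal HyperLogLog: m = 2^p registers, each storing the longest
    leading-zero run seen among items hashed into that register."""

    def __init__(self, p=10):
        self.p = p
        self.m = 1 << p
        self.registers = [0] * self.m
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def add(self, item):
        # Stable 64-bit hash of the item
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
        idx = h >> (64 - self.p)                      # first p bits pick the register
        rest = h & ((1 << (64 - self.p)) - 1)         # remaining bits
        rank = (64 - self.p) - rest.bit_length() + 1  # leading zeros + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def estimate(self):
        raw = self.alpha * self.m ** 2 / sum(2.0 ** -r for r in self.registers)
        if raw <= 2.5 * self.m:                       # small-range correction
            zeros = self.registers.count(0)
            if zeros:
                return self.m * math.log(self.m / zeros)
        return raw

hll = HyperLogLog()
for i in range(10_000):
    hll.add(f"user-{i}")
print(round(hll.estimate()))  # close to 10,000 (typical error ~3% at 1,024 registers)
```

The sketch needs only 1,024 small registers regardless of how many users are seen, which is why it suits high-cardinality metrics.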
User action rate - affected by JavaScript errors (by user type) [web]
The percentage of user actions with detected JavaScript errors.
Apdex (by user type) [web]
The average Apdex rating for user actions.
Apdex (by geolocation, user type) [web]
The average Apdex rating for user actions that have a mapped geolocation.
Bounce rate (by users, user type) [web]
The percentage of sessions in which users viewed only a single page and triggered only a single web request. Calculated by dividing single-page sessions by all sessions.
Conversion rate - sessions (by users, user type) [web]
The percentage of sessions in which at least one conversion goal was reached. Calculated by dividing converted sessions by all sessions.
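Both bounce rate and conversion rate are simple ratios over the session set. A sketch using a hypothetical session record shape:

```python
def session_rates(sessions):
    """Bounce: exactly one page viewed and one web request in the session.
    Conversion: at least one goal reached. The record shape here
    (page_views / requests / converted) is hypothetical, for illustration."""
    total = len(sessions)
    if total == 0:
        return {"bounce_rate": 0.0, "conversion_rate": 0.0}
    bounced = sum(1 for s in sessions if s["page_views"] == 1 and s["requests"] == 1)
    converted = sum(1 for s in sessions if s["converted"])
    return {
        "bounce_rate": 100 * bounced / total,
        "conversion_rate": 100 * converted / total,
    }

sessions = [
    {"page_views": 1, "requests": 1, "converted": False},  # single-page: bounce
    {"page_views": 5, "requests": 12, "converted": True},  # reached a goal
    {"page_views": 3, "requests": 7, "converted": False},
    {"page_views": 1, "requests": 1, "converted": False},  # single-page: bounce
]
print(session_rates(sessions))  # {'bounce_rate': 50.0, 'conversion_rate': 25.0}
```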
Session count - converted sessions (by users, user type) [web]
The number of sessions in which at least one conversion goal was reached.
Cumulative Layout Shift - load action (by user type) [web]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions.
Cumulative Layout Shift - load action (by geolocation, user type) [web]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions.
Cumulative Layout Shift - load action (by browser) [web]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions.
DOM interactive - load action (by browser) [web]
The time taken until a page's status is set to "interactive" and it's ready to receive user input. Calculated for load actions.
Session count - estimated ended sessions (by users, user type) [web]
The number of completed user sessions.
Rage click count [web]
The number of detected rage clicks.
Time to first byte - load action (by browser) [web]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for load actions.
Time to first byte - XHR action (by browser) [web]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for XHR actions.
First Input Delay - load action (by user type) [web]
The time from the first interaction with a page to when the user agent can respond to that interaction. Calculated for load actions.
First Input Delay - load action (by geolocation, user type) [web]
The time from the first interaction with a page to when the user agent can respond to that interaction. Calculated for load actions.
First Input Delay - load action (by browser) [web]
The time from the first interaction with a page to when the user agent can respond to that interaction. Calculated for load actions.
Largest Contentful Paint - load action (by user type) [web]
The time taken to render the largest element in the viewport. Calculated for load actions.
Largest Contentful Paint - load action (by geolocation, user type) [web]
The time taken to render the largest element in the viewport. Calculated for load actions.
Largest Contentful Paint - load action (by browser) [web]
The time taken to render the largest element in the viewport. Calculated for load actions.
Load event end - load action (by browser) [web]
The time taken to complete the load event of a page. Calculated for load actions.
Load event start - load action (by browser) [web]
The time taken to begin the load event of a page. Calculated for load actions.
Network contribution - load action (by user type) [web]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for load actions.
Network contribution - XHR action (by user type) [web]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for XHR actions.
Response end - load action (by browser) [web]
The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Also known as "HTML downloaded". Calculated for load actions.
Response end - XHR action (by browser) [web]
The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for XHR actions.
Server contribution - load action (by user type) [web]
The time spent on server-side processing for a page. Calculated for load actions.
Server contribution - XHR action (by user type) [web]
The time spent on server-side processing for a page. Calculated for XHR actions.
Session duration (by users, user type) [web]
The average duration of user sessions.
Speed index - load action (by browser) [web]
The score measuring how quickly the visible parts of a page are rendered. Calculated for load actions.
Session count - estimated started sessions (by users, user type) [web]
The number of started user sessions.
Visually complete - load action (by browser) [web]
The time taken to fully render content in the viewport. Calculated for load actions.
Visually complete - XHR action (by browser) [web]
The time taken to fully render content in the viewport. Calculated for XHR actions.
Error count (by user type, error type, error origin) [web]
The number of detected errors.
Error count during user actions (by user type, error type, error origin) [web]
The number of detected errors that occurred during user actions.
Standalone error count (by user type, error type, error origin) [web]
The number of detected standalone errors (occurred between user actions).
User action count - with errors (by user type) [web]
The number of user actions with detected errors.
Error count for Davis (by user type, error type, error origin, error context) [web]
The number of errors that were included in Davis AI problem detection and analysis.
Interaction to next paint [web]
The time from a user interaction with a page until the next frame is painted in response to that interaction.
JavaScript error count - during user actions (by user type) [web]
The number of detected JavaScript errors that occurred during user actions.
JavaScript error count - without user actions (by user type) [web]
The number of detected standalone JavaScript errors (occurred between user actions).
User action rate - affected by errors (by user type) [web]
The percentage of user actions with detected errors.
Apdex (by OS, geolocation) [mobile, custom]
The Apdex rating for all captured user actions.
Apdex (by OS, app version) [mobile, custom]
The Apdex rating for all captured user actions.
User count - estimated users affected by crashes (by OS) [mobile, custom]
The estimated number of unique users affected by a crash. For this high-cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
User count - estimated users affected by crashes (by OS, app version) [mobile, custom]
The estimated number of unique users affected by a crash. For this high-cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
User rate - estimated users affected by crashes (by OS) [mobile, custom]
The estimated percentage of unique users affected by a crash. For this high-cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
Crash count (by OS, geolocation) [mobile, custom]
The number of detected crashes.
Crash count (by OS, app version) [mobile, custom]
The number of detected crashes.
User rate - estimated crash free users (by OS) [mobile, custom]
The estimated percentage of unique users not affected by a crash. For this high-cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
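The crash-affected and crash-free user rates are complements of each other. A sketch (in practice the inputs would themselves be HyperLogLog estimates):

```python
def crash_user_rates(affected_users, total_users):
    """Percentage of users affected by a crash and its complement,
    the crash-free rate. Inputs are (estimated) unique-user counts."""
    if total_users == 0:
        return 0.0, 100.0
    affected_pct = 100 * affected_users / total_users
    return affected_pct, 100 - affected_pct

affected, crash_free = crash_user_rates(affected_users=42, total_users=1_000)
print(f"{affected:.1f}% affected, {crash_free:.1f}% crash-free")
```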
Apdex (by key user action, OS) [mobile, custom]
The Apdex rating for all captured key user actions.
Action count (by key user action, OS, Apdex category) [mobile, custom]
The number of captured key user actions.
Action duration (by key user action, OS) [mobile, custom]
The duration of key user actions.
Reported error count (by key user action, OS) [mobile, custom]
The number of reported errors for key user actions.
Request count (by key user action, OS) [mobile, custom]
The number of captured web requests associated with key user actions.
Request duration (by key user action, OS) [mobile, custom]
The duration of web requests for key user actions. Be aware that this metric is measured in microseconds while other request duration metrics for mobile and custom apps are measured in milliseconds.
Request error count (by key user action, OS) [mobile, custom]
The number of detected web request errors for key user actions.
Request error rate (by key user action, OS) [mobile, custom]
The percentage of web requests with detected errors for key user actions.
New user count (by OS) [mobile, custom]
The number of users that launched the application(s) for the first time. The metric is tied to specific devices, so users are counted multiple times if they install the application on multiple devices. The metric doesn't distinguish between multiple users that share the same device and application installation.
Request count (by OS, provider) [mobile, custom]
The number of captured web requests.
Request count (by OS, app version) [mobile, custom]
The number of captured web requests.
Request error count (by OS, provider) [mobile, custom]
The number of detected web request errors.
Request error count (by OS, app version) [mobile, custom]
The number of detected web request errors.
Request error rate (by OS, provider) [mobile, custom]
The percentage of web requests with detected errors.
Request error rate (by OS, app version) [mobile, custom]
The percentage of web requests with detected errors.
Request duration (by OS, provider) [mobile, custom]
The duration of web requests.
Request duration (by OS, app version) [mobile, custom]
The duration of web requests.
Session count (by agent version, OS) [mobile, custom]
The number of captured user sessions.
Session count (by OS, crash reporting level) [mobile, custom]
The number of captured user sessions.
Session count (by OS, data collection level) [mobile, custom]
The number of captured user sessions.
Session count - estimated (by OS, geolocation) [mobile, custom]
The estimated number of captured user sessions. For this high-cardinality metric, the HyperLogLog algorithm is used to approximate the number of sessions.
Session count (by OS, app version) [mobile, custom]
The number of captured user sessions.
Action count (by geolocation, Apdex category) [mobile, custom]
The number of captured user actions.
Action count (by OS, Apdex category) [mobile, custom]
The number of captured user actions.
Action count (by OS, app version) [mobile, custom]
The number of captured user actions.
Action duration (by OS, app version) [mobile, custom]
The duration of user actions.
User count - estimated (by OS, geolocation) [mobile, custom]
The estimated number of unique users that have a mapped geolocation. The metric is based on 'internalUserId'. When 'dataCollectionLevel' is set to 'performance' or 'off', 'internalUserId' is changed at each app start. For this high-cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
User count - estimated (by OS, app version) [mobile, custom]
The estimated number of unique users. The metric is based on 'internalUserId'. When 'dataCollectionLevel' is set to 'performance' or 'off', 'internalUserId' is changed at each app start. For this high-cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
Session count - billed and unbilled [custom]
The number of billed and unbilled user sessions. To get only the number of billed sessions, set the "Type" filter to "Billed".
Total user action and session properties [custom]
The number of billed user action and user session properties.
Session count - billed and unbilled - with Session Replay [mobile]
The number of billed and unbilled user sessions that include Session Replay data. To get only the number of billed sessions, set the "Type" filter to "Billed".
Session count - billed and unbilled [mobile]
The total number of billed and unbilled user sessions (with and without Session Replay data). To get only the number of billed sessions, set the "Type" filter to "Billed".
Total user action and session properties [mobile]
The number of billed user action and user session properties.
Session count - billed and unbilled - with Session Replay [web]
The number of billed and unbilled user sessions that include Session Replay data. To get only the number of billed sessions, set the "Type" filter to "Billed".
Session count - billed and unbilled - without Session Replay [web]
The number of billed and unbilled user sessions that do not include Session Replay data. To get only the number of billed sessions, set the "Type" filter to "Billed".
Total user action and session properties [web]
The number of billed user action and user session properties.
(DPS) Total Custom Events Classic billing usage
The number of custom events ingested aggregated over all monitored entities. Custom events include events sent to Dynatrace via the Events API or events created by a log event extraction rule. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Custom Events Classic billing usage by monitored entity
The number of custom events ingested split by monitored entity. Custom events include events sent to Dynatrace via the Events API or events created by a log event extraction rule. For details on the events billed, refer to the usage_by_event_info metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Custom Events Classic billing usage by event info
The number of custom events ingested split by event info. Custom events include events sent to Dynatrace via the Events API or events created by a log event extraction rule. The info contains the context of the event plus the configuration ID. For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Recorded metric data points per metric key
The number of reported metric data points split by metric key. This metric does not account for included metric data points available to your environment.
(DPS) Total billed metric data points
The total number of metric data points after deducting the included metric data points. This is the rate-card value used for billing. Use this total metric to query longer timeframes without losing precision or performance.
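Assuming the deduction simply subtracts the included allotment and floors at zero, the rate-card value can be sketched as:

```python
def billed_data_points(ingested_total, included_total):
    """Rate-card value: ingested metric data points minus the included
    allotment, floored at zero (assumed; you are never billed negatively)."""
    return max(ingested_total - included_total, 0)

print(billed_data_points(5_000_000, 3_000_000))  # 2000000
print(billed_data_points(1_000_000, 3_000_000))  # 0
```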
(DPS) Total metric data points billable for Foundation & Discovery hosts
The number of metric data points billable for Foundation & Discovery hosts.
(DPS) Total metric data points billed for Full-Stack hosts
The number of metric data points billed for Full-Stack hosts. To view the unadjusted usage per host, use builtin:billing.full_stack_monitoring.metric_data_points.ingested_by_host. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay.
(DPS) Total metric data points billed for Infrastructure-monitored hosts
The number of metric data points billed for Infrastructure-monitored hosts. To view the unadjusted usage per host, use builtin:billing.infrastructure_monitoring.metric_data_points.ingested_by_host. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay.
(DPS) Total metric data points billed by other entities
The number of billed metric data points that cannot be assigned to a host. The values reported in this metric are not eligible for included metric deduction and will be billed as is. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the monitored entities that consume this usage, use the other_by_entity metric.
(DPS) Billed metric data points reported and split by other entities
The number of billed metric data points split by entities that cannot be assigned to a host. The values reported in this metric are not eligible for included metric deduction and will be billed as is. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Custom Traces Classic billing usage
The number of spans ingested aggregated over all monitored entities. A span is a single operation within a distributed trace, ingested into Dynatrace. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Custom Traces Classic billing usage by monitored entity
The number of spans ingested split by monitored entity. A span is a single operation within a distributed trace, ingested into Dynatrace. For details on span types, refer to the usage_by_span_type metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Custom Traces Classic billing usage by span type
The number of spans ingested split by span type. A span is a single operation within a distributed trace, ingested into Dynatrace. Span kinds can be CLIENT, SERVER, PRODUCER, CONSUMER, or INTERNAL. For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
DDU events consumption by event info
License consumption of Davis data units from the events pool, split by event info.
DDU events consumption by monitored entity
License consumption of Davis data units from the events pool, split by monitored entity.
Total DDU events consumption
The sum of Davis data unit license consumption for the events pool, aggregated over all monitored entities.
DDU log consumption by log path
License consumption of Davis data units from the log pool, split by log path.
DDU log consumption by monitored entity
License consumption of Davis data units from the log pool, split by monitored entity.
Total DDU log consumption
The sum of Davis data unit license consumption for the log pool, aggregated over all logs.
DDU metrics consumption by monitored entity
License consumption of Davis data units from the metrics pool, split by monitored entity.
DDU metrics consumption by monitored entity without host-unit included DDUs
License consumption of Davis data units from the metrics pool, split by monitored entity. This aggregates host-unit included metrics, so the value might be higher than the actual consumption.
Reported metrics DDUs by metric key
Reported Davis data unit usage from the metrics pool, split by metric key.
Total DDU metrics consumption
The sum of Davis data unit license consumption for the metrics pool, aggregated over all metrics.
DDU serverless consumption by function
License consumption of Davis data units from the serverless pool, split by Amazon Resource Name (ARN).
DDU serverless consumption by service
License consumption of Davis data units from the serverless pool, split by service.
Total DDU serverless consumption
The sum of Davis data unit license consumption for the serverless pool, aggregated over all services.
DDU traces consumption by span type
License consumption of Davis data units from the traces pool, split by SpanKind as defined in the OpenTelemetry specification.
DDU traces consumption by monitored entity
License consumption of Davis data units from the traces pool, split by monitored entity.
Total DDU traces consumption
The sum of Davis data unit license consumption for the traces pool, aggregated over all monitored entities.
DDU included per host
The number of Davis data units included per host.
DDU included metric data points per host
The number of metric data points included per host.
[Deprecated] (DPS) Business events usage - Ingest & Process
Business events Ingest & Process usage, tracked as bytes ingested within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. [Deprecated] This metric is replaced by billing usage events.
[Deprecated] (DPS) Business events usage - Query
Business events Query usage, tracked as bytes read within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. [Deprecated] This metric is replaced by billing usage events.
[Deprecated] (DPS) Business events usage - Retain
Business events Retain usage, tracked as total storage used within the hour, in bytes. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. [Deprecated] This metric is replaced by billing usage events.
(DPS) Ingested metric data points for Foundation & Discovery
The number of metric data points aggregated over all Foundation & Discovery hosts.
(DPS) Ingested metric data points for Foundation & Discovery per host
The number of metric data points split by Foundation & Discovery hosts.
(DPS) Foundation & Discovery billing usage
The total number of host-hours monitored by Foundation & Discovery, counted in 15-minute intervals.
(DPS) Foundation & Discovery billing usage per host
The host-hours monitored by Foundation & Discovery per host, counted in 15-minute intervals.
(DPS) Available included metric data points for Full-Stack hosts
The total number of included metric data points that can be deducted from the metric data points reported by Full-Stack hosts. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of applied included metric data points, use builtin:billing.full_stack_monitoring.metric_data_points.included_used. If the difference between this metric and the applied metric is greater than zero, more metric data points can be ingested on Full-Stack hosts without incurring additional costs.
(DPS) Used included metric data points for Full-Stack hosts
The number of consumed included metric data points per host monitored with Full-Stack Monitoring. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of potentially available included metric data points, use builtin:billing.full_stack_monitoring.metric_data_points.included. If the difference between the available metric and this metric is greater than zero, more metric data points can be ingested on Full-Stack hosts without incurring additional costs.
(DPS) Total metric data points reported by Full-Stack hosts
The number of metric data points aggregated over all Full-Stack hosts. The values reported in this metric are eligible for included-metric-data-point deduction. Use this total metric to query longer timeframes without losing precision or performance. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view usage on a per-host basis, use builtin:billing.full_stack_monitoring.metric_data_points.ingested_by_host.
(DPS) Metric data points reported and split by Full-Stack hosts
The number of metric data points split by Full-Stack hosts. The values reported in this metric are eligible for included-metric-data-point deduction. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. The pool of available included metric data points for a 15-minute interval is visible via builtin:billing.full_stack_monitoring.metric_data_points.included. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Full-Stack Monitoring billing usage
The total GiB memory of hosts being monitored in full-stack mode, counted in 15 min intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the hosts causing the usage, refer to the usage_per_host metric. For details on the containers causing the usage, refer to the usage_per_container metric.
(DPS) Full-stack usage by container type
The total GiB memory of containers being monitored in full-stack mode, counted in 15 min intervals.
(DPS) Full-Stack Monitoring billing usage per host
The GiB memory per host being monitored in full-stack mode, counted in 15 min intervals. For example, a host with 8 GiB of RAM monitored for 1 hour has 4 data points with a value of 2. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
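The worked example above (an 8 GiB host monitored for 1 hour produces 4 data points with a value of 2) follows from billing GiB-hours in quarter-hour slices. A minimal sketch, using a hypothetical helper that is not part of any Dynatrace API:

```python
def full_stack_data_points(host_memory_gib: float, monitored_hours: float) -> list[float]:
    """One data point per 15-minute interval; each is host GiB * 0.25 h."""
    intervals = int(monitored_hours * 4)  # four 15-minute intervals per hour
    return [host_memory_gib * 0.25] * intervals

points = full_stack_data_points(8, 1)  # 8 GiB host monitored for 1 hour
print(points)       # → [2.0, 2.0, 2.0, 2.0]
print(sum(points))  # → 8.0 GiB-hours total
```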
(DPS) Available included metric data points for Infrastructure-monitored hosts
The total number of included metric data points that can be deducted from the metric data points reported by Infrastructure-monitored hosts. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of applied included metric data points, use builtin:billing.infrastructure_monitoring.metric_data_points.included_used. If the difference between this metric and the applied metrics is greater than zero, more metrics could be ingested on Infrastructure-monitored hosts without incurring additional costs.
(DPS) Used included metric data points for Infrastructure-monitored hosts
The number of consumed included metric data points for Infrastructure-monitored hosts. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of potentially available included metrics, use builtin:billing.infrastructure_monitoring.metric_data_points.included. If the available metrics exceed this metric, more metrics could be ingested on Infrastructure-monitored hosts without incurring additional costs.
(DPS) Total metric data points reported by Infrastructure-monitored hosts
The number of metric data points aggregated over all Infrastructure-monitored hosts. The values reported in this metric are eligible for included-metric-data-point deduction. Use this total metric to query longer timeframes without losing precision or performance. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view usage on a per-host basis, use builtin:billing.infrastructure_monitoring.metric_data_points.ingested_by_host.
(DPS) Metric data points reported and split by Infrastructure-monitored hosts
The number of metric data points split by Infrastructure-monitored hosts. The values reported in this metric are eligible for included-metric-data-point deduction. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. The pool of available included metrics for a "15-minute interval" is visible via builtin:billing.infrastructure_monitoring.metric_data_points.included . To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Infrastructure Monitoring billing usage
The total number of host-hours being monitored in infrastructure-only mode, counted in 15 min intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the hosts causing the usage, refer to the usage_per_host metric.
(DPS) Infrastructure Monitoring billing usage per host
The host-hours being monitored in infrastructure-only mode, counted in 15 min intervals. A host monitored for the whole hour has 4 data points with a value of 0.25, regardless of the memory size. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
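In contrast to the memory-based Full-Stack metric, Infrastructure Monitoring bills flat host-hours: every 15-minute interval contributes 0.25, regardless of memory size. A sketch of that rule, again as a hypothetical helper:

```python
def infra_data_points(monitored_hours: float) -> list[float]:
    """One data point per 15-minute interval, always 0.25 host-hours,
    regardless of the host's memory size."""
    return [0.25] * int(monitored_hours * 4)

points = infra_data_points(1)  # one host monitored for a full hour
print(points)       # → [0.25, 0.25, 0.25, 0.25]
print(sum(points))  # → 1.0 host-hour
```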
(DPS) Kubernetes Platform Monitoring billing usage
The total number of monitored Kubernetes pods per hour, split by cluster and namespace and counted in 15 min intervals. A pod monitored for the whole hour has 4 data points with a value of 0.25.
(DPS) Log Management and Analytics usage - Ingest & Process
Log Management and Analytics Ingest & Process usage, tracked as bytes ingested within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay.
(DPS) Log Management and Analytics usage - Query
Log Management and Analytics Query usage, tracked as bytes read within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay.
(DPS) Log Management and Analytics usage - Retain
Log Management and Analytics Retain usage, tracked as total storage used within the hour, in bytes. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay.
(DPS) Total Log Monitoring Classic billing usage
The number of log records ingested aggregated over all monitored entities. A log record is recognized by either a timestamp or a JSON object. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Log Monitoring Classic billing usage by monitored entity
The number of log records ingested split by monitored entity. A log record is recognized by either a timestamp or a JSON object. For details on the log path, refer to the usage_by_log_path metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Log Monitoring Classic billing usage by log path
The number of log records ingested split by log path. A log record is recognized by either a timestamp or a JSON object. For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Mainframe Monitoring billing usage
The total number of MSU-hours being monitored, counted in 15 min intervals.
(DPS) Total Real-User Monitoring Property (mobile) billing usage
(Mobile) User action and session properties count. For details on how usage is calculated, refer to the documentation or builtin:billing.real_user_monitoring.mobile.property.usage_by_application. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Real-User Monitoring Property (mobile) billing usage by application
(Mobile) User action and session properties count by application. The billed value is calculated based on the number of sessions reported in builtin:billing.real_user_monitoring.mobile.session.usage_by_app and builtin:billing.real_user_monitoring.mobile.session_with_replay.usage_by_app, plus the number of configured properties that exceed the included number of properties (free of charge) offered for a given application. Data points are only written for billed sessions. If the value is 0, you have available metric data points. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Real-User Monitoring (mobile) billing usage
(Mobile) Session count without Session Replay. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions end during the same minute, the values are added together. Use this total metric to query longer timeframes without losing precision or performance. To view the applications that consume this usage, refer to the usage_by_app metric.
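The duration-based session billing described above (a 3-hour session yields a single data point of 3; sessions ending in the same minute are summed) can be sketched as a per-minute aggregation. The helper and its tuple input format are hypothetical, chosen only for illustration:

```python
from collections import defaultdict

def session_data_points(sessions: list[tuple[int, float]]) -> dict[int, float]:
    """Aggregate billed session-hours per end minute.

    `sessions` holds (end_minute, duration_hours) pairs; each session is
    billed as its duration in hours, and sessions ending in the same
    minute are summed into one data point.
    """
    points: dict[int, float] = defaultdict(float)
    for end_minute, duration_hours in sessions:
        points[end_minute] += duration_hours
    return dict(points)

# A 3-hour session and a half-hour session ending in the same minute
# produce a single data point of 3.5.
print(session_data_points([(10, 3.0), (10, 0.5)]))  # → {10: 3.5}
```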
(DPS) Real-User Monitoring (mobile) billing usage by application
(Mobile) Session count without Session Replay split by application. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions of the same application end during the same minute, the values are added together. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Real-User Monitoring (mobile) with Session Replay billing usage
(Mobile) Session count with Session Replay. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions end during the same minute, the values are added together. Use this total metric to query longer timeframes without losing precision or performance. To view the applications that consume this usage, refer to the usage_by_app metric.
(DPS) Real-User Monitoring (mobile) with Session Replay billing usage by application
(Mobile) Session count with Session Replay split by application. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions of the same application end during the same minute, the values are added together. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Real-User Monitoring Property (web) billing usage
(Web) User action and session properties count. For details on how usage is calculated, refer to the documentation or builtin:billing.real_user_monitoring.web.property.usage_by_application . Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Real-User Monitoring Property (web) billing usage by application
(Web) User action and session properties count by application. The billed value is calculated based on the number of sessions reported in builtin:billing.real_user_monitoring.web.session.usage_by_app and builtin:billing.real_user_monitoring.web.session_with_replay.usage_by_app, plus the number of configured properties that exceed the included number of properties (free of charge) offered for a given application. Data points are only written for billed sessions. If the value is 0, you have available metric data points. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Real-User Monitoring (web) billing usage
(Web) Session count without Session Replay. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions end during the same minute, the values are added together. Use this total metric to query longer timeframes without losing precision or performance. To view the applications that consume this usage, refer to the usage_by_app metric.
(DPS) Real-User Monitoring (web) billing usage by application
(Web) Session count without Session Replay split by application. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions of the same application end during the same minute, the values are added together. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Real-User Monitoring (web) with Session Replay billing usage
(Web) Session count with Session Replay. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions end during the same minute, the values are added together. Use this total metric to query longer timeframes without losing precision or performance. To view the applications that consume this usage, refer to the usage_by_app metric.
(DPS) Real-User Monitoring (web) with Session Replay billing usage by application
(Web) Session count with Session Replay split by application. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions of the same application end during the same minute, the values are added together. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Runtime Application Protection billing usage
Total GiB-memory of hosts protected with Runtime Application Protection (Application Security), counted at 15-minute intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the monitored hosts, refer to the usage_per_host metric.
(DPS) Runtime Application Protection billing usage per host
GiB-memory per host protected with Runtime Application Protection (Application Security), counted at 15-minute intervals. For example, a host with 8 GiB of RAM monitored for 1 hour has 4 data points with a value of 2. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Runtime Vulnerability Analytics billing usage
Total GiB-memory of hosts protected with Runtime Vulnerability Analytics (Application Security), counted at 15-minute intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the monitored hosts, refer to the usage_per_host metric.
(DPS) Runtime Vulnerability Analytics billing usage per host
GiB-memory per host protected with Runtime Vulnerability Analytics (Application Security), counted at 15-minute intervals. For example, a host with 8 GiB of RAM monitored for 1 hour has 4 data points with a value of 2. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Serverless Functions Classic billing usage
The number of invocations of the serverless function aggregated over all monitored entities. The term "function invocations" is equivalent to "function requests" or "function executions". Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Serverless Functions Classic billing usage by monitored entity
The number of invocations of the serverless function split by monitored entity. The term "function invocations" is equivalent to "function requests" or "function executions". For details on which functions are invoked, refer to the usage_by_function metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Serverless Functions Classic billing usage by function
The number of invocations of the serverless function split by function. The term "function invocations" is equivalent to "function requests" or "function executions". For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
Actions
The number of billed actions consumed by browser monitors.
(DPS) Total Browser Monitor or Clickpath billing usage
The number of synthetic actions that trigger a web request: page loads, navigation events, and actions that trigger an XHR or Fetch request. Scroll-downs, keystrokes, and clicks that don't trigger web requests aren't counted. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Browser Monitor or Clickpath billing usage per synthetic browser monitor
The number of synthetic actions that trigger a web request: page loads, navigation events, and actions that trigger an XHR or Fetch request. Scroll-downs, keystrokes, and clicks that don't trigger web requests aren't counted. Actions are split by the synthetic browser monitors that caused them. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
Third-party results
The number of billed results consumed by third-party monitors.
(DPS) Total Third-Party Synthetic API Ingestion billing usage
The number of synthetic test results pushed into Dynatrace via the third-party Synthetic API. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Third-Party Synthetic API Ingestion billing usage per external browser monitor
The number of synthetic test results pushed into Dynatrace via the third-party Synthetic API. Ingestions are split by the external synthetic browser monitors for which the results were ingested. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
Requests
The number of billed requests consumed by HTTP monitors.
(DPS) Total HTTP monitor billing usage
The number of HTTP requests performed during execution of a synthetic HTTP monitor. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) HTTP monitor billing usage per HTTP monitor
The number of HTTP requests performed, split by synthetic HTTP monitor. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
ALB number of active connections
ALB number of new connections
ALB number of 4XX errors
ALB number of 5XX errors
ALB number of 4XX target errors
ALB number of 5XX target errors
ALB number of rejected connections
ALB number of target connection errors
ALB number of client TLS negotiation errors
ALB number of processed bytes
ALB number of consumed LCUs
ALB number of requests
ALB target response time
Number of running EC2 instances (ASG)
Number of stopped EC2 instances (ASG)
Number of terminated EC2 instances (ASG)
Number of running EC2 instances (AZ)
Number of stopped EC2 instances (AZ)
Number of terminated EC2 instances (AZ)
DynamoDB read capacity units
DynamoDB write capacity units
DynamoDB provisioned read capacity units
DynamoDB provisioned write capacity units
DynamoDB read capacity units %
DynamoDB write capacity units %
DynamoDB number of requests with HTTP 500 status code
DynamoDB number of requests with HTTP 400 status code
DynamoDB successful request latency for operation
DynamoDB number of items returned by operation
DynamoDB number of throttled requests for operation
DynamoDB number of read throttled events
DynamoDB number of write throttled events
Number of tables for AvailabilityZone
EBS volume read latency
EBS volume write latency
EBS volume consumed OPS
EBS volume read OPS
EBS volume write OPS
EBS volume throughput %
EBS volume read throughput
EBS volume write throughput
EBS volume idle time %
EBS volume queue length
EC2 CPU usage %
EC2 instance storage read IOPS
EC2 instance storage read rate
EC2 instance storage write IOPS
EC2 instance storage write rate
EC2 network data received rate
EC2 network data transmitted rate
CLB backend connection errors
CLB number of backend 2XX errors
CLB number of backend 3XX errors
CLB number of backend 4XX errors
CLB number of backend 5XX errors
CLB number of 4XX errors
CLB number of 5XX errors
CLB frontend errors percentage
CLB number of healthy hosts
CLB number of unhealthy hosts
CLB latency
CLB number of completed requests
LambdaFunction concurrent executions count
LambdaFunction code execution time
LambdaFunction number of failed invocations with HTTP 4XX status code
LambdaFunction rate of failed invocations to all invocations %
LambdaFunction number of times a function is invoked
LambdaFunction provisioned concurrent executions count
LambdaFunction provisioned concurrency invocation count
LambdaFunction provisioned concurrency spillover invocation count
LambdaFunction throttled function invocation count
NLB number of active flows
NLB number of new flows
NLB number of client resets
NLB number of resets
NLB number of target resets
NLB number of processed bytes
NLB number of consumed LCUs
RDS CPU usage %
RDS read latency
RDS write latency
RDS freeable memory
RDS swap usage
RDS network received throughput
RDS network transmitted throughput
RDS read IOPS
RDS write IOPS
RDS read throughput
RDS write throughput
RDS connections
RDS free storage space %
RDS restarts
Failed requests
Other requests
Successful requests
Total requests
Unauthorized requests
Capacity
Duration
Healthy host count
Unhealthy host count
Requests failed
Requests total
Response status
Current connections count
Network throughput
Requests in application queue
Requests in application queue
Function execution count
Function execution units count
HTTP 5xx
IO other operations/s
IO read operations/s
IO write operations/s
IO other bytes/s
IO read bytes/s
IO write bytes/s
Received bytes
Sent bytes
HTTP 2xx
HTTP 403
HTTP 5xx
IO other operations/s
IO read operations/s
IO write operations/s
IO other bytes/s
IO read bytes/s
IO write bytes/s
Response time avg
Received bytes
Sent bytes
Requests count
Available Storage
Data Usage
Document Count
Document Quota
Index Usage
Metadata Requests
Normalized request units consumption
Provisioned Throughput
Replication Latency
Total number of request units
Total number of requests
Service Availability
Capture backlog
Captured bytes
Captured messages
Quota exceeded errors
Server errors
User errors
Incoming requests
Successful requests
Throttled requests
Incoming bytes
Outgoing bytes
Incoming messages
Outgoing messages
Active connections
Closed connections
Opened connections
Commands abandoned
Commands completed
Commands rejected
Connected devices
Number of throttling errors
Total device data usage
Total devices
Messages delivered to the built-in endpoint (messages/events)
Message latency for the built-in endpoint (messages/events)
Messages delivered to Event Hub endpoints
Message latency for event hub endpoints
Dropped messages
Invalid messages
Orphaned messages
Telemetry message send attempts
Telemetry messages sent
Messages matching fallback condition
Message latency for service bus queue endpoints
Messages delivered to service bus queue endpoints
Message latency for service bus topic endpoints
Messages delivered to service bus topic endpoints
Message latency for storage endpoints
Blobs written to storage
Data written to storage
Messages delivered to storage endpoints
Load balancer DIP TCP availability
Load balancer DIP UDP availability
Load Balancer VIP availability
SNAT connections successful
SNAT connections pending
SNAT connections failed
Bytes received
Bytes sent
Packets received
Packets sent
SYN packets received
SYN packets sent
Cache hits
Cache misses
Read bytes/s
Write bytes/s
Get commands
Set commands
Total no. of processed commands
No. of evicted keys
No. of expired keys
Total no. of keys
Used memory
Used memory RSS
Connected clients
Server load
Processor time
Number of starting VMs in region
Number of active VMs in region
Number of stopped VMs in region
Total active connections
Server errors
User errors
Count of messages
Count of active messages
Count of dead-lettered messages
Count of scheduled messages
Incoming messages
Outgoing messages
Incoming requests
Total successful requests
Throttled requests
Service bus premium namespace CPU usage metric
Service bus premium namespace memory usage metric
Service bus size
Server errors
User errors
Count of messages in queue
Count of active messages in a queue
Count of dead-lettered messages in a queue
Count of scheduled messages in a queue
Incoming messages
Outgoing messages
Incoming requests
Total successful requests
Throttled requests
Size of a queue
Server errors
User errors
Count of messages in topic
Count of active messages in a topic
Count of dead-lettered messages in a topic
Count of scheduled messages in a topic
Incoming messages
Outgoing messages
Incoming requests
Total successful requests
Throttled requests
Size of a topic
Blocked by firewall
Failed connections
Successful connections
DTU limit
DTU used
DTU percentage
Data I/O percentage
Log I/O percentage
Database size percentage
Total database size
In-Memory OLTP storage percent
CPU percentage
Deadlocks
Sessions percentage
Workers percentage
Storage limit
Database size percentage
Storage used
In-memory OLTP storage percent
DTU percentage
eDTU limit
eDTU used
Data I/O percentage
Log I/O percentage
CPU percentage
Sessions percentage
Workers percentage
Transactions count
E2E success latency
Server success latency
Egress bytes
Ingress bytes
Blob capacity
Blob container count
Blob count
Transactions count
E2E success latency
Server success latency
Egress bytes
Ingress bytes
File capacity
File share count
File count
Transactions count
E2E success latency
Server success latency
Egress bytes
Ingress bytes
Queue capacity
Queue count
Queue message count
Transactions count
Server success latency
E2E success latency
Egress bytes
Ingress bytes
Table capacity
Table count
Table entity count
Disk read bytes
Disk read operations per sec
Disk write bytes
Disk write operations per sec
Network in bytes
Network out bytes
Percentage CPU
Disk read bytes
Disk read operations per sec
Disk write bytes
Disk write operations per sec
Network in bytes
Network out bytes
Number of starting VMs in scale set
Number of active VMs in scale set
Number of stopped VMs in scale set
Percentage CPU
CF: Time to fetch cell states
The time that the auctioneer took to fetch state from all the cells when running its auction.
CF: App instance placement failures
The number of application instances that the auctioneer failed to place on Diego cells.
CF: App instance starts
The number of application instances that the auctioneer successfully placed on Diego cells.
CF: Task placement failures
The number of tasks that the auctioneer failed to place on Diego cells.
CF: 502 responses
The number of responses that indicate invalid service responses produced by an application.
CF: Response latency
The average response time from the application to clients.
CF: 5xx responses
The number of responses that indicate repeatedly crashing apps or response issues from applications.
CF: Total requests
The number of all requests representing the overall traffic flow.
[Deprecated] Kubernetes: Cluster cores
Total allocatable CPU cores per Kubernetes cluster. Deprecated - use builtin:kubernetes.node.cpu_allocatable instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster CPU available
Total CPU cores available for additional pods per Kubernetes cluster. Deprecated - use builtin:kubernetes.node.cpu_allocatable - builtin:kubernetes.node.requests_cpu instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster CPU available, %
Percent distribution of available CPU relative to total number of cluster cores. Provide an aggregation type to get quantiles. Deprecated - use (builtin:kubernetes.node.cpu_allocatable - builtin:kubernetes.node.requests_cpu)/builtin:kubernetes.node.cpu_allocatable instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster CPU limit
Total CPU limit per Kubernetes cluster. Deprecated - use builtin:kubernetes.workload.limits_cpu or builtin:kubernetes.node.limits_cpu instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster CPU limit, %
Percent distribution of CPU limits relative to total number of cluster cores. Provide an aggregation type to get quantiles. Deprecated - use builtin:kubernetes.workload.limits_cpu or builtin:kubernetes.node.limits_cpu instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster CPU requests
Total CPU requests per Kubernetes cluster. Deprecated - use builtin:kubernetes.workload.requests_cpu or builtin:kubernetes.node.requests_cpu instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster CPU requests, %
Percent distribution of CPU requests relative to total number of cluster cores. Provide an aggregation type to get quantiles. Deprecated - use builtin:kubernetes.workload.requests_cpu or builtin:kubernetes.node.requests_cpu instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster memory
Total allocatable memory per Kubernetes cluster. Deprecated - use builtin:kubernetes.node.memory_allocatable instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster memory available
Total memory available for additional pods per Kubernetes cluster. Deprecated - use builtin:kubernetes.node.memory_allocatable - builtin:kubernetes.node.requests_memory instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster memory available, %
Percent distribution of available memory relative to total cluster memory. Provide an aggregation type to get quantiles. Deprecated - use (builtin:kubernetes.node.memory_allocatable - builtin:kubernetes.node.requests_memory)/builtin:kubernetes.node.memory_allocatable instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster memory limit
Total memory limit per Kubernetes cluster. Deprecated - use builtin:kubernetes.workload.limits_memory or builtin:kubernetes.node.limits_memory instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster memory limits, %
Percent distribution of memory limits relative to total cluster memory. Provide an aggregation type to get quantiles. Deprecated - use builtin:kubernetes.workload.limits_memory or builtin:kubernetes.node.limits_memory instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster memory requests
Total memory requests per Kubernetes cluster. Deprecated - use builtin:kubernetes.workload.requests_memory or builtin:kubernetes.node.requests_memory instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster memory requests, %
Percent distribution of memory requests relative to total cluster memory. Provide an aggregation type to get quantiles. Deprecated - use builtin:kubernetes.workload.requests_memory or builtin:kubernetes.node.requests_memory instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster nodes
Total nodes per Kubernetes cluster. Deprecated - use builtin:kubernetes.nodes instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Cluster readyz status
Current status of the Kubernetes API server reported by the /readyz endpoint (0 or 1). Deprecated - use builtin:kubernetes.cluster.readyz instead (requires ActiveGate 1.249).
[Deprecated] Kubernetes: Quota CPU limits, mCores
CPU limits quota per namespace and resource quota name in millicores. Deprecated - use builtin:kubernetes.resourcequota.limits_cpu instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Quota CPU requests, mCores
CPU requests quota per namespace and resource quota name in millicores. Deprecated - use builtin:kubernetes.resourcequota.requests_cpu instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Quota memory limits, bytes
Memory limits quota per namespace and resource quota name in bytes. Deprecated - use builtin:kubernetes.resourcequota.limits_memory instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Quota memory requests, bytes
Memory requests quota per namespace and resource quota name in bytes. Deprecated - use builtin:kubernetes.resourcequota.requests_memory instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Quota pod counts
Pods count quota per namespace or resource quota name. Deprecated - use builtin:kubernetes.resourcequota.pods instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Quota CPU limits used, mCores
Used CPU limits quota per namespace or resource quota name in millicores. Deprecated - use builtin:kubernetes.resourcequota.limits_cpu_used instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Quota CPU requests used, mCores
Used CPU request quota per namespace or resource quota name in millicores. Deprecated - use builtin:kubernetes.resourcequota.requests_cpu_used instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Quota memory limits used, bytes
Used memory limits quota per namespace or resource quota name in bytes. Deprecated - use builtin:kubernetes.resourcequota.limits_memory_used instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Quota memory requests used, bytes
Used memory requests quota per namespace or resource quota name in bytes. Deprecated - use builtin:kubernetes.resourcequota.requests_memory_used instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Quota pod count used
Used pod count quota per namespace or resource quota name. Deprecated - use builtin:kubernetes.resourcequota.pods_used instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Namespace CPU limits, mCores
Total CPU limits per namespace and workload type in millicores. Deprecated - use builtin:kubernetes.workload.limits_cpu instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Namespace CPU requests, mCores
Total CPU requests per namespace and workload type in millicores. Deprecated - use builtin:kubernetes.workload.requests_cpu instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Namespace desired pods
Number of desired pods per namespace and workload type. Deprecated - use builtin:kubernetes.workload.pods_desired instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Namespace memory limits, bytes
Total memory limits per namespace and workload type in bytes. Deprecated - use builtin:kubernetes.workload.limits_memory instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Namespace memory requests, bytes
Total memory requests per namespace and workload type in bytes. Deprecated - use builtin:kubernetes.workload.requests_memory instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Namespace running pods
Number of running pods per namespace and workload type. Deprecated - use builtin:kubernetes.pods instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Namespace workloads
Number of workloads per namespace and workload type. Deprecated - use builtin:kubernetes.workloads instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Node conditions
Health status of a Kubernetes node. Deprecated - use builtin:kubernetes.node.conditions instead (requires ActiveGate 1.249).
[Deprecated] Kubernetes: Node cores
Total allocatable CPU cores per Kubernetes node. Deprecated - use builtin:kubernetes.node.cpu_allocatable instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Node CPU available
Total CPU cores available for additional pods per Kubernetes node. Deprecated - use builtin:kubernetes.node.cpu_allocatable - builtin:kubernetes.node.requests_cpu instead (requires ActiveGate 1.245).
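The deprecated value can be reproduced from the two replacement metrics by subtraction. A minimal sketch in Python, using hypothetical per-node values (the dictionary keys and numbers are illustrative, not actual API fields):

```python
# Hypothetical per-node samples in millicores; names are illustrative.
nodes = {
    "node-a": {"cpu_allocatable": 4000, "requests_cpu": 2500},
    "node-b": {"cpu_allocatable": 8000, "requests_cpu": 1000},
}

# CPU available for additional pods = allocatable CPU - requested CPU
cpu_available = {
    name: vals["cpu_allocatable"] - vals["requests_cpu"]
    for name, vals in nodes.items()
}

print(cpu_available)  # {'node-a': 1500, 'node-b': 7000}
```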
[Deprecated] Kubernetes: Node CPU limit
Total CPU limit per Kubernetes node. Deprecated - use builtin:kubernetes.node.limits_cpu instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Node CPU requests
Total CPU requests per Kubernetes node. Deprecated - use builtin:kubernetes.node.requests_cpu instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Node memory
Total allocatable memory per Kubernetes node. Deprecated - use builtin:kubernetes.node.memory_allocatable instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Node memory available
Total memory available for additional pods per Kubernetes node. Deprecated - use builtin:kubernetes.node.memory_allocatable - builtin:kubernetes.node.requests_memory instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Node memory limit
Total memory limit per Kubernetes node. Deprecated - use builtin:kubernetes.node.limits_memory instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Node memory requests
Total memory requests per Kubernetes node. Deprecated - use builtin:kubernetes.node.requests_memory instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Container restarts per pod
Number of container restarts within a pod. The metric is only written if there was at least one container restart. Use the transformer :default(0) in your metric selector to work with missing values. Deprecated - use builtin:kubernetes.container.restarts instead (requires ActiveGate 1.247).
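Because this metric is written only when at least one restart occurred, a missing datapoint means zero restarts rather than unknown data; the `:default(0)` transformer applies that interpretation server-side. A sketch of the equivalent client-side handling, with hypothetical data:

```python
# Hypothetical restart datapoints per pod; a missing entry means the
# metric was not written (no restarts), not that data is unknown.
restart_datapoints = {"pod-a": 3, "pod-c": 1}
pods = ["pod-a", "pod-b", "pod-c"]

# Equivalent of the :default(0) transformer: fill the gaps with 0.
restarts = {pod: restart_datapoints.get(pod, 0) for pod in pods}

print(restarts)  # {'pod-a': 3, 'pod-b': 0, 'pod-c': 1}
```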
[Deprecated] Kubernetes: Containers per workload
Number of containers per workload, split by container state. Deprecated - use builtin:kubernetes.containers instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Pod CPU limits, mCores
CPU limits per pod in millicores. Deprecated - use builtin:kubernetes.workload.limits_cpu instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Pod CPU requests, mCores
CPU requests per pod in millicores. Deprecated - use builtin:kubernetes.workload.requests_cpu instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Containers - desired containers per workload
Number of desired containers per workload. Deprecated - use builtin:kubernetes.workload.containers_desired instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Pod memory limits, bytes
Memory limits per pod in bytes. Deprecated - use builtin:kubernetes.workload.limits_memory instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Pod memory requests, bytes
Memory requests per pod in bytes. Deprecated - use builtin:kubernetes.workload.requests_memory instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Workloads - desired pods per workload
Number of desired pods per workload. Deprecated - use builtin:kubernetes.workload.pods_desired instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Pod counts
Number of pods per workload and phase. Deprecated - use builtin:kubernetes.pods instead (requires ActiveGate 1.245).
[Deprecated] Kubernetes: Workloads - running pods per workload
Number of running pods per workload. Deprecated - use builtin:kubernetes.pods instead (requires ActiveGate 1.245).
CPU usage
Disk allocation
Disk capacity
Memory resident
Memory usage
Network incoming bytes rate
Network outgoing bytes rate
Host CPU usage %
Host disk usage rate
Host disk commands aborted
Host disk queue latency
Host disk read IOPS
Host disk read latency
Host disk read rate
Host disk write IOPS
Host disk write latency
Host disk write rate
Host compression rate
Host memory consumed
Host decompression rate
Host swap in rate
Host swap out rate
Host network data received rate
Host network data transmitted rate
Data received rate
Data transmitted rate
Packets received dropped
Packets transmitted dropped
Number of VMs
Number of VMs powered-off
Number of VMs suspended
Host availability %
VM CPU ready %
VM swap wait
VM CPU usage MHz
VM CPU usage %
VM disk usage rate
VM memory active
VM compression rate
VM memory consumed
VM decompression rate
VM swap in rate
VM swap out rate
VM network data received rate
VM network data transmitted rate
Containers: CPU limit, mCores
CPU resource limit per container in millicores.
Containers: CPU logical cores
Number of logical CPU cores of the host.
Containers: CPU shares
Number of CPU shares allocated per container.
Containers: CPU throttling, mCores
CPU throttling per container in millicores.
Containers: CPU throttled time, ns/min
Total amount of time a container has been throttled, in nanoseconds per minute.
Containers: CPU usage, mCores
CPU usage per container in millicores.
Containers: CPU usage, % of limit
Percent CPU usage per container relative to CPU resource limit. Logical cores are used if CPU limit isn't set.
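The fallback described above (the container's CPU limit if set, otherwise the host's logical cores) can be sketched as follows; the function name and sample values are illustrative:

```python
def cpu_usage_percent(usage_mcores, limit_mcores, logical_cores):
    """Percent CPU usage relative to the container limit; falls back to
    the host's logical cores (1000 mCores each) if no limit is set."""
    capacity = limit_mcores if limit_mcores else logical_cores * 1000
    return 100.0 * usage_mcores / capacity

print(cpu_usage_percent(250, 500, 4))   # 50.0  (limit of 500 mCores set)
print(cpu_usage_percent(250, None, 4))  # 6.25  (no limit: 4000 mCores)
```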
Containers: CPU system usage, mCores
CPU system usage per container in millicores.
Containers: CPU system usage time, ns/min
Used system time per container in nanoseconds per minute.
Containers: CPU usage time, ns/min
Sum of used system and user time per container in nanoseconds per minute.
Containers: CPU user usage, mCores
CPU user usage per container in millicores.
Containers: CPU user usage time, ns/min
Used user time per container in nanoseconds per minute.
Containers: Memory cache, bytes
Page cache memory per container in bytes.
Containers: Memory limit, bytes
Memory limit per container in bytes. If no limit is set, this is an empty value.
Containers: Memory limit, % of physical memory
Percent memory limit per container relative to total physical memory. If no limit is set, this is an empty value.
Containers: Memory - out of memory kills
Number of out of memory kills for a container.
Containers: Memory - total physical memory, bytes
Total physical memory on the host in bytes.
Containers: Memory usage, bytes
Resident set size (Linux) or private working set size (Windows) per container in bytes.
Containers: Memory usage, % of limit
Resident set size (Linux) or private working set size (Windows) per container in percent relative to container memory limit. If no limit is set, this equals total physical memory.
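The fallback described above (the container memory limit if set, otherwise total physical memory) can be sketched as follows; names and values are illustrative:

```python
def memory_usage_percent(usage_bytes, limit_bytes, physical_bytes):
    """Percent memory usage relative to the container limit; if no limit
    is set, total physical memory is used as the denominator."""
    denominator = limit_bytes if limit_bytes else physical_bytes
    return 100.0 * usage_bytes / denominator

print(memory_usage_percent(512 * 2**20, 1024 * 2**20, 8 * 2**30))  # 50.0
print(memory_usage_percent(512 * 2**20, None, 8 * 2**30))          # 6.25
```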
Container bytes received
Container bytes transmitted
Container CPU usage
Devicemapper data space available
Devicemapper data space used
Devicemapper metadata space available
Devicemapper metadata space used
Memory percent
Container memory usage
Number of containers launched
Number of containers running
Number of containers running
Number of containers terminated
Container throttled time
Dashboard view count
Host availability
Host availability state metric reported in 1-minute intervals
z/OS General CPU usage
The percent of the general-purpose central processor (GCP) used
z/OS Rolling 4 hour MSU average
The 4h average of consumed million service units on this LPAR
z/OS MSU capacity
The overall capacity of million service units on this LPAR
z/OS zIIP eligible time
The zIIP eligible time spent on the general-purpose central processor (GCP) after process start per minute
AIX Entitlement configured
Capacity Entitlement is the number of virtual processors assigned to the AIX partition, measured in fractions of a processor equal to 0.1 or 0.01. For more information about entitlement, see Assigning the appropriate processor entitled capacity in the official IBM documentation.
AIX Entitlement used
Percentage of entitlement used. Capacity Entitlement is the number of virtual cores assigned to the AIX partition. For more information about entitlement, see Assigning the appropriate processor entitled capacity in the official IBM documentation.
CPU idle
Average CPU time during which the CPU had nothing to do
CPU I/O wait
Percentage of time when CPU was idle during which the system had an outstanding I/O request. It is not available on Windows.
System load
The average number of processes being executed by the CPU or waiting to be executed over the last minute
System load15m
The average number of processes being executed by the CPU or waiting to be executed over the last 15 minutes
System load5m
The average number of processes being executed by the CPU or waiting to be executed over the last 5 minutes
CPU other
Average CPU time spent on other tasks, such as servicing interrupt requests (IRQ) or running virtual machines under the control of the host's kernel (that is, the host acts as a hypervisor for VMs). Available only for Linux hosts
AIX Physical consumed
Total CPUs consumed by the AIX partition
CPU steal
Average CPU time during which a virtual machine waits to get CPU cycles from the hypervisor. In a virtual environment, CPU cycles are shared across virtual machines on the hypervisor server. If your virtualized host displays high CPU steal, CPU cycles are being taken away from your virtual machine to serve other purposes, which may indicate an overloaded hypervisor. Available only for Linux hosts
CPU system
Average CPU time when CPU was running in kernel mode
CPU usage %
Percentage of CPU time when CPU was utilized. A value close to 100% means most host processing resources are in use, and host CPUs can’t handle additional work
CPU user
Average CPU time when CPU was running in user mode
Number of DNS errors by type
The number of DNS errors by type
Number of orphaned DNS responses
The number of orphaned DNS responses on the host
Number of DNS queries
The number of DNS queries on the host
DNS query time sum
The total time of all DNS queries on the host
DNS query time
The average DNS query time. Calculated as the DNS query time sum divided by the number of DNS queries for each host and DNS server pair.
DNS query time by DNS server
The weighted average DNS query time by DNS server IP. Calculated as the DNS query time sum divided by the number of DNS queries; the result is weighted by the number of requests from each host.
DNS query time on host
The weighted average DNS query time on a host. Calculated as the DNS query time sum divided by the number of DNS queries on the host; the result is weighted by the number of requests to each DNS server.
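The DNS query time averages above are all the ratio of a time sum to a query count; summing before dividing is what produces the weighting. A sketch with hypothetical samples:

```python
# Hypothetical (host, dns_server) samples for one interval: total query
# time in ms and query count. Field names are illustrative.
samples = [
    {"host": "h1", "server": "8.8.8.8", "time_sum_ms": 300, "queries": 30},
    {"host": "h2", "server": "8.8.8.8", "time_sum_ms": 100, "queries": 5},
]

def avg_per_server(samples, server):
    """Weighted average per DNS server: sum all time, divide by all
    queries, so hosts issuing more queries weigh more."""
    relevant = [s for s in samples if s["server"] == server]
    total_time = sum(s["time_sum_ms"] for s in relevant)
    total_queries = sum(s["queries"] for s in relevant)
    return total_time / total_queries

print(avg_per_server(samples, "8.8.8.8"))  # 400 / 35 ≈ 11.43 ms
```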
Disk throughput read
File system read throughput in bits per second
Disk throughput write
File system write throughput in bits per second
Disk available
Amount of free space available to users in the file system. On Linux and AIX, this is the free space available to unprivileged users; it excludes the portion of free space reserved for root.
Disk read bytes per second
Speed of read from file system in bytes per second
Disk write bytes per second
Speed of write to file system in bytes per second
Disk available %
Percentage of free space available to users in the file system. On Linux and AIX, this is the percentage of free space available to unprivileged users; it excludes the portion of free space reserved for root.
Inodes available %
Percentage of free inodes available to unprivileged users in the file system. Metric not available on Windows.
Inodes total
Total number of inodes available to unprivileged users in the file system. Metric not available on Windows.
Disk average queue length
Average number of read and write operations in disk queue
Disk read operations per second
Number of read operations from file system per second
Disk read time
Average time of read from file system. It shows average disk latency during read.
Disk used
Amount of used space in file system
Disk used %
Percentage of used space in file system
Disk utilization time
Percent of time spent on disk I/O operations
Disk write operations per second
Number of write operations to file system per second
Disk write time
Average time of write to file system. It shows average disk latency during write.
File descriptors max
Maximum number of file descriptors available for use
File descriptors used
Number of file descriptors used
AIX Kernel threads blocked
Length of the swap queue. The swap queue contains the threads ready to run but swapped out with the currently running threads
AIX Kernel threads I/O event wait
Number of threads waiting for file system direct I/O (cio) plus the number of processes that are asleep waiting for buffered I/O
AIX Kernel threads I/O message wait
Number of threads that are sleeping and waiting for raw I/O operations at a particular time. Raw I/O allows applications to write directly to the Logical Volume Manager (LVM) layer
AIX Kernel threads runnable
Number of runnable threads, that is, threads that are running or ready and waiting for run time. The average number of runnable threads appears in the first column of the vmstat command output
Memory available
The amount of memory (RAM) available on the host, that is, memory available for allocation to new or existing processes. Available memory is an estimate of how much memory can be used without swapping.
Memory available %
The percentage of memory (RAM) available on the host, that is, memory available for allocation to new or existing processes. Available memory is an estimate of how much memory can be used without swapping.
Page faults per second
The number of page faults per second on the monitored host, including both soft faults and hard faults.
Swap available
The amount of swap memory or swap space (also known as paging, which is the on-disk component of the virtual memory system) available.
Swap total
Amount of total swap memory or total swap space (also known as paging, which is the on-disk component of the virtual memory system) for use.
Swap used
The amount of swap memory or swap space (also known as paging, which is the on-disk component of the virtual memory system) used.
Kernel memory
The memory used by the system kernel, including memory used by core OS components along with any device drivers. Typically, this number is very small.
Memory reclaimable
Memory that can be reclaimed for other uses. Reclaimable memory is calculated as available memory (an estimate of how much memory can be used without swapping) minus free memory (the amount of memory currently not used for anything). For more information on reclaimable memory, see this blog post.
Memory total
The amount of memory (RAM) installed on the system.
Memory used %
Percentage of memory currently used, calculated as 100% minus "Memory available %". OneAgent calculates used memory as used = total - available, so the used memory shown in Dynatrace analysis views doesn't equal the used memory reported by system tools. System tools report used memory the way they do for historical reasons; that method isn't representative of how the Linux kernel manages memory in modern systems, and the difference between the two measurements can be significant.
Memory used
OneAgent calculates used memory as used = total - available, so the used memory shown in Dynatrace analysis views doesn't equal the used memory reported by system tools. System tools report used memory the way they do for historical reasons; that method isn't representative of how the Linux kernel manages memory in modern systems, and the difference between the two measurements can be significant.
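The calculation described above can be sketched as follows (sample values are illustrative):

```python
def memory_used(total_bytes, available_bytes):
    """OneAgent-style used memory: total minus available (an estimate of
    memory usable without swapping), not the figure system tools report."""
    return total_bytes - available_bytes

def memory_used_percent(total_bytes, available_bytes):
    # "Memory used %" = 100% - "Memory available %"
    return 100.0 - 100.0 * available_bytes / total_bytes

total, available = 16 * 2**30, 4 * 2**30  # 16 GiB installed, 4 GiB available
print(memory_used(total, available))          # 12884901888 (12 GiB)
print(memory_used_percent(total, available))  # 75.0
```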
NIC packets dropped
Network interface packets dropped on the host
NIC received packets dropped
Network interface received packets dropped on the host
NIC sent packets dropped
Network interface sent packets dropped on the host
NIC packet errors
Network interface packet errors on the host
NIC received packet errors
Network interface received packet errors on a host
NIC sent packet errors
Network interface sent packet errors on the host
NIC packets received
Network interface packets received on the host
NIC packets sent
Network interface packets sent on the host
NIC bytes received
Network interface bytes received on the host
NIC bytes sent on host
Network interface bytes sent on the host
NIC connectivity
Network interface connectivity on the host
NIC receive link utilization
Network interface receive link utilization on the host
NIC transmit link utilization
Network interface transmit link utilization on the host
NIC retransmission
Network interface retransmission on the host
NIC received packets retransmission
Network interface retransmission for received packets on the host
NIC sent packets retransmission
Network interface retransmission for sent packets on the host
Traffic
Network traffic on the host
Traffic in
Traffic incoming at the host
Traffic out
Traffic outgoing from the host
Host retransmission base received
Host aggregated process retransmission base received per second
Host retransmission base sent
Host aggregated process retransmission base sent per second
Host retransmitted packets received
Host aggregated process retransmitted packets received per second
Host retransmitted packets sent
Host aggregated process retransmitted packets sent per second
Localhost session reset received
Host aggregated session reset received per second on localhost
Localhost session timeout received
Host aggregated session timeout received per second on localhost
Localhost new session received
Host aggregated new session received per second on localhost
Host session reset received
Host aggregated process session reset received per second
Host session timeout received
Host aggregated process session timeout received per second
Host new session received
Host aggregated process new session received per second
Host bytes received
Host aggregated process bytes received per second
Host bytes sent
Host aggregated process bytes sent per second
OS Service availability
This metric provides the status of the OS service. If the OS service is running, the OS module reports "1" as the metric value; in any other case, the value is "0". Note that this metric provides data only from Classic Windows services monitoring (supported only on Windows), which has been replaced by the new OS Services monitoring. To learn more, see Classic Windows services monitoring.
OS Process count
This metric shows the average number of processes running on the host over one minute. The reported number of processes is based on processes detected by the OS module, read in 10-second cycles.
PGI count
This metric shows the number of PGIs created by the OS module every minute. It includes every PGI, even those considered unimportant and not reported to Dynatrace.
Reported PGI count
This metric shows the number of PGIs created and reported by the OS module every minute. It includes only PGIs that are considered important and reported to Dynatrace. Important PGIs are those in which OneAgent recognizes the technology, that have open network ports, that generate significant resource usage, or that are created via Declarative process grouping rules. To learn what makes a process important, see Which are the most important processes?
z/OS General CPU time
Total General CPU time per minute
z/OS Consumed MSUs per SMF interval (SMF70EDT)
Number of consumed MSUs per SMF interval (SMF70EDT)
z/OS zIIP time
Total zIIP time per minute
z/OS zIIP usage
Actively used zIIP as a percentage of available zIIP
Host availability %
Host availability %
Host uptime
Time since last host boot up. Requires OneAgent 1.259+. The metric is not supported for application-only OneAgent deployments.
Kubernetes: Cluster readyz status
Current status of the Kubernetes API server reported by the /readyz endpoint (0 or 1).
Kubernetes: Container - out of memory (OOM) kill count
This metric measures the number of out-of-memory (OOM) kills. The most detailed level of aggregation is container. The value corresponds to the 'OOMKilled' status of a container in the pod resource's container status. The metric is only written if there was at least one container OOM kill.
Kubernetes: Container - restart count
This metric measures the number of container restarts. The most detailed level of aggregation is container. The value corresponds to the delta of the 'restartCount' defined in the pod resource's container status. The metric is only written if there was at least one container restart.
Kubernetes: Node conditions
This metric describes the status of a Kubernetes node. The most detailed level of aggregation is node.
Kubernetes: Node - CPU allocatable
This metric measures the total allocatable cpu. The most detailed level of aggregation is node. The value corresponds to the allocatable cpu of a node.
Kubernetes: Container - CPU throttled (by node)
This metric measures the total CPU throttling by container. The most detailed level of aggregation is node.
Kubernetes: Container - CPU usage (by node)
This metric measures the total CPU consumed (user usage + system usage) by container. The most detailed level of aggregation is node.
Kubernetes: Pod - CPU limits (by node)
This metric measures the cpu limits. The most detailed level of aggregation is node. The value is the sum of the cpu limits of all app containers of a pod.
Kubernetes: Pod - memory limits (by node)
This metric measures the memory limits. The most detailed level of aggregation is node. The value is the sum of the memory limits of all app containers of a pod.
Kubernetes: Node - memory allocatable
This metric measures the total allocatable memory. The most detailed level of aggregation is node. The value corresponds to the allocatable memory of a node.
Kubernetes: Container - Working set memory (by node)
This metric measures the current working set memory (memory that cannot be reclaimed under pressure) by container. The OOM Killer is invoked if the working set exceeds the limit. The most detailed level of aggregation is node.
Kubernetes: Pod count (by node)
This metric measures the number of pods. The most detailed level of aggregation is node. The value corresponds to the count of all pods.
Kubernetes: Node - pod allocatable count
This metric measures the total number of allocatable pods. The most detailed level of aggregation is node. The value corresponds to the allocatable pods of a node.
Kubernetes: Pod - CPU requests (by node)
This metric measures the cpu requests. The most detailed level of aggregation is node. The value is the sum of the cpu requests of all app containers of a pod.
Kubernetes: Pod - memory requests (by node)
This metric measures the memory requests. The most detailed level of aggregation is node. The value is the sum of the memory requests of all app containers of a pod.
Kubernetes: PVC - available
This metric measures the number of available bytes in the volume. The most detailed level of aggregation is persistent volume claim.
Kubernetes: PVC - capacity
This metric measures the capacity in bytes of the volume. The most detailed level of aggregation is persistent volume claim.
Kubernetes: PVC - used
This metric measures the number of used bytes in the volume. The most detailed level of aggregation is persistent volume claim.
Kubernetes: Resource quota - CPU limits
This metric measures the cpu limit quota. The most detailed level of aggregation is resource quota. The value corresponds to the cpu limits of a resource quota.
Kubernetes: Resource quota - CPU limits used
This metric measures the used cpu limit quota. The most detailed level of aggregation is resource quota. The value corresponds to the used cpu limits of a resource quota.
Kubernetes: Resource quota - memory limits
This metric measures the memory limit quota. The most detailed level of aggregation is resource quota. The value corresponds to the memory limits of a resource quota.
Kubernetes: Resource quota - memory limits used
This metric measures the used memory limits quota. The most detailed level of aggregation is resource quota. The value corresponds to the used memory limits of a resource quota.
Kubernetes: Resource quota - pod count
This metric measures the pods quota. The most detailed level of aggregation is resource quota. The value corresponds to the pods of a resource quota.
Kubernetes: Resource quota - pod used count
This metric measures the used pods quota. The most detailed level of aggregation is resource quota. The value corresponds to the used pods of a resource quota.
Kubernetes: Resource quota - CPU requests
This metric measures the cpu requests quota. The most detailed level of aggregation is resource quota. The value corresponds to the cpu requests of a resource quota.
Kubernetes: Resource quota - CPU requests used
This metric measures the used cpu requests quota. The most detailed level of aggregation is resource quota. The value corresponds to the used cpu requests of a resource quota.
Kubernetes: Resource quota - memory requests
This metric measures the memory requests quota. The most detailed level of aggregation is resource quota. The value corresponds to the memory requests of a resource quota.
Kubernetes: Resource quota - memory requests used
This metric measures the used memory requests quota. The most detailed level of aggregation is resource quota. The value corresponds to the used memory requests of a resource quota.
Kubernetes: Workload conditions
This metric describes the status of a Kubernetes workload. The most detailed level of aggregation is workload.
Kubernetes: Pod - desired container count
This metric measures the number of desired containers. The most detailed level of aggregation is workload. The value is the count of all containers in the pod's specification.
Kubernetes: Container - CPU throttled (by workload)
This metric measures the total CPU throttling by container. The most detailed level of aggregation is workload.
Kubernetes: Container - CPU usage (by workload)
This metric measures the total CPU consumed (user usage + system usage) by container. The most detailed level of aggregation is workload.
Kubernetes: Pod - CPU limits (by workload)
This metric measures the cpu limits. The most detailed level of aggregation is workload. The value is the sum of the cpu limits of all app containers of a pod.
Kubernetes: Pod - memory limits (by workload)
This metric measures the memory limits. The most detailed level of aggregation is workload. The value is the sum of the memory limits of all app containers of a pod.
[Deprecated] Kubernetes: Container - Memory RSS (by workload)
This metric measures the true resident set size (RSS) by container. RSS is the amount of physical memory used by the container's cgroup - either total_rss + total_mapped_file (cgroup v1) or anon + file_mapped (cgroup v2). The most detailed level of aggregation is workload. Deprecated - use builtin:kubernetes.workload.memory_working_set instead.
Kubernetes: Container - Working set memory (by workload)
This metric measures the current working set memory (memory that cannot be reclaimed under pressure) by container. The OOM Killer is invoked if the working set exceeds the limit. The most detailed level of aggregation is workload.
Kubernetes: Workload - desired pod count
This metric measures the number of desired pods. The most detailed level of aggregation is workload. The value corresponds to the 'replicas' defined in a deployment resource and, for example, to the 'desiredNumberScheduled' in a daemon set resource's status.
Kubernetes: Pod - CPU requests (by workload)
This metric measures the cpu requests. The most detailed level of aggregation is workload. The value is the sum of the cpu requests of all app containers of a pod.
Kubernetes: Pod - memory requests (by workload)
This metric measures the memory requests. The most detailed level of aggregation is workload. The value is the sum of the memory requests of all app containers of a pod.
Kubernetes: Container count
This metric measures the number of containers. The most detailed level of aggregation is workload. The metric counts the number of all containers.
Kubernetes: Event count
This metric counts Kubernetes events. The most detailed level of aggregation is the event reason. The value corresponds to the count of events returned by the Kubernetes events endpoint. This metric depends on Kubernetes event monitoring. It will not show any datapoints for the period in which event monitoring is deactivated.
Kubernetes: Node count
This metric measures the number of nodes. The most detailed level of aggregation is cluster. The value is the count of all nodes.
Kubernetes: Pod count (by workload)
This metric measures the number of pods. The most detailed level of aggregation is workload. The value corresponds to the count of all pods.
Kubernetes: Workload count
This metric measures the number of workloads. The most detailed level of aggregation is namespace. The value corresponds to the count of all workloads.
OS Service availability
This metric provides the detailed OS-specific state of the OS service. It is sent once per minute with 10-second granularity: six samples are aggregated every minute. The status dimension (dt.osservice.status) and a numeric metric value represent the state of the OS service. Values of the status dimension are OS-dependent (for example, "active" for Linux and "running" for Windows). The metric value represents the number of times the OS service was in a specific state during each minute; if the service had the same status in every 10-second sample, the metric value is 6. A value is sent only for statuses that the service was in during at least one 10-second sample. For example, on Linux, if the service ran for a whole minute, the metric has a value of 6 with "active" as the status dimension value; if it was inactive for the whole minute, the value is also 6, but the status dimension changes to "inactive". Other available dimensions include startup type (dt.osservice.startup_type), alerting status for the OS service (dt.osservice.alerting), display name (dt.osservice.display_name), manufacturer (dt.osservice.manufacturer), name (dt.osservice.name), executable path (dt.osservice.path), and hostname (host.name). There are also host (dt.entity.host) and OS Service (dt.entity.os:service) entities. Windows and Linux operating systems are supported. To learn more, see OS Services monitoring.
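The per-minute aggregation described above (six 10-second status samples, one value per observed status) can be sketched as follows; the sample data is illustrative:

```python
from collections import Counter

# Six hypothetical 10-second status samples for one service in one minute.
samples = ["active", "active", "active", "inactive", "inactive", "inactive"]

# One datapoint per observed status; the value is how many of the six
# samples had that status (a steady service yields a single value of 6).
datapoints = Counter(samples)

print(dict(datapoints))  # {'active': 3, 'inactive': 3}
```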
Process availability
Process availability state metric, reported in 1-minute intervals.
Process availability %
This metric provides the percentage of time a process is available. It is sent once per minute with 10-second granularity: six samples are aggregated every minute. If the process is available for a whole minute, the value is 100%; a value of 0% indicates that the process is not running. It has a "Process" dimension (dt.entity.process_group_instance).
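Because six 10-second samples make up one minute, each sample in which the process is up contributes one sixth of the availability percentage. A minimal sketch of this calculation, assuming a list of boolean up/down samples:

```python
def availability_percent(samples):
    """Percentage of 10-second samples in which the process was available.
    With six samples per minute, each 'up' sample adds ~16.7%."""
    return 100.0 * sum(samples) / len(samples)

# Up the whole minute:
print(availability_percent([True] * 6))                               # 100.0
# Up for the first 30 seconds only:
print(availability_percent([True, True, True, False, False, False]))  # 50.0
```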
Process traffic in
This metric provides the size of the incoming traffic of a process. It helps identify processes generating high network traffic on a host. The result is expressed in kilobytes. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "Process owner" (process.owner), "Process executable name" (process.executable.name), "Process executable path" (process.executable.path), "Process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the time span it is collected for is restricted by feature limits. To learn more, see Process instance snapshots.
Process traffic out
This metric provides the size of the outgoing traffic of a process. It helps identify processes generating high network traffic on a host. The result is expressed in kilobytes. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "Process owner" (process.owner), "Process executable name" (process.executable.name), "Process executable path" (process.executable.path), "Process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the time span it is collected for is restricted by feature limits. To learn more, see Process instance snapshots.
Process average CPU
This metric provides the CPU usage of a process as a percentage. The metric value is the sum of the CPU time used by each process worker divided by the total available CPU time. A value of 100% indicates that the process uses all available CPU resources of the host. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "Process owner" (process.owner), "Process executable name" (process.executable.name), "Process executable path" (process.executable.path), "Process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the time span it is collected for is restricted by feature limits. To learn more, see Process instance snapshots.
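The "sum of worker CPU time divided by total available CPU time" calculation can be sketched as follows. This is an illustrative model only; it assumes the total available CPU time is the measurement interval multiplied by the host's core count:

```python
def process_cpu_percent(worker_cpu_seconds, interval_seconds, num_cores):
    """CPU usage of a process as a percentage of the host's total
    available CPU time: sum of per-worker CPU time divided by
    (interval length * number of cores)."""
    available = interval_seconds * num_cores
    return 100.0 * sum(worker_cpu_seconds) / available

# Two workers each burning 30 s of CPU over a 60 s interval on a 4-core host:
print(process_cpu_percent([30.0, 30.0], 60.0, 4))  # 25.0
```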
Process memory
This metric provides the memory usage of a process. It helps to identify processes with high memory resource consumption and memory leaks. The result is expressed in bytes. It has a "PID" (process.pid), "Parent PID" (process.parent_pid), "process owner" (process.owner), "process executable name" (process.executable.name), "process executable path" (process.executable.path), "process command line" (process.command_line) and "Process group instance" (dt.entity.process_group_instance) dimensions This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the time this metric is collected for is restricted to feature limits. To learn more, see Process instance snapshots.
Incoming messages
The number of incoming messages on the queue or topic
Outgoing messages
The number of outgoing messages from the queue or topic
New attacks
Number of attacks that were recently created. The metric supports the management zone selector.
New Muted Security Problems (global)
Number of vulnerabilities that were recently muted. The metric value is independent of any configured management zone (and thus global).
New Open Security Problems (global)
Number of vulnerabilities that were recently created. The metric value is independent of any configured management zone (and thus global).
New Open Security Problems (split by Management Zone)
Number of vulnerabilities that were recently created. The metric value is split by management zone.
Open Security Problems (global)
Number of currently open vulnerabilities seen within the last minute. The metric value is independent of any configured management zone (and thus global).
Open Security Problems (split by Management Zone)
Number of currently open vulnerabilities seen within the last minute. The metric value is split by management zone.
New Resolved Security Problems (global)
Number of vulnerabilities that were recently resolved. The metric value is independent of any configured management zone (and thus global).
Vulnerabilities - affected process groups count (global)
Total number of unique affected process groups across all open vulnerabilities per technology. The metric value is independent of any configured management zone (and thus global).
Vulnerabilities - affected not-muted process groups count (global)
Total number of unique affected process groups across all open, unmuted vulnerabilities per technology. The metric value is independent of any configured management zone (and thus global).
Vulnerabilities - affected entities count
Total number of unique affected entities across all open vulnerabilities. The metric supports the management zone selector.
CPU time
CPU time consumed by a key request within a particular request type. Request types classify requests, e.g. Resource requests for static assets like CSS or JS files. To learn how Dynatrace calculates service timings, see Service analysis timings.
Key request CPU time
CPU time consumed by a request type. Request types classify requests, e.g. Resource requests for static assets like CSS or JS files. To learn how Dynatrace calculates service timings, see Service analysis timings.
CPU time
CPU time consumed by a particular request. To learn how Dynatrace calculates service timings, see Service analysis timings.
Service CPU time
CPU time consumed by a particular service. To learn how Dynatrace calculates service timings, see Service analysis timings.
Failed connections
Unsuccessful connection attempts compared to all connection attempts. To learn about database analysis, see Analyze database services.
Connection failure rate
Rate of unsuccessful connection attempts compared to all connection attempts. To learn about database analysis, see Analyze database services.
Successful connections
Total number of database connections successfully established by this service. To learn about database analysis, see Analyze database services.
Connection success rate
Rate of successful connection attempts compared to all connection attempts. To learn about database analysis, see Analyze database services.
Total number of connections
Total number of database connections that were attempted to be established by this service. To learn about database analysis, see Analyze database services.
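The relationship between the five connection metrics above is simple arithmetic: the success and failure rates are the successful and failed attempts as percentages of all attempts. A minimal sketch (illustrative names, not a Dynatrace API):

```python
def connection_rates(successful, failed):
    """Connection success and failure rates as percentages of all
    connection attempts. Returns (success_rate, failure_rate)."""
    total = successful + failed
    if total == 0:
        # No attempts in the interval: report both rates as 0.
        return 0.0, 0.0
    return 100.0 * successful / total, 100.0 * failed / total

success_rate, failure_rate = connection_rates(successful=95, failed=5)
print(success_rate, failure_rate)  # 95.0 5.0
```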
Number of client side errors
Failed requests for a service measured on client side. To learn about failure detection, see Configure service failure detection.
Failure rate (client side errors)
Number of calls without client side errors
Number of HTTP 5xx errors
HTTP requests with a status code between 500 and 599 for a given key request measured on server side. To learn about failure detection, see Configure service failure detection.
Failure rate (HTTP 5xx errors)
Number of calls without HTTP 5xx errors
Number of HTTP 4xx errors
HTTP requests with a status code between 400 and 499 for a given key request measured on server side. To learn about failure detection, see Configure service failure detection.
Failure rate (HTTP 4xx errors)
Number of calls without HTTP 4xx errors
Number of client side errors
Failed requests for a given request type like dynamic web requests or static web requests measured on client side. To learn about failure detection, see Configure service failure detection.
Failure rate (client side errors)
Number of calls without client side errors
Number of server side errors
Failed requests for a given request type like dynamic web requests or static web requests measured on server side. To learn about failure detection, see Configure service failure detection.
Failure rate (server side errors)
Number of calls without server side errors
Number of any errors
Failed requests for a given request type like dynamic web requests or static web requests. To learn about failure detection, see Configure service failure detection.
Failure rate (any errors)
Number of calls without any errors
Number of server side errors
Failed requests for a service measured on server side. To learn about failure detection, see Configure service failure detection.
Failure rate (server side errors)
Number of calls without server side errors
Number of any errors
Failed requests for a service measured on server side or client side. To learn about failure detection, see Configure service failure detection.
Failure rate (any errors)
Number of calls without any errors
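Each "Failure rate" metric above pairs an error count with the corresponding call count; the rate is the failed requests as a percentage of all requests. A minimal sketch of that relationship (illustrative only):

```python
def failure_rate(error_count, total_requests):
    """Failure rate: failed requests as a percentage of all requests
    in the interval."""
    if total_requests == 0:
        # No requests observed: no failures to report.
        return 0.0
    return 100.0 * error_count / total_requests

# 12 failed requests out of 400:
print(failure_rate(12, 400))  # 3.0
```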
Request count - client
Number of requests for a given key request - measured on the client side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Request count - server
Number of requests for a given key request - measured on the server side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Request count
Number of requests for a given key request. This metric is written for each key request. To learn more about key requests, see Monitor key request.
CPU per request
CPU time for a given key request. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Service key request CPU time
CPU time for a given key request. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Number of client side errors
Failed requests for a given key request measured on client side. To learn about failure detection, see Configure service failure detection.
Failure rate (client side errors)
Number of calls without client side errors
Number of HTTP 5xx errors
HTTP requests with a status code between 500 and 599 for a given key request. To learn about failure detection, see Configure service failure detection.
Failure rate (HTTP 5xx errors)
Number of calls without HTTP 5xx errors
Number of HTTP 4xx errors
HTTP requests with a status code between 400 and 499 for a given key request. To learn about failure detection, see Configure service failure detection.
Failure rate (HTTP 4xx errors)
Number of calls without HTTP 4xx errors
Number of server side errors
Failed requests for a given key request measured on server side. To learn about failure detection, see Configure service failure detection.
Failure rate (server side errors)
Number of calls without server side errors
Client side response time
Response time for a given key request - measured on the client side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Server side response time
Response time for a given key request - measured on the server side. This metric is written for each request. To learn more about key requests, see Monitor key request.
Key request response time
Response time for a given key request. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Success rate (server side)
Number of calls to databases
Time spent in database calls
IO time
Lock time
Number of calls to other services
Time spent in calls to other services
Total processing time
Total processing time for a given key request. This time includes potential further asynchronous processing. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Wait time
Unified service mesh request count
Number of service mesh requests received by a given service. To learn how Dynatrace detects services, see Service detection and naming.
Unified service mesh request count (by service)
Number of service mesh requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects services, see Service detection and naming.
Unified service mesh request failure count
Number of failed service mesh requests received by a given service. To learn how Dynatrace detects service failures, see Configure service failure detection.
Unified service mesh request failure count (by service)
Number of failed service mesh requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects service failures, see Configure service failure detection.
Unified service mesh request response time
Response time of a service mesh ingress measured in microseconds. To learn how Dynatrace calculates service timings, see Service analysis timings.
Unified service mesh request response time (by service)
Response time of a service mesh ingress measured in microseconds. Reduced dimensions for faster charting. To learn how Dynatrace calculates service timings, see Service analysis timings.
Unified service request count
Number of requests received by a given service. To learn how Dynatrace detects and analyzes services, see Services.
Unified service request count (by service, endpoint)
Number of requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects and analyzes services, see Services.
Unified service request count (by service)
Number of requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects and analyzes services, see Services.
Unified service failure count
Number of failed requests received by a given service. To learn how Dynatrace detects and analyzes services, see Services.
Unified service failure count (by service, endpoint)
Number of failed requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects and analyzes services, see Services.
Unified service failure count (by service)
Number of failed requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects and analyzes services, see Services.
Unified service request response time
Response time of a service measured in microseconds on the server side (server-side measurements do not include, for example, proxy and networking times). Response time is the time until a response is sent to a calling application, process, or other service. It does not include further asynchronous processing. To learn how Dynatrace calculates service timings, see Service analysis timings.
Unified service request response time (by service, endpoint)
Response time of a service measured in microseconds on the server side. Response time is the time until a response is sent to a calling application, process or other service. It does not include further asynchronous processing. Reduced dimensions for faster charting. To learn how Dynatrace calculates service timings, see Service analysis timings.
Unified service request response time (by service)
Response time of a service measured in microseconds on the server side. Response time is the time until a response is sent to a calling application, process or other service. It does not include further asynchronous processing. Reduced dimensions for faster charting. To learn how Dynatrace calculates service timings, see Service analysis timings.
Request count - client
Number of requests received by a given service - measured on the client side. This metric allows service splitting. To learn how Dynatrace detects and analyzes services, see Services.
Request count - server
Number of requests received by a given service - measured on the server side. This metric allows service splitting. To learn how Dynatrace detects and analyzes services, see Services.
Request count
Number of requests received by a given service. This metric allows service splitting. To learn how Dynatrace detects and analyzes services, see Services.
Client side response time
Response time for a given key request per request type - measured on the client side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Server side response time
Response time for a given key request per request type - measured on the server side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Client side response time
Server side response time
Response time
Time consumed by a particular service until a response is sent back to the calling application, process, or service. To learn how Dynatrace calculates service timings, see Service analysis timings.
Success rate (server side)
Total processing time
Total time consumed by a particular request type, including asynchronous processing that can still occur after responses are sent. To learn how Dynatrace calculates service timings, see Service analysis timings.
Total processing time
Total time consumed by a particular service, including asynchronous processing that can still occur after responses are sent. To learn how Dynatrace calculates service timings, see Service analysis timings.
Number of calls to databases
Time spent in database calls
IO time
Lock time
Number of calls to other services
Time spent in calls to other services
Wait time
Action duration - custom action [browser monitor]
The duration of custom actions; split by monitor.
Action duration - custom action (by geolocation) [browser monitor]
The duration of custom actions; split by monitor, geolocation.
Action duration - load action [browser monitor]
The duration of load actions; split by monitor.
Action duration - load action (by geolocation) [browser monitor]
The duration of load actions; split by monitor, geolocation.
Action duration - XHR action [browser monitor]
The duration of XHR actions; split by monitor.
Action duration - XHR action (by geolocation) [browser monitor]
The duration of XHR actions; split by monitor, geolocation.
Availability rate (by location) [browser monitor]
The availability rate of browser monitors.
Availability rate - excl. maintenance windows (by location) [browser monitor]
The availability rate of browser monitors excluding maintenance windows.
Cumulative layout shift - load action [browser monitor]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions; split by monitor.
Cumulative layout shift - load action (by geolocation) [browser monitor]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions; split by monitor, geolocation.
DOM interactive - load action [browser monitor]
The time taken until a page's status is set to "interactive" and it's ready to receive input. Calculated for load actions; split by monitor.
DOM interactive - load action (by geolocation) [browser monitor]
The time taken until a page's status is set to "interactive" and it's ready to receive input. Calculated for load actions; split by monitor, geolocation.
Error details (by error code) [browser monitor]
The number of detected errors; split by monitor, error code.
Error details (by geolocation, error code) [browser monitor]
The number of detected errors; split by monitor, geolocation, error code.
Action duration - custom action (by event) [browser monitor]
The duration of custom actions; split by event.
Action duration - custom action (by event, geolocation) [browser monitor]
The duration of custom actions; split by event, geolocation.
Action duration - load action (by event) [browser monitor]
The duration of load actions; split by event.
Action duration - load action (by event, geolocation) [browser monitor]
The duration of load actions; split by event, geolocation.
Action duration - XHR action (by event) [browser monitor]
The duration of XHR actions; split by event.
Action duration - XHR action (by event, geolocation) [browser monitor]
The duration of XHR actions; split by event, geolocation.
Cumulative layout shift - load action (by event) [browser monitor]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions; split by event.
Cumulative layout shift - load action (by event, geolocation) [browser monitor]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions; split by event, geolocation.
DOM interactive - load action (by event) [browser monitor]
The time taken until a page's status is set to "interactive" and it's ready to receive input. Calculated for load actions; split by event.
DOM interactive - load action (by event, geolocation) [browser monitor]
The time taken until a page's status is set to "interactive" and it's ready to receive input. Calculated for load actions; split by event, geolocation.
Error details (by event, error code) [browser monitor]
The number of detected errors; split by event, error code.
Error details (by event, geolocation, error code) [browser monitor]
The number of detected errors; split by event, geolocation, error code.
Failed events count (by event) [browser monitor]
The number of failed monitor events; split by event.
Failed events count (by event, geolocation) [browser monitor]
The number of failed monitor events; split by event, geolocation.
Time to first byte - load action (by event) [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for load actions; split by event.
Time to first byte - load action (by event, geolocation) [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for load actions; split by event, geolocation.
Time to first byte - XHR action (by event) [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for XHR actions; split by event.
Time to first byte - XHR action (by event, geolocation) [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for XHR actions; split by event, geolocation.
Largest contentful paint - load action (by event) [browser monitor]
The time taken to render the largest element in the viewport. Calculated for load actions; split by event.
Largest contentful paint - load action (by event, geolocation) [browser monitor]
The time taken to render the largest element in the viewport. Calculated for load actions; split by event, geolocation.
Load event end - load action (by event) [browser monitor]
The time taken to complete the load event of a page. Calculated for load actions; split by event.
Load event end - load action (by event, geolocation) [browser monitor]
The time taken to complete the load event of a page. Calculated for load actions; split by event, geolocation.
Load event start - load action (by event) [browser monitor]
The time taken to begin the load event of a page. Calculated for load actions; split by event.
Load event start - load action (by event, geolocation) [browser monitor]
The time taken to begin the load event of a page. Calculated for load actions; split by event, geolocation.
Network contribution - load action (by event) [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for load actions; split by event.
Network contribution - load action (by event, geolocation) [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for load actions; split by event, geolocation.
Network contribution - XHR action (by event) [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for XHR actions; split by event.
Network contribution - XHR action (by event, geolocation) [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for XHR actions; split by event, geolocation.
Response end - load action (by event) [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for load actions; split by event.
Response end - load action (by event, geolocation) [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for load actions; split by event, geolocation.
Response end - XHR action (by event) [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for XHR actions; split by event.
Response end - XHR action (by event, geolocation) [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for XHR actions; split by event, geolocation.
Server contribution - load action (by event) [browser monitor]
The time spent on server-side processing for a page. Calculated for load actions; split by event.
Server contribution - load action (by event, geolocation) [browser monitor]
The time spent on server-side processing for a page. Calculated for load actions; split by event, geolocation.
Server contribution - XHR action (by event) [browser monitor]
The time spent on server-side processing for a page. Calculated for XHR actions; split by event.
Server contribution - XHR action (by event, geolocation) [browser monitor]
The time spent on server-side processing for a page. Calculated for XHR actions; split by event, geolocation.
Speed index - load action (by event) [browser monitor]
The score measuring how quickly the visible parts of a page are rendered. Calculated for load actions; split by event.
Speed index - load action (by event, geolocation) [browser monitor]
The score measuring how quickly the visible parts of a page are rendered. Calculated for load actions; split by event, geolocation.
Successful events count (by event) [browser monitor]
The number of successful monitor events; split by event.
Successful events count (by event, geolocation) [browser monitor]
The number of successful monitor events; split by event, geolocation.
Total events count (by event) [browser monitor]
The total number of monitor event executions; split by event.
Total events count (by event, geolocation) [browser monitor]
The total number of monitor event executions; split by event, geolocation.
Total duration (by event) [browser monitor]
The duration of all actions in an event; split by event.
Total duration (by event, geolocation) [browser monitor]
The duration of all actions in an event; split by event, geolocation.
Visually complete - load action (by event) [browser monitor]
The time taken to fully render content in the viewport. Calculated for load actions; split by event.
Visually complete - load action (by event, geolocation) [browser monitor]
The time taken to fully render content in the viewport. Calculated for load actions; split by event, geolocation.
Visually complete - XHR action (by event) [browser monitor]
The time taken to fully render content in the viewport. Calculated for XHR actions; split by event.
Visually complete - XHR action (by event, geolocation) [browser monitor]
The time taken to fully render content in the viewport. Calculated for XHR actions; split by event, geolocation.
Failed executions count [browser monitor]
The number of failed monitor executions; split by monitor.
Failed executions count (by geolocation) [browser monitor]
The number of failed monitor executions; split by monitor, geolocation.
Time to first byte - load action [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for load actions; split by monitor.
Time to first byte - load action (by geolocation) [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for load actions; split by monitor, geolocation.
Time to first byte - XHR action [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for XHR actions; split by monitor.
Time to first byte - XHR action (by geolocation) [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for XHR actions; split by monitor, geolocation.
Largest contentful paint - load action [browser monitor]
The time taken to render the largest element in the viewport. Calculated for load actions; split by monitor.
Largest contentful paint - load action (by geolocation) [browser monitor]
The time taken to render the largest element in the viewport. Calculated for load actions; split by monitor, geolocation.
Load event end - load action [browser monitor]
The time taken to complete the load event of a page. Calculated for load actions; split by monitor.
Load event end - load action (by geolocation) [browser monitor]
The time taken to complete the load event of a page. Calculated for load actions; split by monitor, geolocation.
Load event start - load action [browser monitor]
The time taken to begin the load event of a page. Calculated for load actions; split by monitor.
Load event start - load action (by geolocation) [browser monitor]
The time taken to begin the load event of a page. Calculated for load actions; split by monitor, geolocation.
Network contribution - load action [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for load actions; split by monitor.
Network contribution - load action (by geolocation) [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for load actions; split by monitor, geolocation.
Network contribution - XHR action [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for XHR actions; split by monitor.
Network contribution - XHR action (by geolocation) [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for XHR actions; split by monitor, geolocation.
Response end - load action [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for load actions; split by monitor.
Response end - load action (by geolocation) [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for load actions; split by monitor, geolocation.
Response end - XHR action [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for XHR actions; split by monitor.
Response end - XHR action (by geolocation) [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for XHR actions; split by monitor, geolocation.
Server contribution - load action [browser monitor]
The time spent on server-side processing for a page. Calculated for load actions; split by monitor.
Server contribution - load action (by geolocation) [browser monitor]
The time spent on server-side processing for a page. Calculated for load actions; split by monitor, geolocation.
Server contribution - XHR action [browser monitor]
The time spent on server-side processing for a page. Calculated for XHR actions; split by monitor.
Server contribution - XHR action (by geolocation) [browser monitor]
The time spent on server-side processing for a page. Calculated for XHR actions; split by monitor, geolocation.
Speed index - load action [browser monitor]
The score measuring how quickly the visible parts of a page are rendered. Calculated for load actions; split by monitor.
Speed index - load action (by geolocation) [browser monitor]
The score measuring how quickly the visible parts of a page are rendered. Calculated for load actions; split by monitor, geolocation.
Successful executions count [browser monitor]
The number of successful monitor executions; split by monitor.
Successful executions count (by geolocation) [browser monitor]
The number of successful monitor executions; split by monitor, geolocation.
Total executions count [browser monitor]
The total number of monitor executions; split by monitor.
Total executions count (by geolocation) [browser monitor]
The total number of monitor executions; split by monitor, geolocation.
Total duration [browser monitor]
The duration of all actions in an event; split by monitor.
Total duration (by geolocation) [browser monitor]
The duration of all actions in an event; split by monitor, geolocation.
Visually complete - load action [browser monitor]
The time taken to fully render content in the viewport. Calculated for load actions; split by monitor.
Visually complete - load action (by geolocation) [browser monitor]
The time taken to fully render content in the viewport. Calculated for load actions; split by monitor, geolocation.
Visually complete - XHR action [browser monitor]
The time taken to fully render content in the viewport. Calculated for XHR actions; split by monitor.
Visually complete - XHR action (by geolocation) [browser monitor]
The time taken to fully render content in the viewport. Calculated for XHR actions; split by monitor, geolocation.
Availability rate (by location) [HTTP monitor]
The availability rate of HTTP monitors.
Availability rate - excl. maintenance windows (by location) [HTTP monitor]
The availability rate of HTTP monitors excluding maintenance windows.
DNS lookup time (by location) [HTTP monitor]
The time taken to resolve the hostname for a target URL for the sum of all requests.
Duration (by location) [HTTP monitor]
The duration of the sum of all requests.
Execution count (by status) [HTTP monitor]
The number of monitor executions.
DNS lookup time (by request, location) [HTTP monitor]
The time taken to resolve the hostname for a target URL for individual HTTP requests.
Duration (by request, location) [HTTP monitor]
The duration of individual HTTP requests.
Response size (by request, location) [HTTP monitor]
The response size of individual HTTP requests.
TCP connect time (by request, location) [HTTP monitor]
The time taken to establish the TCP connection to the server (including SSL) for individual HTTP requests.
Time to first byte (by request, location) [HTTP monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for individual HTTP requests.
TLS handshake time (by request, location) [HTTP monitor]
The time taken to complete the TLS handshake for individual HTTP requests.
Duration threshold (request) (by request) [HTTP monitor]
The performance threshold for individual HTTP requests.
Result status count (by request, location) [HTTP monitor]
The number of request executions with success/failure result status.
Status code count (by request, location) [HTTP monitor]
The number of request executions that end with an HTTP status code.
Response size (by location) [HTTP monitor]
The response size of the sum of all requests.
TCP connect time (by location) [HTTP monitor]
The time taken to establish the TCP connection to the server (including SSL) for the sum of all requests.
Time to first byte (by location) [HTTP monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for the sum of all requests.
TLS handshake time (by location) [HTTP monitor]
The time taken to complete the TLS handshake for the sum of all requests.
Duration threshold [HTTP monitor]
The performance threshold for the sum of all requests.
Result status count (by location) [HTTP monitor]
The number of monitor executions with success/failure result status.
Status code count (by location) [HTTP monitor]
The number of monitor executions that end with an HTTP status code.
Node health status count [synthetic]
The number of private Synthetic nodes and their health status.
Private location health status count [synthetic]
The number of private Synthetic locations and their health status.
Monitor availability [Network Availability monitor]
Monitor availability excluding maintenance windows [Network Availability monitor]
DNS request resolution time [Network Availability request]
Number of successful ICMP packets [Network Availability request]
Number of ICMP packets [Network Availability request]
ICMP request execution time [Network Availability request]
ICMP round trip time [Network Availability request]
ICMP request success rate [Network Availability request]
Request availability [Network Availability request]
Request availability excluding maintenance windows [Network Availability request]
Request execution time [Network Availability request]
Execution count (by status) [Network Availability request]
Step availability [Network Availability step]
Step availability excluding maintenance windows [Network Availability step]
Step execution time [Network Availability step]
Execution count (by status) [Network Availability step]
Step success rate [Network Availability step]
TCP request connection time [Network Availability request]
Monitor execution time [Network Availability monitor]
Execution count (by status) [Network Availability monitor]
Monitor success rate [Network Availability monitor]
Availability rate (by location) [third-party monitor]
The availability rate of third-party monitors.
Availability rate - excl. maintenance windows (by location) [third-party monitor]
The availability rate of third-party monitors excluding maintenance windows.
Error count [third-party monitor]
The number of detected errors; split by monitor, step, error code.
Error count (by location) [third-party monitor]
The number of detected errors; split by monitor, location, step, error code.
Test quality rate [third-party monitor]
The test quality rate. Calculated by dividing successful steps by the total number of steps executed; split by monitor.
Test quality rate (by location) [third-party monitor]
The test quality rate. Calculated by dividing successful steps by the total number of steps executed; split by monitor, location.
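The calculation stated above (successful steps divided by the total number of steps executed) can be sketched directly; the helper name and sample values are illustrative only:

```go
package main

import "fmt"

// testQualityRate implements the calculation described above: successful
// steps divided by total steps executed, expressed as a percentage.
func testQualityRate(successfulSteps, totalSteps int) float64 {
	if totalSteps == 0 {
		return 0 // no executions, no rate
	}
	return 100 * float64(successfulSteps) / float64(totalSteps)
}

func main() {
	// 47 of 50 executed steps succeeded.
	fmt.Println(testQualityRate(47, 50)) // 94
}
```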
Response time [third-party monitor]
The response time of third-party monitors; split by monitor.
Response time (by location) [third-party monitor]
The response time of third-party monitors; split by monitor, location.
Response time (by step) [third-party monitor]
The response time of third-party monitors; split by step.
Response time (by step, location) [third-party monitor]
The response time of third-party monitors; split by step, location.
.NET garbage collection (# Gen 0)
The number of completed GC runs that collected objects in the Gen 0 heap within the given time range. See https://dt-url.net/i1038bq
.NET garbage collection (# Gen 1)
The number of completed GC runs that collected objects in the Gen 1 heap within the given time range. See https://dt-url.net/i1038bq
.NET garbage collection (# Gen 2)
The number of completed GC runs that collected objects in the Gen 2 heap within the given time range. See https://dt-url.net/i1038bq
.NET % time in GC
The percentage of time spent in garbage collection.
.NET % time in JIT
The percentage of time spent in just-in-time (JIT) compilation.
.NET average number of active threads
.NET memory consumption (Large Object Heap)
The .NET memory consumption for objects within the Large Object Heap. See https://dt-url.net/es238z7
.NET memory consumption (heap size Gen 0)
The .NET memory consumption for objects within the Gen 0 heap. See https://dt-url.net/i1038bq
.NET memory consumption (heap size Gen 1)
The .NET memory consumption for objects within the Gen 1 heap. See https://dt-url.net/i1038bq
.NET memory consumption (heap size Gen 2)
The .NET memory consumption for objects within the Gen 2 heap. See https://dt-url.net/i1038bq
Bytes in all heaps
Gen 0 Collections
Gen 1 Collections
Gen 2 Collections
Logical threads
Physical threads
Committed bytes
Reserved bytes
Time in GC
Contention rate
Queue length
Gen 0 Heap size
Gen 1 Heap size
Gen 2 Heap size
.NET managed thread pool active io completion threads
.NET managed thread pool queued work items
.NET managed thread pool active worker threads
Average enqueue time
Average enqueue time increase
Current connections count
Dequeue count
Dispatch count
Enqueue count
Expired count
Inflight count
Memory usage
Queue size
Store usage
Temp usage
Total connections count
Total consumer count
Total producer count
Blocks read
Blocks removed
Blocks replicated
Blocks number
Blocks verified
Blocks written
Bytes read
Bytes written
Cache capacity
Cache used
DataNode capacity
Remaining capacity
Total capacity
Used capacity
Capacity used non DFS
Corrupted blocks
DataNode cache capacity
DataNode cache used
DataNode Dfs Used
Estimated capacity total losses
Appended files
Created files
Deleted files
Renamed files
Files number
DataNode num blocks cached
DataNode num blocks failed to cache
DataNode num blocks failed to uncache
Dead DataNodes
Dead decommissioning DataNodes
Live decommissioning DataNodes
Number of decommissioning DataNodes
DataNode num failed volumes
Live DataNodes
Number of stale DataNodes
Number of missing blocks
Pending deletion blocks
Pending replication blocks
DataNode capacity remaining
Scheduled replication blocks
Total load
Under replicated blocks
Volume failures total
Allocated containers
Allocated memory
Allocated memory
Allocated CPU in virtual cores
Completed applications
Failed applications
Killed applications
Pending applications
Running applications
Submitted applications
Available memory
Available memory
Available CPU in virtual cores
Completed containers
Failed containers
Initializing containers
Killed containers
Launched containers
Running containers
Completed jobs
Failed jobs
Killed jobs
Preparing jobs
Running jobs
Completed maps
Failed maps
Killed maps
Running maps
Waiting maps
Allocated containers
Active NodeManagers
Decommissioned NodeManagers
Lost NodeManagers
Rebooted NodeManagers
Unhealthy NodeManagers
Pending memory requests
Pending CPU in virtual cores requests
Completed reduces
Failed reduces
Killed reduces
Running reduces
Waiting reduces
Reserved memory
Reserved CPU in virtual cores requests
Shuffle connections
Shuffle output
Failed shuffle output
Fine shuffle output
Solr document cache evictions
Solr document cache hits
Solr document cache inserts
Solr document cache lookups
Solr field cache evictions
Solr field cache hits
Solr field cache inserts
Solr field cache lookups
Solr filter cache evictions
Solr filter cache hits
Solr filter cache inserts
Solr filter cache lookups
Solr query result cache evictions
Solr query result cache hits
Solr query result cache inserts
Solr query result cache lookups
Solr searcher deleted documents
Solr searcher max documents
Solr searcher current documents
Solr number of queries
Solr document cache evictions
Solr document cache hits
Solr document cache inserts
Solr document cache lookups
Solr field cache evictions
Solr field cache hits
Solr field cache inserts
Solr field cache lookups
Solr filter cache evictions
Solr filter cache hits
Solr filter cache inserts
Solr filter cache lookups
Solr query result cache evictions
Solr query result cache hits
Solr query result cache inserts
Solr query result cache lookups
Solr searcher deleted documents
Solr searcher max documents
Solr searcher current documents
Solr number of queries
Solr number of additions
Solr number of deletes by ID
Solr number of deletes by query
Solr number of update errors
Solr document cache evictions (by core)
Solr document cache hits (by core)
Solr document cache inserts (by core)
Solr document cache lookups (by core)
Solr field cache evictions (by core)
Solr field cache hits (by core)
Solr field cache inserts (by core)
Solr field cache lookups (by core)
Solr filter cache evictions (by core)
Solr filter cache hits (by core)
Solr filter cache inserts (by core)
Solr filter cache lookups (by core)
Solr query result cache evictions (by core)
Solr query result cache hits (by core)
Solr query result cache inserts (by core)
Solr query result cache lookups (by core)
Solr searcher deleted documents (by core)
Solr searcher max documents (by core)
Solr searcher current documents (by core)
Solr number of queries (by core)
Solr number of additions (by core)
Solr number of deletes by ID (by core)
Solr number of deletes by query (by core)
Solr number of update errors (by core)
Solr number of additions
Solr number of deletes by ID
Solr number of deletes by query
Solr number of update errors
Max active
Max active (global)
Max total
Max total (global)
Num active
Num active (global)
Num idle
Num idle (global)
Num waiters
Num waiters (global)
Wait count
Wait count (global)
Tomcat received bytes / sec
Tomcat sent bytes / sec
Tomcat busy threads
Tomcat idle threads
Tomcat request count / sec
Compaction rate
Compaction completed tasks
Compaction pending tasks
KeyCache hit rate
RangeSlice failure rate
RangeSlice latency 95th percentile
RangeSlice rate
RangeSlice timeout rate
RangeSlice unavailable rate
Read failure rate
Read latency 95th percentile
Read rate
Read timeout rate
Read unavailable rate
RowCache hit rate
Exception count
Storage load
Storage total hints
Mutation pending tasks
Read pending tasks
ReadRepair pending tasks
Write failure rate
Write latency 95th percentile
Write rate
Write timeout rate
Write unavailable rate
Live SSTable count
Cache hits
Cache misses
Cache size
DNS request count
DNS request duration
DNS request size
DNS request type count
DNS response rcode count
DNS response FORMERR rcode count
DNS response NOERROR rcode count
DNS response NOTAUTH rcode count
DNS response NOTIMP rcode count
DNS response NOTZONE rcode count
DNS response NXDOMAIN rcode count
DNS response REFUSED rcode count
DNS response SERVFAIL rcode count
DNS response XRRSET rcode count
DNS response YXDOMAIN rcode count
DNS response size
Forward max concurrent reject count
Forward request count
Forward request duration
Forward response rcode count
GC duration
Number of goroutines
Health request duration
Number of cumulative bytes allocated for heap objects
Size of memory in profiling bucket hash tables
Cumulative count of heap objects freed
Size of memory in garbage collection metadata
Number of bytes allocated and not yet freed
Number of bytes in idle (unused) spans
Number of bytes in in-use spans
Number of allocated heap objects
Number of bytes of physical memory returned to the OS
Number of bytes of heap memory obtained from the OS
Number of pointer lookups performed by the runtime
Cumulative count of heap objects allocated
Number of bytes of allocated mcache structures
Size of memory obtained from the OS for mcache structures
Number of bytes of allocated mspan structures
Size of memory obtained from the OS for mspan structures
Target heap size of the next GC cycle
Size of memory in miscellaneous off-heap runtime allocations
Number of bytes in stack spans
Number of bytes of stack memory obtained from the OS
Total size of memory obtained from the OS
Panic count
Number of threads
Copy
Delete
Get
Head
Post
Put
auth cache hits
auth cache misses
bulk requests
database reads
database writes
http_2xx
http_3xx
http_4xx
http_5xx
open databases
open os files
request time
request time max
requests
temporary view reads
view reads
cluster basicStats diskFetches
cluster count membase
cluster count memcached
cluster samples cmd_get
cluster samples cmd_set
cluster samples curr_items
cluster samples ep_cache_miss_rate
cluster samples ep_num_value_ejects
cluster samples ep_oom_errors
cluster samples ep_tmp_oom_errors
cluster samples ops
cluster samples swap_used
cluster status healthy
cluster status unhealthy
cluster status warmup
cluster storageTotals hdd free
cluster storageTotals hdd quotaTotal
cluster storageTotals hdd total
cluster storageTotals hdd used
cluster storageTotals hdd usedByData
cluster storageTotals ram percentageUsage
cluster storageTotals ram quotaTotal
cluster storageTotals ram quotaTotalPerNode
cluster storageTotals ram quotaUsed
cluster storageTotals ram quotaUsedPerNode
cluster storageTotals ram total
cluster storageTotals ram used
cluster storageTotals ram usedByData
liveview basicStats diskFetches
liveview basicStats diskUsed
liveview basicStats memUsed
liveview samples cmd_get
liveview samples cmd_set
liveview samples couch_docs_data_size
liveview samples couch_total_disk_size
liveview samples disk_write_queue
liveview samples ep_cache_miss_rate
liveview samples ep_mem_high_wat
liveview samples ep_num_value_ejects
liveview samples ops
node interestingstats curr_items
node interestingstats ep_bg_fetched
node interestingstats mem_used
node interestingstats ops
node status healthy
node status unhealthy
node status warmup
node systemstats swap_used
Custom Device Count
Documents count
Deleted documents
Field data evictions
Field data size
Query cache count
Query cache size
Query cache evictions
Segment count
Replica shards
Indices count
Breakers field data estimated size
Breakers field data limit size
Breakers field data overhead
Breakers field data tripped
Breakers parent data estimated size
Breakers parent data limit size
Breakers parent data overhead
Breakers parent data tripped
Breakers request estimated size
Breakers request limit size
Breakers request overhead
Breakers request tripped
Indices flush total
Indices flush time
Indexing delete
Indexing failed
Indexing time
Indexing total
Indexing noop update total
Indexing throttle time
Merge total
Merge auto throttle size
Merge total documents
Merge total size
Merge stopped time
Merge throttled time
Merge total time
Indices recovery current as source
Indices recovery current as target
Indices recovery throttle time
Indices refresh total
Indices refresh time
Indices request cache evictions
Indices request cache hit count
Indices request cache size
Indices request cache miss count
Fetch time
Number of fetches
Total search time
Query time
Number of queries
Scroll time
Number of scrolls
Store size
Store throttle time
Indices suggest time
Indices suggest total
Indices translog operations
Indices translog size
Indices warmer total
Indices warmer time
Thread pools analyze completed
Thread pools analyze queue
Thread pools analyze rejected
Thread pools analyze threads
Thread pools bulk completed
Thread pools bulk queue
Thread pools bulk rejected
Thread pools bulk threads
Thread pools ccr completed
Thread pools ccr queue
Thread pools ccr rejected
Thread pools ccr threads
Thread pools flush completed
Thread pools flush queue
Thread pools flush rejected
Thread pools flush threads
Thread pools force merge completed
Thread pools force merge queue
Thread pools force merge rejected
Thread pools force merge threads
Thread pools generic completed
Thread pools generic queue
Thread pools generic rejected
Thread pools generic threads
Thread pools get completed
Thread pools get queue
Thread pools get rejected
Thread pools get threads
Thread pools index completed
Thread pools index queue
Thread pools index rejected
Thread pools index threads
Thread pools listener completed
Thread pools listener queue
Thread pools listener rejected
Thread pools listener threads
Thread pools percolate completed
Thread pools percolate queue
Thread pools percolate rejected
Thread pools percolate threads
Thread pools refresh completed
Thread pools refresh queue
Thread pools refresh rejected
Thread pools refresh threads
Thread pools search completed
Thread pools search queue
Thread pools search rejected
Thread pools search threads
Thread pools snapshot completed
Thread pools snapshot queue
Thread pools snapshot rejected
Thread pools snapshot threads
Thread pools write completed
Thread pools write queue
Thread pools write rejected
Thread pools write threads
Active primary shards
Active shards
Delayed unassigned shards
Initializing shards
Number of data nodes
Number of nodes
Relocating shards
Status green
Status red
Status unknown
Status yellow
Unassigned shards
Indexing time
Indexing total
Merge total
Merge auto throttle size
Merge total documents
Merge total size
Merge stopped time
Merge throttled time
Merge total time
Number of fetches
Total search time
Number of queries
Number of scrolls
OS memory usage
OS CPU usage
Process CPU usage
Elasticsearch virtual bytes
Process group total CPU time during GC suspensions
This metric provides statistics about CPU usage for process groups of garbage-collected technologies. The metric value is the sum of CPU time used during garbage collector suspensions for every process (including its workers) in a process group. It has a "Process Group" dimension.
Process group total CPU time
This metric provides the total CPU time used by a process group. The metric value is the sum of CPU time every process (including its workers) of the process group uses. The result is expressed in microseconds. It can help to identify the most CPU-intensive technologies in the monitored environment. It has a "Process Group" dimension.
Process total CPU time during GC suspensions
This metric provides statistics about CPU usage for garbage-collected processes. The metric value is the sum of CPU time used during garbage collector suspensions for all process workers. It has a "Process" dimension (dt.entity.process_group_instance).
Process total CPU time
This metric provides the CPU time used by a process. The metric value is the sum of CPU time every process worker uses. The result is expressed in microseconds. It has a "Process" dimension (dt.entity.process_group_instance).
Process CPU usage
This metric provides the percentage of the CPU usage of a process. The metric value is the sum of CPU time every process worker uses divided by the total available CPU time. The result is expressed in percentage. A value of 100% indicates that the process uses all available CPU resources of the host. It has a "Process" dimension (dt.entity.process_group_instance).
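The CPU-usage calculation described above (summed worker CPU time divided by the total available CPU time) can be sketched as follows; the helper and its inputs are illustrative, since the agent derives the real values from OS counters:

```go
package main

import "fmt"

// processCPUUsagePercent sums the CPU time used by every worker of a
// process and divides it by the total CPU time available on the host
// over the same interval, expressed as a percentage.
func processCPUUsagePercent(workerCPUMicros []uint64, availableCPUMicros uint64) float64 {
	var used uint64
	for _, t := range workerCPUMicros {
		used += t
	}
	if availableCPUMicros == 0 {
		return 0
	}
	return 100 * float64(used) / float64(availableCPUMicros)
}

func main() {
	// Two workers used 300 ms and 200 ms of CPU in a 1 s window on a 1-core host.
	fmt.Println(processCPUUsagePercent([]uint64{300000, 200000}, 1000000)) // 50
}
```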
z/OS General CPU time
The time spent on the general-purpose central processor (GCP) since process start, per minute
z/OS General CPU usage
The percent of the general-purpose central processor (GCP) used
Process file descriptors max
This metric provides statistics about the file descriptor resource limits. It is supported on Linux. The metric value is the total limit of file descriptors that all process workers can open. It is sent once per minute with a 10-second granularity - six samples are aggregated every minute. It has a "Process" dimension (dt.entity.process_group_instance).
Process file descriptors used per PID
This metric provides the file descriptor usage statistics. It is supported on Linux. The metric value is the highest percentage of the currently used file descriptor limit among process workers. It is sent once per minute with a 10-second granularity - six samples are aggregated every minute. It offers two dimensions: "Process" (dt.entity.process_group_instance) and pid dimension corresponding to the PID with the highest percentage of available descriptors usage.
Process file descriptors used
This metric provides statistics about file descriptor usage. It is supported on Linux. The metric value is the total number of file descriptors all process workers have opened. You can use it to detect processes that may cause the system to reach the limit of open file descriptors. It has a "Process" dimension (dt.entity.process_group_instance).
Process I/O read bytes
This metric provides statistics about the I/O read operations of a process. The metric value is a sum of I/O bytes read from the storage layer by all process workers per second. High values help to identify bottlenecks reducing process performance caused by the slow read speed of the storage device. It has a "Process" dimension (dt.entity.process_group_instance).
Process I/O bytes total
This metric provides statistics about I/O operations for a process. The metric value is a sum of I/O bytes read and written by all process workers per second. It has a "Process" dimension (dt.entity.process_group_instance).
Process I/O write bytes
This metric provides statistics about the I/O write operations of a process. The metric value is a sum of I/O bytes written to the storage layer by all process workers per second. High values help to identify bottlenecks reducing process performance caused by the slow write speed of the storage device. It has a "Process" dimension (dt.entity.process_group_instance).
Process I/O requested read bytes
This metric provides the statistics about the I/O read operations a process requests. It is supported only on Linux and AIX. The metric value is a sum of I/O bytes requested to be read from the storage by worker processes per second. It includes additional read operations, such as terminal I/O. It does not indicate the actual disk I/O operations, as some parts of the read operation might have been satisfied from the page cache. This metric has a "Process" dimension (dt.entity.process_group_instance).
Process I/O requested write bytes
This metric provides the statistics about the I/O write operations a process requests. It is supported on Linux and AIX. The metric value is a sum of I/O bytes requested to be written to the storage by PGI processes per second. It includes additional write operations, such as terminal I/O. It does not indicate the actual disk I/O operations, as some parts of the write operation might have been satisfied from the page cache. This metric has a "Process" dimension (dt.entity.process_group_instance).
Process resource exhausted memory counter
This metric provides the counter of "Memory resource exhausted" events for a process. The metric value is the number of events all process workers generated in a minute. JVM generates the memory resource exhausted events when it is out of memory. This metric helps to identify Java processes with excessive memory usage. It has a "Process" dimension (dt.entity.process_group_instance).
Process page faults counter
This metric provides the rate of page faults of a process. The metric value is the sum of page faults per time unit of every process worker. A page fault occurs when the process attempts to access a memory block not stored in the RAM. It means that the block has to be identified in the virtual memory and then loaded from the storage. The lower values are better. A high number of page faults may indicate reduced performance due to insufficient memory size. It has a "Process" dimension (dt.entity.process_group_instance).
Process memory usage
This metric provides the percentage of memory used by a process. It helps to identify processes with high memory resource consumption and memory leaks. The metric value is the sum of the memory used by every process worker divided by the total available memory in the host. The result is expressed in percentage. It has a "Process" dimension (dt.entity.process_group_instance).
Process memory
This metric provides the memory usage of a process. It helps to identify processes with high memory resource consumption and memory leaks. The metric value represents the sum of every process worker's used memory size (including shared memory). The result is expressed in bytes. It has a "Process" dimension (dt.entity.process_group_instance).
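The two memory metrics above differ only in normalization: "Process memory" is the summed worker usage in bytes, while "Process memory usage" divides that sum by the host's total memory. A hedged sketch of both (helper names and sample sizes are illustrative):

```go
package main

import "fmt"

// processMemory sums every worker's used memory size, in bytes.
func processMemory(workerBytes []uint64) uint64 {
	var total uint64
	for _, b := range workerBytes {
		total += b
	}
	return total
}

// processMemoryUsagePercent divides that sum by the host's total memory.
func processMemoryUsagePercent(workerBytes []uint64, hostTotalBytes uint64) float64 {
	if hostTotalBytes == 0 {
		return 0
	}
	return 100 * float64(processMemory(workerBytes)) / float64(hostTotalBytes)
}

func main() {
	workers := []uint64{512 << 20, 256 << 20} // two workers: 512 MiB + 256 MiB
	fmt.Println(processMemory(workers))                    // 805306368
	fmt.Println(processMemoryUsagePercent(workers, 8<<30)) // 9.375 (of an 8 GiB host)
}
```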
Retransmission base received per second on host
Retransmission base received
Retransmission base received per second
Retransmission base sent per second on host
Retransmission base sent
Retransmission base sent per second
Retransmitted packets received per second on host
Retransmitted packets received
Retransmitted packets received per second
Retransmitted packets sent per second on host
Retransmitted packets
Retransmitted packets sent per second
Packet retransmissions
Packet retransmissions
Incoming packet retransmissions
Incoming packet retransmissions
Outgoing packet retransmissions
Outgoing packet retransmissions
Packets received
Packets received per second
Packets sent
Packets sent per second
TCP connectivity
Percentage of successfully established TCP sessions
New session received per second on host
New session received
New session received per second
New session received
New session received per second on localhost
Session reset received per second on host
Session reset received
Session reset received per second
Session reset received
Session reset received per second on localhost
Session timeout received per second on host
Session timeout received
Session timeout received per second
Session timeout received
Session timeout received per second on localhost
Traffic
Traffic in
Incoming network traffic at PGI
Traffic out
Outgoing network traffic from PGI
Bytes received
Bytes received per second
Bytes sent
Bytes sent per second
Ack-round-trip time
Average latency between outgoing TCP data and ACK
Requests
Requests per second
Server responsiveness
Round-trip time
Average TCP session handshake RTT
Throughput
Used network bandwidth
Process count per process group
This metric provides the number of processes in a process group. It can tell how many instances of the technology are running in the monitored environment. It has a "Process Group" dimension.
Worker processes
This metric provides the number of process workers. Too few worker processes may lead to performance degradation, while too many may waste available resources. Configuration of workers should be suitable for average workload and be able to scale up with higher demand. It has a "Process" dimension (dt.entity.process_group_instance).
Process resource exhausted threads counter
This metric provides the counter of "Thread resource exhausted" events for a process. The metric value is the number of events all process workers generated in a minute. JVM generates the thread resource exhausted events when it cannot create a new thread. This metric helps to identify Java processes with excessive memory usage. It has a "Process" dimension (dt.entity.process_group_instance).
z/OS zIIP time
The time spent on the system z integrated information processor (zIIP) since process start, per minute
z/OS zIIP eligible time
The zIIP-eligible time spent on the general-purpose central processor (GCP) since process start, per minute
Go: 502 responses
The number of responses that indicate invalid service responses produced by an application.
Go: Response latency
The average response time from the application to clients.
Go: 5xx responses
The number of responses that indicate repeatedly crashing apps or response issues from applications.
Go: Total requests
The number of all requests representing the overall traffic flow.
Go: Heap idle size
The amount of memory not assigned to the heap or stack. Idle memory can be returned to the operating system or retained by the Go runtime for later reassignment to the heap or stack.
Go: Heap live size
The amount of memory considered live by the Go garbage collector. This metric accumulates memory retained by the most recent garbage collector run and allocated since then.
Go: Heap allocated Go objects count
The number of Go objects allocated on the Go heap.
Go: Committed memory
The amount of memory committed to the Go runtime heap.
Go: Used memory
The amount of memory used by the Go runtime heap.
Go: Garbage collector invocation count
The number of Go garbage collector runs.
Go: Go to C language (cgo) call count
The number of Go to C language (cgo) calls.
Go: Go runtime system call count
The number of system calls executed by the Go runtime. This number doesn't include system calls performed by user code.
Go: Average number of active Goroutines
The average number of active Goroutines.
Go: Average number of inactive Goroutines
The average number of inactive Goroutines.
Go: Application Goroutine count
The number of Goroutines instantiated by the user application.
Go: System Goroutine count
The number of Goroutines instantiated by the Go runtime.
Go: Worker thread count
The number of operating system threads instantiated to execute Goroutines. Go doesn't terminate worker threads; it keeps them in a parked state for future reuse.
Go: Parked worker thread count
The number of worker threads parked by Go runtime. A parked worker thread doesn't consume CPU cycles until the Go runtime unparks the thread.
Go: Out-of-work worker thread count
The number of worker threads whose associated scheduling context has no more Goroutines to execute. When this happens, the worker thread attempts to steal Goroutines from another scheduling context or the global run queue. If the stealing fails, the worker thread parks itself after some time. The same mechanism also covers high-workload scenarios: when an idle scheduling context exists, the Go runtime unparks a parked worker thread and associates it with that idle scheduling context. The unparked worker thread is then in the 'out of work' state and starts Goroutine stealing.
Go: Idle scheduling context count
The number of scheduling contexts that have no more Goroutines to execute and for which Goroutine acquisition from the global run queue or other scheduling contexts has failed.
Go: Global Goroutine run queue size
The number of Goroutines in the global run queue. Goroutines are placed in the global run queue if the worker thread used to execute a blocking system call can't acquire a scheduling context. Scheduling contexts periodically acquire Goroutines from the global run queue.
Backend bytes received
Backend bytes sent
Connection errors
Response errors
Queued requests
Response time
Current backend sessions
Session usage backend
Bytes received
Bytes sent
Frontend bytes received
Frontend bytes sent
Request errors
Requests
Current frontend sessions
Session usage frontend
HTTP 4xx errors
HTTP 5xx errors
Idle percentage
Sessions
Consumer count
Delivering count
Message count
Messages added
Scheduled count
JVM loaded classes
The number of classes that are currently loaded in the Java virtual machine, https://dt-url.net/l2c34jw
JVM total number of loaded classes
The total number of classes that have been loaded since the Java virtual machine has started execution, https://dt-url.net/d0y347x
JVM unloaded classes
The total number of classes unloaded since the Java virtual machine has started execution, https://dt-url.net/d7g34bi
Garbage collection total activation count
The total number of collections that have occurred for all pools, https://dt-url.net/oz834vd
Garbage collection total collection time
The approximate accumulated collection elapsed time in milliseconds for all pools, https://dt-url.net/oz834vd
Garbage collection suspension time
Time spent in milliseconds between GC pause starts and GC pause ends, https://dt-url.net/zj434js
Garbage collection count
The total number of collections that have occurred in that pool, https://dt-url.net/z9034yg
Garbage collection time
The approximate accumulated collection elapsed time in milliseconds in that pool, https://dt-url.net/z9034yg
JVM heap memory pool committed bytes
The amount of memory (in bytes) that is guaranteed to be available for use by the Java virtual machine, https://dt-url.net/1j034o0
JVM heap memory max bytes
The maximum amount of memory (in bytes) that can be used for memory management, https://dt-url.net/1j034o0
JVM heap memory pool used bytes
The amount of memory currently used by the memory pool (in bytes), https://dt-url.net/1j034o0
JVM runtime free memory
An approximation to the total amount of memory currently available for future allocated objects, measured in bytes, https://dt-url.net/2mm34yx
JVM runtime max memory
The maximum amount of memory that the virtual machine will attempt to use, measured in bytes, https://dt-url.net/lzq34mm
JVM runtime total memory
The total amount of memory currently available for current and future objects, measured in bytes, https://dt-url.net/otu34eo
Process memory allocation bytes
Process memory allocation objects count
Process memory survived objects bytes
Process memory survived objects count
DiscoveryClient-HTTPClient_RequestConnectionTimer_count
DiscoveryClient-HTTPClient_RequestConnectionTimer_max
DiscoveryClient-HTTPClient_RequestConnectionTimer_min
DiscoveryClient-HTTPClient_RequestConnectionTimer_totalTime
DiscoveryClient_Failed
DiscoveryClient_Reregister
DiscoveryClient_Retry
ZoneStats_CircuitBreakerTrippedCount
ZoneStats_InstanceCount
Alive workers
Alive workers
Master apps
Master apps
Processing time - count
Processing time - count
Processing time - mean
Processing time - mean
Processing time - one minute rate
Processing time - one minute rate
Active jobs
Active jobs
Total jobs
Total jobs
Failed stages
Failed stages
Running stages
Running stages
Waiting stages
Waiting stages
Waiting apps
Waiting apps
Worker cores free
Worker cores free
Worker cores used
Worker cores used
Worker executors
Worker executors
Worker free memory (MB)
Worker free memory (MB)
Worker memory used (MB)
Worker memory used (MB)
Master workers
Master workers
JVM average number of active threads
JVM average number of inactive threads
JVM thread count
The current number of live threads including both daemon and non-daemon threads, https://dt-url.net/s02346y
JVM total CPU time
Active count
Active count (XA)
Available count
Available count (XA)
Created count
Created count (XA)
Destroyed count
Destroyed count (XA)
Idle count
Idle count (XA)
In use count
In use count (XA)
Timed out
Timed out (XA)
Total blocking time
Total blocking time (XA)
Total creation time
Total creation time (XA)
Total get time
Total get time (XA)
Total pool time
Total pool time (XA)
Total usage time
Total usage time (XA)
Wait count
Wait count (XA)
Max pool size
Max pool size (XA)
Jetty busy threads
Jetty total connections
Jetty open connections
Jetty idle threads
Jetty request queue size
Jetty request count
Jetty total response bytes
Kafka connect - Incoming byte rate
Kafka connect - Outgoing byte rate
Kafka connect - Requests
Kafka connect - Request size
Kafka consumer - Incoming byte rate
Kafka consumer - Outgoing byte rate
Kafka consumer - Requests
Kafka consumer - Request size
Kafka broker - Leader election rate
Kafka broker - Unclean election rate
Kafka log - Log flush mean time
Kafka log - Log flush 95th percentile
Kafka broker - Request queue size
Kafka network - FetchConsumer requests per second
Kafka network - FetchFollower requests per second
Kafka network - Produce requests per second
Kafka network - Total time per FetchConsumer request
Kafka network - Total time per FetchFollower request
Kafka network - Total time per Produce request
Kafka producer - Incoming byte rate
Kafka producer - Outgoing byte rate
Kafka producer - Requests
Kafka producer - Request size
Kafka broker - Incoming byte rate
Kafka broker - Outgoing byte rate
Kafka broker - Failed fetch requests
Kafka broker - Failed produce requests
Kafka broker - Messages in rate
Kafka broker - Fetch request rate
Kafka broker - Produce request rate
Kafka broker - Max follower lag
Kafka broker - Leader count
Kafka broker - Partitions
Kafka broker - Under replicated partitions
Kafka broker - ZooKeeper disconnects
Kafka broker - ZooKeeper expires
Kafka broker - Leader election rate
Kafka broker - Unclean election rate
Kafka controller - Active cluster controllers
Kafka controller - Offline partitions
Kafka broker - Partitions
Kafka broker - Under replicated partitions
In use connections
Free connections
In use time
Managed connections
Wait time
Active threads
Pool size
Request count
Memory bytes
Read throughput
Write throughput
Cache usage
Get commands
Set commands
Connections
Memory evictions
Get hits
Get misses
Memory max bytes
Batch requests
Buffer cache hit ratio
Checkpoint pages
Connection memory
Latch waits
Lock waits
Memory grants outstanding
Memory grants pending
Deadlocks
Page life expectancy
Page splits
Blocked processes
Compilations
Re-Compilations
Target server memory
Total server memory
Transactions
User connections
Active clients
Available connections
Command operations
Current connections
Current queue
Data size
Index size
Storage size
Delete operations
Getmore operations
Indexes
Insert operations
Message asserts
Objects
Query operations
Regular asserts
Resident memory
Rollover asserts
Update operations
User asserts
Virtual memory
Warning asserts
com delete
com delete multi
com insert
com insert select
com replace select
com select
com update
com update multi
connection errors
created tmp disk tables
created tmp tables
innodb buffer pool pages data
innodb buffer pool pages dirty
innodb buffer pool pages free
innodb buffer pool pages total
innodb buffer pool size
innodb data reads
innodb data writes
qcache free memory
qcache hits
qcache not cached
qcache queries in cache
queries
questions
slow queries
slow queries rate
table locks immediate
table locks waited
connected threads
created threads
running threads
NTP time offset
Bytes received
Number of bytes received
Bytes transmitted
Number of bytes transmitted
Retransmitted packets
Number of retransmitted packets
Packets received
Number of packets received
Packets transmitted
Number of packets transmitted
Retransmission
Percentage of retransmitted packets
Round trip time
Round trip time in milliseconds. Aggregates data from active sessions
Network traffic
Summary of incoming and outgoing network traffic in bits per second
Incoming traffic
Incoming network traffic in bits per second
Outgoing traffic
Outgoing network traffic in bits per second
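The retransmission percentage and the bits-per-second traffic metrics above are simple derivations from raw packet and byte counters. A hedged sketch of both calculations (the function names and inputs are assumptions for illustration, not the exporter's actual schema):

```python
def retransmission_pct(retransmitted, transmitted):
    """Percentage of retransmitted packets; 0.0 when nothing was sent."""
    return 100.0 * retransmitted / transmitted if transmitted else 0.0

def traffic_bits_per_second(byte_count, interval_seconds):
    """Convert a byte count over an interval into bits per second."""
    return byte_count * 8 / interval_seconds

print(retransmission_pct(5, 1000))             # 0.5
print(traffic_bits_per_second(1_250_000, 10))  # 1000000.0 (1 Mbit/s)
```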
Nginx Plus cache free space
Nginx Plus cache hit ratio
Nginx Plus cache hits
Nginx Plus cache misses
Nginx Plus cache used space
Active Nginx Plus server zones
Inactive Nginx Plus server zones
Nginx Plus server zone requests
Nginx Plus server zone traffic in
Nginx Plus server zone traffic out
Healthy Nginx Plus upstream servers
Nginx Plus upstream requests
Nginx Plus upstream traffic in
Nginx Plus upstream traffic out
Unhealthy Nginx Plus upstream servers
Node.js: Active handles
Average number of active handles in the event loop
Node.js: Event loop tick frequency
Average number of event loop iterations (per 10-second interval)
Node.js: Event loop latency
Average latency of expected event completion
Node.js: Work processed latency
Average latency of a work item being enqueued and callback being called
Node.js: Event loop tick duration
Average duration of an event loop iteration (tick)
Node.js: Event loop utilization
Event loop utilization represents the percentage of time the event loop has been active
Node.js: GC heap used
Total size of allocated V8 heap used by application data (post-GC memory snapshot)
Node.js: Process Resident Set Size (RSS)
Amount of main memory occupied by the process
Node.js: V8 heap total
Total size of allocated V8 heap
Node.js: V8 heap used
Total size of allocated V8 heap used by application data (periodic memory snapshot)
Node.js: Number of active threads
Average number of active Node.js worker threads
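Event loop utilization (two entries above) is defined in Node.js's `perf_hooks` as active time divided by total time, i.e. `active / (active + idle)`. A minimal sketch of that calculation in Python, assuming the active and idle durations are already available as millisecond totals:

```python
def event_loop_utilization(active_ms, idle_ms):
    """Fraction of time the event loop was active, following the
    Node.js perf_hooks definition: active / (active + idle)."""
    total = active_ms + idle_ms
    return active_ms / total if total else 0.0

# An event loop active for 250 ms out of a 1000 ms window is 25% utilized.
print(event_loop_utilization(250.0, 750.0))  # 0.25
```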
Background CPU usage
Foreground CPU usage
CPU idle
CPU other processes
Physical read bytes
Physical write bytes
Total wait time
Allocated PGA
PGA aggregate limit
PGA aggregate target
PGA used for work areas
Shared pool free
Redo log space wait time
Redo size increase
Redo write time
Buffer cache hit
Sorts in memory
Time spent on connection management
Time spent on other activities
PL/SQL exec elapsed time
SQL exec time
Time spent on SQL parsing
Active sessions
All sessions
User calls count
Application wait time
Cluster wait time
Concurrency wait time
CPU time
Elapsed time
User I/O wait time
Buffer gets
Direct writes
Disk reads
Executions
Parse calls
Rows processed
Total space
Used space
Number of wait events
Total wait time
Background CPU usage
Foreground CPU usage
CPU idle
CPU other processes
Physical read bytes
Physical write bytes
Total wait time
Allocated PGA
PGA aggregate limit
PGA aggregate target
PGA used for work areas
Shared pool free
Redo log space wait time
Redo size increase
Redo write time
Time spent on connection management
Time spent on other activities
PL/SQL exec elapsed time
SQL exec time
Time spent on SQL parsing
Active sessions
All sessions
User calls count
Application wait time
Cluster wait time
Concurrency wait time
CPU time
Elapsed time
User I/O wait time
Buffer gets
Direct writes
Disk reads
Executions
Parse calls
Rows processed
Total space
Used space
Number of wait events
Total wait time
Buffer cache hit
Sorts in memory
Accepted connections
Active processes
Waiting connections
Max number of waiting connections
Slow requests
Total processes
PHP GC collected count
PHP GC collection duration
PHP GC effectiveness
PHP OPCache JIT buffer free
PHP OPCache JIT buffer size
PHP OPCache free memory
PHP OPCache used memory
PHP OPCache wasted memory
PHP OPCache restarts due to lack of keys
PHP OPCache manual restarts
PHP OPCache restarts due to out of memory
PHP OPCache blocklist misses
PHP OPCache number of cached keys
PHP OPCache number of cached scripts
PHP OPCache hits
PHP OPCache max number of keys
PHP OPCache misses
PHP OPCache interned string buffer size
PHP OPCache number of interned strings
PHP OPCache interned string memory usage
PHP average number of active threads
PHP average number of inactive threads
Buffer hits
Block reads
Cache hit ratio
Index scans
Rows returned by index scans
Active connections
Rows returned by sequential scans
Rows deleted
Rows inserted
Rows updated
Commits
Rollbacks
Python GC collected items from gen 0
Python GC collected items from gen 1
Python GC collected items from gen 2
Python GC collections number in gen 0
Python GC collections number in gen 1
Python GC collections number in gen 2
Python GC time in gen 0
Python GC time in gen 1
Python GC time in gen 2
Python GC uncollectable items in gen 0
Python GC uncollectable items in gen 1
Python GC uncollectable items in gen 2
Number of memory blocks allocated by Python
Number of active Python threads
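The Python GC and thread metrics above have direct standard-library counterparts. A minimal sketch using `gc.get_stats()` (per-generation collection, collected-item, and uncollectable-item counts) and `threading.active_count()`; how the agent actually collects these values is not specified here:

```python
import gc
import threading

# Per-generation totals matching the "Python GC ..." metrics above:
# collections run, items collected, and uncollectable items per generation.
for gen, stats in enumerate(gc.get_stats()):
    print(f"gen {gen}: collections={stats['collections']} "
          f"collected={stats['collected']} "
          f"uncollectable={stats['uncollectable']}")

# Counterpart of "Number of active Python threads".
print("active threads:", threading.active_count())
```

Note that `gc.get_stats()` reports cumulative totals since interpreter start; turning them into the per-interval values a monitoring tool charts requires differencing successive samples.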
auto-delete queues without consumers
channels
cluster channels
cluster connections
cluster consumers
cluster exchanges
cluster ack messages
cluster delivered and get messages
cluster published messages
cluster ready messages
cluster redelivered messages
cluster unroutable messages
cluster unacknowledged messages
cluster node failed
cluster node ok
cluster crashed queues
cluster queues down
cluster flow queues
cluster idle queues
cluster running queues
connections blocked
connections
consumers
available disk space
file descriptors usage
memory usage
messages ack
messages delivered and get
messages published
messages ready
messages redelivered
messages unroutable
messages unacknowledged
node status
processes usage
queues
sockets usage
status failed
status ok
topn ack
topn consumers
topn deliver/get
topn ready messages
topn unacknowledged messages
topn publish
Database average key TTL
Blocked clients
Connected clients
Connected replicas
Evicted keys
Expired keys
Database expired keys
Cache hit ratio
Database keys
Keyspace hits
Keyspace misses
Last interaction with master
Max memory
Memory fragmentation
Memory usage
Rejected connections
Response time
Replica status
Slow queries
Total commands processed
Total connections received
Used memory
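The cache hit ratio and memory fragmentation entries above are derived from counters Redis reports in its `INFO` output (`keyspace_hits`, `keyspace_misses`, `used_memory`, `used_memory_rss`). A hedged sketch of both derivations; the exact formulas the monitoring agent uses are assumed here:

```python
def cache_hit_ratio(keyspace_hits, keyspace_misses):
    """Fraction of key lookups served from the keyspace."""
    lookups = keyspace_hits + keyspace_misses
    return keyspace_hits / lookups if lookups else 0.0

def memory_fragmentation_ratio(used_memory_rss, used_memory):
    """Redis INFO's mem_fragmentation_ratio: OS-resident bytes divided by
    allocator-reported bytes; values well above 1.0 suggest fragmentation."""
    return used_memory_rss / used_memory if used_memory else 0.0

print(cache_hit_ratio(900, 100))                         # 0.9
print(memory_fragmentation_ratio(1_500_000, 1_000_000))  # 1.5
```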
Cache hit ratio
Cache hits for passes
Cache hits
Cache misses
Cache passes
Backend connections
Backend connections failed
Backend connections reused
Sessions accepted
Sessions dropped
Sessions queued
Threads failed
Maximum number of threads
Minimum number of threads
Total number of threads
Requests
Traffic
Carbon - Database Read time 75th Percentile
Carbon - Database Read Events rate in a 15-minute window
Carbon - Database Write time 75th Percentile
Carbon - Database Write Events rate in a 15-minute window
Carbon - Number of faulty services
Carbon - System response time average
Carbon - System response time maximum
Active http listener connections
Active http sender connections
Active https listener connections
Active https sender connections
HTTP - Average time taken by the gateway to read the response from the backend
HTTP - Average time taken by the gateway to read the request sent by the client
HTTP - Average time taken by the gateway to write the request to the backend
HTTP - Average time taken by the gateway to write the response to the client app
HTTP - Average latency
HTTP - Average backend latency
HTTP - Average request mediation latency
HTTP - Average response mediation latency
HTTPS - Average time taken by the gateway to read the response from the backend
HTTPS - Average time taken by the gateway to read the request sent by the client
HTTPS - Average time taken by the gateway to write the request to the backend
HTTPS - Average time taken by the gateway to write the response to the client app
HTTPS - Average latency
HTTPS - Average backend latency
HTTPS - Average request mediation latency
HTTPS - Average response mediation latency
Dropped connections
Number of dropped connections
Handled connections
Number of successfully finished and closed connections
Reading connections
Number of connections which are receiving data from the client
Socket backlog waiting time
Average time needed to queue and handle incoming connections
Waiting connections
Number of connections with no active requests
Writing connections
Number of connections which are sending data to the client
Active worker threads
Number of active worker threads
Idle worker threads
Number of idle worker threads
Maximum worker threads
Maximum number of worker threads
Requests
Number of requests
Traffic
Amount of data transferred
Free pool size
Percent used
Pool size
In use time
Wait time
Number of waiting threads
Live sessions
Active threads
Pool size
Number of requests
Active connections
Current capacity
Failed connection requests
Reconnection failures
Leaked connections
Max capacity
Available connections (idle)
Statement cache size
Statement cache hits
Statement cache misses
Requests for connections
Waiting for connections
z/OS Consumed Service Units per minute
The calculated number of consumed Service Units per minute