Each Dynatrace-supported technology offers multiple "built-in" metrics. Built-in metrics are included in the product out of the box, in some cases as part of built-in extensions.
Metrics that are based on extensions (prefix ext:) and calculated metrics (prefix calc:) are custom metrics, not built-in metrics; DDU consumption for these metrics can vary widely depending on how you use Dynatrace.
To view all the metrics available in your environment, use the GET metrics API call. We recommend the following query parameters:
- pageSize=500 to obtain the largest possible number of metrics in one response.
- fields=displayName,unit,aggregationTypes,dduBillable to obtain the same set of fields as you see in these tables.
- metricSelector=ext:* to obtain all metrics coming from extensions.
- metricSelector=calc:* to obtain all calculated metrics.
The sections below describe inconsistencies and limitations identified for Dynatrace built-in metrics.
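As a sketch, the GET metrics call above can be assembled like this. The tenant URL is a placeholder, and authentication (the Api-Token header) is omitted; only the query-string construction is shown:

```python
from urllib.parse import urlencode

def build_metrics_query(tenant: str, **params: str) -> str:
    """Assemble a GET /api/v2/metrics URL for the given tenant.

    `tenant` is a hypothetical environment URL; pass the recommended
    query parameters as keyword arguments.
    """
    return f"{tenant}/api/v2/metrics?" + urlencode(params)

# The query parameters recommended above (one metricSelector per call):
url = build_metrics_query(
    "https://abc123.live.dynatrace.com",  # hypothetical tenant
    pageSize="500",
    fields="displayName,unit,aggregationTypes,dduBillable",
    metricSelector="ext:*",
)
print(url)
```

Note that metricSelector=ext:* and metricSelector=calc:* are separate queries; run the call twice to list both extension and calculated metrics.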
The Other applications metrics section contains metrics captured for mobile and custom applications. These metrics, which start with builtin:apps.other, are captured without any indication of whether the application is mobile or custom. However, the "billing" application metrics, which start with builtin:billing.apps, are split by application type:
Mobile apps:
builtin:billing.apps.mobile.sessionsWithoutReplayByApplication
builtin:billing.apps.mobile.sessionsWithReplayByApplication
builtin:billing.apps.mobile.userActionPropertiesByMobileApplication
Custom apps:
builtin:billing.apps.custom.sessionsWithoutReplayByApplication
builtin:billing.apps.custom.userActionPropertiesByDeviceApplication
The following "billing" metrics for session count are actually the sum of billed and unbilled user sessions.
builtin:billing.apps.custom.sessionsWithoutReplayByApplication
builtin:billing.apps.mobile.sessionsWithReplayByApplication
builtin:billing.apps.mobile.sessionsWithoutReplayByApplication
builtin:billing.apps.web.sessionsWithReplayByApplication
builtin:billing.apps.web.sessionsWithoutReplayByApplication
If you want to get only the number of billed sessions, set the Type filter to Billed.
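The Type filter above refers to the Data Explorer UI. When querying via the API, the same restriction can be expressed with a metric selector filter transformation. A sketch, assuming the billing dimension is exposed with key "Type" and value "Billed" (verify the actual dimension keys of the metric via the GET metrics API before relying on them):

```python
def billed_only(metric_key: str, dimension: str = "Type", value: str = "Billed") -> str:
    """Build a metric selector that keeps only billed sessions.

    The dimension key "Type" and value "Billed" are assumptions taken
    from the UI filter name; they are not confirmed by this document.
    """
    return f'{metric_key}:filter(eq("{dimension}","{value}"))'

selector = billed_only("builtin:billing.apps.web.sessionsWithoutReplayByApplication")
print(selector)
```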
Different measurement units are used for similar request duration metrics for mobile and custom apps: builtin:apps.other.keyUserActions.requestDuration.os is measured in microseconds, while other request duration metrics (builtin:apps.other.requestTimes.osAndVersion and builtin:apps.other.requestTimes.osAndProvider) are measured in milliseconds.
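When comparing or charting these metrics together, normalize them to one unit first. A minimal sketch (metric keys as named above; 1 millisecond = 1,000 microseconds):

```python
MICROS_PER_MILLI = 1_000

def micros_to_millis(value_us: float) -> float:
    """Convert a microsecond reading (e.g. from
    builtin:apps.other.keyUserActions.requestDuration.os) to milliseconds
    so it is comparable with the requestTimes.* metrics."""
    return value_us / MICROS_PER_MILLI

print(micros_to_millis(250_000))  # 250.0 ms
```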
Custom metrics are defined or installed by the user, while built-in metrics are part of the product by default. Certain built-in metrics are disabled by default and, if turned on, consume DDUs. These metrics cover a wide range of supported technologies, including Apache Tomcat, NGINX, Couchbase, RabbitMQ, Cassandra, Jetty, and many others.
A custom metric is a new type of metric with a user-provided metric identifier and unit of measure. You define the semantics of custom metrics yourself; they aren't included in the default OneAgent installation. Custom metrics are sent to Dynatrace through various interfaces. Once a custom metric is defined, it can be reported for multiple monitored components. Each component's custom metric results in a separate timeseries.
For example, if you define a new custom metric called Files count that counts the newly created files within a directory, this metric can be collected for one host or for several individual hosts. Collecting the same metric for two individual hosts results in two timeseries of the same custom metric type, as shown in the example below:
For the purposes of calculating monitoring consumption, collecting the same custom metric for two hosts counts as two separate custom metrics.
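This counting rule can be sketched directly: each (custom metric, monitored entity) pair is its own timeseries, and each timeseries counts as one custom metric toward consumption. The metric key and host names below are hypothetical:

```python
from itertools import product

def billable_custom_metrics(metric_keys: list[str], hosts: list[str]) -> int:
    """Count custom metrics for consumption purposes: one per distinct
    (metric, entity) timeseries."""
    return len(set(product(metric_keys, hosts)))

# The "Files count" example above, collected on two hosts:
count = billable_custom_metrics(["custom:files.count"], ["host-a", "host-b"])
print(count)  # 2
```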
Reported error count (by OS, app version) [custom]
The number of all reported errors.
Session count (by OS, app version) [custom]
The number of captured user sessions.
Session count (by OS, app version, crash replay feature status) [mobile]
The number of captured user sessions.
Session count (by OS, app version, full replay feature status) [mobile]
The number of captured user sessions.
Reported error count (by OS, app version) [mobile]
The number of all reported errors.
User action rate - affected by JavaScript errors (by key user action, user type) [web]
The percentage of key user actions with detected JavaScript errors.
Apdex (by key user action) [web]
The average Apdex rating for key user actions.
Action count - custom action (by key user action, browser) [web]
The number of custom actions that are marked as key user actions.
Action count - load action (by key user action, browser) [web]
The number of load actions that are marked as key user actions.
Action count - XHR action (by key user action, browser) [web]
The number of XHR actions that are marked as key user actions.
Cumulative Layout Shift - load action (by key user action, user type) [web]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions that are marked as key user actions.
Cumulative Layout Shift - load action (by key user action, geolocation, user type) [web]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions that are marked as key user actions.
Cumulative Layout Shift - load action (by key user action, browser) [web]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions that are marked as key user actions.
DOM interactive - load action (by key user action, browser) [web]
The time taken until a page's status is set to "interactive" and it's ready to receive user input. Calculated for load actions that are marked as key user actions.
Action duration - custom action (by key user action, browser) [web]
The duration of custom actions.
Action duration - load action (by key user action, browser) [web]
The duration of load actions that are marked as key user actions.
Action duration - XHR action (by key user action, browser) [web]
The duration of XHR actions that are marked as key user actions.
Time to first byte - load action (by key user action, browser) [web]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for load actions that are marked as key user actions.
Time to first byte - XHR action (by key user action, browser) [web]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for XHR actions that are marked as key user actions.
First Input Delay - load action (by key user action, user type) [web]
The time from the first interaction with a page to when the user agent can respond to that interaction. Calculated for load actions that are marked as key user actions.
First Input Delay - load action (by key user action, geolocation, user type) [web]
The time from the first interaction with a page to when the user agent can respond to that interaction. Calculated for load actions that are marked as key user actions.
First Input Delay - load action (by key user action, browser) [web]
The time from the first interaction with a page to when the user agent can respond to that interaction. Calculated for load actions that are marked as key user actions.
Largest Contentful Paint - load action (by key user action, user type) [web]
The time taken to render the largest element in the viewport. Calculated for load actions that are marked as key user actions.
Largest Contentful Paint - load action (by key user action, geolocation, user type) [web]
The time taken to render the largest element in the viewport. Calculated for load actions that are marked as key user actions.
Largest Contentful Paint - load action (by key user action, browser) [web]
The time taken to render the largest element in the viewport. Calculated for load actions that are marked as key user actions.
Load event end - load action (by key user action, browser) [web]
The time taken to complete the load event of a page. Calculated for load actions that are marked as key user actions.
Load event start - load action (by key user action, browser) [web]
The time taken to begin the load event of a page. Calculated for load actions that are marked as key user actions.
Network contribution - load action (by key user action, user type) [web]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for load actions that are marked as key user actions.
Network contribution - XHR action (by key user action, user type) [web]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for XHR actions that are marked as key user actions.
Response end - load action (by key user action, browser) [web]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for load actions that are marked as key user actions.
Response end - XHR action (by key user action, browser) [web]
The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for XHR actions that are marked as key user actions.
Server contribution - load action (by key user action, user type) [web]
The time spent on server-side processing for a page. Calculated for load actions that are marked as key user actions.
Server contribution - XHR action (by key user action, user type) [web]
The time spent on server-side processing for a page. Calculated for XHR actions that are marked as key user actions.
Speed index - load action (by key user action, browser) [web]
The score measuring how quickly the visible parts of a page are rendered. Calculated for load actions that are marked as key user actions.
Visually complete - load action (by key user action, browser) [web]
The time taken to fully render content in the viewport. Calculated for load actions that are marked as key user actions.
Visually complete - XHR action (by key user action, browser) [web]
The time taken to fully render content in the viewport. Calculated for XHR actions that are marked as key user actions.
Error count (by key user action, user type, error type, error origin) [web]
The number of detected errors that occurred during key user actions.
User action count with errors (by key user action, user type) [web]
The number of key user actions with detected errors.
JavaScript errors count during user actions (by key user action, user type) [web]
The number of detected JavaScript errors that occurred during key user actions.
JavaScript error count without user actions (by key user action, user type) [web]
The number of detected standalone JavaScript errors (occurred between key user actions).
User action rate - affected by errors (by key user action, user type) [web]
The percentage of key user actions with detected errors.
Action count - custom action (by browser) [web]
The number of custom actions.
Action count - load action (by browser) [web]
The number of load actions.
Action count - XHR action (by browser) [web]
The number of XHR actions.
Action count (by Apdex category) [web]
The number of user actions.
Action with key performance metric count (by action type, geolocation, user type) [web]
The number of user actions that have a key performance metric and mapped geolocation.
Action duration - custom action (by browser) [web]
The duration of custom actions.
Action duration - load action (by browser) [web]
The duration of load actions.
Action duration - XHR action (by browser) [web]
The duration of XHR actions.
Actions per session average (by users, user type) [web]
The average number of user actions per user session.
Session count - estimated active sessions (by users, user type) [web]
The estimated number of active user sessions. An active session is one in which a user has been confirmed to still be active at a given time. For this high-cardinality metric, the HyperLogLog algorithm is used to approximate the session count.
User count - estimated active users (by users, user type) [web]
The estimated number of unique active users. For this high-cardinality metric, the HyperLogLog algorithm is used to approximate the user count.
User action rate - affected by JavaScript errors (by user type) [web]
The percentage of user actions with detected JavaScript errors.
Apdex (by user type) [web]
The average Apdex rating for user actions.
Apdex (by geolocation, user type) [web]
The average Apdex rating for user actions that have a mapped geolocation.
Bounce rate (by users, user type) [web]
The percentage of sessions in which users viewed only a single page and triggered only a single web request. Calculated by dividing single-page sessions by all sessions.
Conversion rate - sessions (by users, user type) [web]
The percentage of sessions in which at least one conversion goal was reached. Calculated by dividing converted sessions by all sessions.
Session count - converted sessions (by users, user type) [web]
The number of sessions in which at least one conversion goal was reached.
Cumulative Layout Shift - load action (by user type) [web]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions.
Cumulative Layout Shift - load action (by geolocation, user type) [web]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions.
Cumulative Layout Shift - load action (by browser) [web]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions.
DOM interactive - load action (by browser) [web]
The time taken until a page's status is set to "interactive" and it's ready to receive user input. Calculated for load actions.
Session count - estimated ended sessions (by users, user type) [web]
The number of completed user sessions.
Rage click count [web]
The number of detected rage clicks.
Time to first byte - load action (by browser) [web]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for load actions.
Time to first byte - XHR action (by browser) [web]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for XHR actions.
First Input Delay - load action (by user type) [web]
The time from the first interaction with a page to when the user agent can respond to that interaction. Calculated for load actions.
First Input Delay - load action (by geolocation, user type) [web]
The time from the first interaction with a page to when the user agent can respond to that interaction. Calculated for load actions.
First Input Delay - load action (by browser) [web]
The time from the first interaction with a page to when the user agent can respond to that interaction. Calculated for load actions.
Largest Contentful Paint - load action (by user type) [web]
The time taken to render the largest element in the viewport. Calculated for load actions.
Largest Contentful Paint - load action (by geolocation, user type) [web]
The time taken to render the largest element in the viewport. Calculated for load actions.
Largest Contentful Paint - load action (by browser) [web]
The time taken to render the largest element in the viewport. Calculated for load actions.
Load event end - load action (by browser) [web]
The time taken to complete the load event of a page. Calculated for load actions.
Load event start - load action (by browser) [web]
The time taken to begin the load event of a page. Calculated for load actions.
Network contribution - load action (by user type) [web]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for load actions.
Network contribution - XHR action (by user type) [web]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for XHR actions.
Response end - load action (by browser) [web]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for load actions.
Response end - XHR action (by browser) [web]
The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for XHR actions.
Server contribution - load action (by user type) [web]
The time spent on server-side processing for a page. Calculated for load actions.
Server contribution - XHR action (by user type) [web]
The time spent on server-side processing for a page. Calculated for XHR actions.
Session duration (by users, user type) [web]
The average duration of user sessions.
Speed index - load action (by browser) [web]
The score measuring how quickly the visible parts of a page are rendered. Calculated for load actions.
Session count - estimated started sessions (by users, user type) [web]
The number of started user sessions.
Visually complete - load action (by browser) [web]
The time taken to fully render content in the viewport. Calculated for load actions.
Visually complete - XHR action (by browser) [web]
The time taken to fully render content in the viewport. Calculated for XHR actions.
Error count (by user type, error type, error origin) [web]
The number of detected errors.
Error count during user actions (by user type, error type, error origin) [web]
The number of detected errors that occurred during user actions.
Standalone error count (by user type, error type, error origin) [web]
The number of detected standalone errors (occurred between user actions).
User action count - with errors (by user type) [web]
The number of user actions with detected errors.
Error count for Davis (by user type, error type, error origin, error context) [web]
The number of errors that were included in Davis AI problem detection and analysis.
Interaction to next paint
JavaScript error count - during user actions (by user type) [web]
The number of detected JavaScript errors that occurred during user actions.
JavaScript error count - without user actions (by user type) [web]
The number of detected standalone JavaScript errors (occurred between user actions).
User action rate - affected by errors (by user type) [web]
The percentage of user actions with detected errors.
Apdex (by OS, geolocation) [mobile, custom]
The Apdex rating for all captured user actions.
Apdex (by OS, app version) [mobile, custom]
The Apdex rating for all captured user actions.
User count - estimated users affected by crashes (by OS) [mobile, custom]
The estimated number of unique users affected by a crash. For this high cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
User count - estimated users affected by crashes (by OS, app version) [mobile, custom]
The estimated number of unique users affected by a crash. For this high cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
User rate - estimated users affected by crashes (by OS) [mobile, custom]
The estimated percentage of unique users affected by a crash. For this high cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
Crash count (by OS, geolocation) [mobile, custom]
The number of detected crashes.
Crash count (by OS, app version) [mobile, custom]
The number of detected crashes.
User rate - estimated crash free users (by OS) [mobile, custom]
The estimated percentage of unique users not affected by a crash. For this high cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
Apdex (by key user action, OS) [mobile, custom]
The Apdex rating for all captured key user actions.
Action count (by key user action, OS, Apdex category) [mobile, custom]
The number of captured key user actions.
Action duration (by key user action, OS) [mobile, custom]
The duration of key user actions.
Reported error count (by key user action, OS) [mobile, custom]
The number of reported errors for key user actions.
Request count (by key user action, OS) [mobile, custom]
The number of captured web requests associated with key user actions.
Request duration (by key user action, OS) [mobile, custom]
The duration of web requests for key user actions. Be aware that this metric is measured in microseconds while other request duration metrics for mobile and custom apps are measured in milliseconds.
Request error count (by key user action, OS) [mobile, custom]
The number of detected web request errors for key user actions.
Request error rate (by key user action, OS) [mobile, custom]
The percentage of web requests with detected errors for key user actions.
New user count (by OS) [mobile, custom]
The number of users that launched the application(s) for the first time. The metric is tied to specific devices, so users are counted multiple times if they install the application on multiple devices. The metric doesn't distinguish between multiple users that share the same device and application installation.
Request count (by OS, provider) [mobile, custom]
The number of captured web requests.
Request count (by OS, app version) [mobile, custom]
The number of captured web requests.
Request error count (by OS, provider) [mobile, custom]
The number of detected web request errors.
Request error count (by OS, app version) [mobile, custom]
The number of detected web request errors.
Request error rate (by OS, provider) [mobile, custom]
The percentage of web requests with detected errors.
Request error rate (by OS, app version) [mobile, custom]
The percentage of web requests with detected errors.
Request duration (by OS, provider) [mobile, custom]
The duration of web requests.
Request duration (by OS, app version) [mobile, custom]
The duration of web requests.
Session count (by agent version, OS) [mobile, custom]
The number of captured user sessions.
Session count (by OS, crash reporting level) [mobile, custom]
The number of captured user sessions.
Session count (by OS, data collection level) [mobile, custom]
The number of captured user sessions.
Session count - estimated (by OS, geolocation) [mobile, custom]
The estimated number of captured user sessions. For this high cardinality metric, the HyperLogLog algorithm is used to approximate the number of sessions.
Session count (by OS, app version) [mobile, custom]
The number of captured user sessions.
Action count (by geolocation, Apdex category) [mobile, custom]
The number of captured user actions.
Action count (by OS, Apdex category) [mobile, custom]
The number of captured user actions.
Action count (by OS, app version) [mobile, custom]
The number of captured user actions.
Action duration (by OS, app version) [mobile, custom]
The duration of user actions.
User count - estimated (by OS, geolocation) [mobile, custom]
The estimated number of unique users that have a mapped geolocation. The metric is based on 'internalUserId'. When 'dataCollectionLevel' is set to 'performance' or 'off', 'internalUserId' is changed at each app start. For this high cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
User count - estimated (by OS, app version) [mobile, custom]
The estimated number of unique users. The metric is based on 'internalUserId'. When 'dataCollectionLevel' is set to 'performance' or 'off', 'internalUserId' is changed at each app start. For this high cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
Session count - billed and unbilled [custom]
The number of billed and unbilled user sessions. To get only the number of billed sessions, set the "Type" filter to "Billed".
Total user action and session properties
The number of billed user action and user session properties.
Session count - billed and unbilled - with Session Replay [mobile]
The number of billed and unbilled user sessions that include Session Replay data. To get only the number of billed sessions, set the "Type" filter to "Billed".
Session count - billed and unbilled [mobile]
The total number of billed and unbilled user sessions (with and without Session Replay data). To get only the number of billed sessions, set the "Type" filter to "Billed".
Total user action and session properties
The number of billed user action and user session properties.
Session count - billed and unbilled - with Session Replay [web]
The number of billed and unbilled user sessions that include Session Replay data. To get only the number of billed sessions, set the "Type" filter to "Billed".
Session count - billed and unbilled - without Session Replay [web]
The number of billed and unbilled user sessions that do not include Session Replay data. To get only the number of billed sessions, set the "Type" filter to "Billed".
Total user action and session properties
The number of billed user action and user session properties.
(DPS) Total Custom Events Classic billing usage
The number of custom events ingested aggregated over all monitored entities. Custom events include events sent to Dynatrace via the Events API or events created by a log event extraction rule. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Custom Events Classic billing usage by monitored entity
The number of custom events ingested split by monitored entity. Custom events include events sent to Dynatrace via the Events API or events created by a log event extraction rule. For details on the events billed, refer to the usage_by_event_info metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Custom Events Classic billing usage by event info
The number of custom events ingested split by event info. Custom events include events sent to Dynatrace via the Events API or events created by a log event extraction rule. The info contains the context of the event plus the configuration ID. For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Recorded metric data points per metric key
The number of reported metric data points split by metric key. This metric does not account for included metric data points available to your environment.
(DPS) Total billed metric data points
The total number of metric data points after deducting the included metric data points. This is the rate-card value used for billing. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Total metric data points billable for Foundation & Discovery hosts
The number of metric data points billable for Foundation & Discovery hosts.
(DPS) Total metric data points billed for Full-Stack hosts
The number of metric data points billed for Full-Stack hosts. To view the unadjusted usage per host, use builtin:billing.full_stack_monitoring.metric_data_points.ingested_by_host. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay.
(DPS) Total metric data points billed for Infrastructure-monitored hosts
The number of metric data points billed for Infrastructure-monitored hosts. To view the unadjusted usage per host, use builtin:billing.infrastructure_monitoring.metric_data_points.ingested_by_host. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay.
(DPS) Total metric data points billed by other entities
The number of billed metric data points that cannot be assigned to a host. The values reported in this metric are not eligible for included metric deduction and are billed as is. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the monitored entities that consume this usage, use the other_by_entity metric.
(DPS) Billed metric data points reported and split by other entities
The number of billed metric data points split by entities that cannot be assigned to a host. The values reported in this metric are not eligible for included metric deduction and will be billed as is. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Custom Traces Classic billing usage
The number of spans ingested aggregated over all monitored entities. A span is a single operation within a distributed trace, ingested into Dynatrace. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Custom Traces Classic billing usage by monitored entity
The number of spans ingested split by monitored entity. A span is a single operation within a distributed trace, ingested into Dynatrace. For details on span types, refer to the usage_by_span_type metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Custom Traces Classic billing usage by span type
The number of spans ingested split by span type. A span is a single operation within a distributed trace, ingested into Dynatrace. Span kinds can be CLIENT, SERVER, PRODUCER, CONSUMER, or INTERNAL. For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
DDU events consumption by event info
License consumption of Davis data units by events pool split by event info
DDU events consumption by monitored entity
License consumption of Davis data units by events pool split by monitored entity
Total DDU events consumption
Sum of license consumption of Davis data units aggregated over all monitored entities for the events pool
DDU log consumption by log path
License consumption of Davis data units by log pool split by log path
DDU log consumption by monitored entity
License consumption of Davis data units by log pool split by monitored entity
Total DDU log consumption
Sum of license consumption of Davis data units aggregated over all logs for the log pool
DDU metrics consumption by monitored entity
License consumption of Davis data units by metrics pool split by monitored entity
DDU metrics consumption by monitored entity w/o host-unit included DDUs
License consumption of Davis data units by metrics pool split by monitored entity (aggregates host-unit included metrics, so value might be higher than actual consumption)
Reported metrics DDUs by metric key
Reported Davis data units usage by metrics pool split by metric key
Total DDU metrics consumption
Sum of license consumption of Davis data units aggregated over all metrics for the metrics pool
DDU serverless consumption by function
License consumption of Davis data units by serverless pool split by Amazon Resource Names (ARNs)
DDU serverless consumption by service
License consumption of Davis data units by serverless pool split by service
Total DDU serverless consumption
Sum of license consumption of Davis data units aggregated over all services for the serverless pool
DDU traces consumption by span type
License consumption of Davis data units by traces pool split by SpanKind, as defined in OpenTelemetry specification
DDU traces consumption by monitored entity
License consumption of Davis data units by traces pool split by monitored entity
Total DDU traces consumption
Sum of license consumption of Davis data units aggregated over all monitored entities for the traces pool
DDU included per host
Included Davis data units per host
DDU included metric data points per host
Included metric data points per host
[Deprecated] (DPS) Business events usage - Ingest & Process
Business events Ingest & Process usage, tracked as bytes ingested within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. [Deprecated] This metric is replaced by billing usage events.
[Deprecated] (DPS) Business events usage - Query
Business events Query usage, tracked as bytes read within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. [Deprecated] This metric is replaced by billing usage events.
[Deprecated] (DPS) Business events usage - Retain
Business events Retain usage, tracked as total storage used within the hour, in bytes. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. [Deprecated] This metric is replaced by billing usage events.
(DPS) Ingested metric data points for Foundation & Discovery
The number of metric data points aggregated over all Foundation & Discovery hosts.
(DPS) Ingested metric data points for Foundation & Discovery per host
The number of metric data points split by Foundation & Discovery hosts.
(DPS) Foundation & Discovery billing usage
The total number of host-hours being monitored by Foundation & Discovery, counted in 15 min intervals.
(DPS) Foundation & Discovery billing usage per host
The host-hours being monitored by Foundation & Discovery, counted in 15 min intervals.
(DPS) Available included metric data points for Full-Stack hosts
The total number of included metric data points that can be deducted from the metric data points reported by Full-Stack hosts. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of applied included metric data points, use builtin:billing.full_stack_monitoring.metric_data_points.included_used . If the difference between this metric and the applied metrics is greater than 0, then more metrics can be ingested using Full-Stack Monitoring without incurring additional costs.
(DPS) Used included metric data points for Full-Stack hosts
The number of consumed included metric data points per host monitored with Full-Stack Monitoring. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of potentially available included metrics, use builtin:billing.full_stack_monitoring.metric_data_points.included . If the difference between the available metrics and this metric is greater than zero, more metrics can be ingested on Full-Stack hosts without incurring additional costs.
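The headroom described above is a simple difference. A sketch, where the two inputs would come from querying builtin:billing.full_stack_monitoring.metric_data_points.included and builtin:billing.full_stack_monitoring.metric_data_points.included_used for the same 15-minute interval (the sample values are made up):

```python
# Sketch: remaining included metric data points for a 15-minute interval.
# Inputs would come from the .included and .included_used billing metrics;
# the numbers below are hypothetical.

def included_headroom(included, included_used):
    """Data points still ingestible at no extra cost in this interval."""
    return max(included - included_used, 0)

# Hypothetical values for one 15-minute interval:
print(included_headroom(included=1_000_000, included_used=750_000))  # 250000
```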
(DPS) Total metric data points reported by Full-Stack hosts
The number of metric data points aggregated over all Full-Stack hosts. The values reported in this metric are eligible for included-metric-data-point deduction. Use this total metric to query longer timeframes without losing precision or performance. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view usage on a per-host basis, use builtin:billing.full_stack_monitoring.metric_data_points.ingested_by_host .
(DPS) Metric data points reported and split by Full-Stack hosts
The number of metric data points split by Full-Stack hosts. The values reported in this metric are eligible for included-metric-data-point deduction. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. The pool of available included metrics for a "15-minute interval" is visible via builtin:billing.full_stack_monitoring.metric_data_points.included . To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Full-Stack Monitoring billing usage
The total GiB memory of hosts being monitored in full-stack mode, counted in 15 min intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the hosts causing the usage, refer to the usage_per_host metric. For details on the containers causing the usage, refer to the usage_per_container metric.
(DPS) Full-stack usage by container type
The total GiB memory of containers being monitored in full-stack mode, counted in 15 min intervals.
(DPS) Full-Stack Monitoring billing usage per host
The GiB memory per host being monitored in full-stack mode, counted in 15 min intervals. For example, a host with 8 GiB of RAM monitored for 1 hour has 4 data points with a value of 2. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
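The worked example above can be sketched as arithmetic: a host's Full-Stack usage per 15-minute interval is its GiB of memory multiplied by the fraction of an hour the interval covers.

```python
# Sketch of the Full-Stack GiB-hour calculation described above.

def gib_hours_per_interval(host_memory_gib, interval_minutes=15):
    """Usage value written for one reporting interval."""
    return host_memory_gib * interval_minutes / 60

# An 8 GiB host monitored for one hour yields 4 data points of value 2:
per_interval = gib_hours_per_interval(8)
print(per_interval)       # 2.0
print(4 * per_interval)   # 8.0 GiB-hours for the full hour
```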
(DPS) Available included metric data points for Infrastructure-monitored hosts
The total number of included metric data points that can be deducted from the metric data points reported by Infrastructure-monitored hosts. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of applied included metric data points, use builtin:billing.infrastructure_monitoring.metric_data_points.included_used . If the difference between this metric and the applied metrics is greater than zero, more metrics can be ingested on Infrastructure-monitored hosts without incurring additional costs.
(DPS) Used included metric data points for Infrastructure-monitored hosts
The number of consumed included metric data points for Infrastructure-monitored hosts. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of potentially available included metrics, use builtin:billing.infrastructure_monitoring.metric_data_points.included . If the difference between the available metrics and this metric is greater than zero, more metrics can be ingested on Infrastructure-monitored hosts without incurring additional costs.
(DPS) Total metric data points reported by Infrastructure-monitored hosts
The number of metric data points aggregated over all Infrastructure-monitored hosts. The values reported in this metric are eligible for included-metric-data-point deduction. Use this total metric to query longer timeframes without losing precision or performance. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view usage on a per-host basis, use builtin:billing.infrastructure_monitoring.metric_data_points.ingested_by_host .
(DPS) Metric data points reported and split by Infrastructure-monitored hosts
The number of metric data points split by Infrastructure-monitored hosts. The values reported in this metric are eligible for included-metric-data-point deduction. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. The pool of available included metrics for a "15-minute interval" is visible via builtin:billing.infrastructure_monitoring.metric_data_points.included . To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Infrastructure Monitoring billing usage
The total number of host-hours being monitored in infrastructure-only mode, counted in 15 min intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the hosts causing the usage, refer to the usage_per_host metric.
(DPS) Infrastructure Monitoring billing usage per host
The host-hours being monitored in infrastructure-only mode, counted in 15 min intervals. A host monitored for the whole hour has 4 data points with a value of 0.25, regardless of its memory size. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
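The host-hours example above is the same interval arithmetic with a fixed rate: each monitored host contributes 0.25 host-hours per 15-minute interval, independent of its memory size.

```python
# Sketch of the infrastructure-only host-hours calculation described above.

def host_hours(intervals_monitored, interval_minutes=15):
    """Host-hours accumulated over a number of 15-minute reporting intervals."""
    return intervals_monitored * interval_minutes / 60

# A host monitored for a whole hour (4 intervals) yields 1 host-hour,
# reported as 4 data points of value 0.25:
print(host_hours(4))  # 1.0
print(host_hours(1))  # 0.25
```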
(DPS) Kubernetes Platform Monitoring billing usage
The total number of monitored Kubernetes pods per hour, split by cluster and namespace and counted in 15 min intervals. A pod monitored for the whole hour has 4 data points with a value of 0.25.
(DPS) Log Management and Analytics usage - Ingest & Process
Log Management and Analytics Ingest & Process usage, tracked as bytes ingested within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay.
(DPS) Log Management and Analytics usage - Query
Log Management and Analytics Query usage, tracked as bytes read within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay.
(DPS) Log Management and Analytics usage - Retain
Log Management and Analytics Retain usage, tracked as total storage used within the hour, in bytes. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay.
(DPS) Total Log Monitoring Classic billing usage
The number of log records ingested aggregated over all monitored entities. A log record is recognized by either a timestamp or a JSON object. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Log Monitoring Classic billing usage by monitored entity
The number of log records ingested split by monitored entity. A log record is recognized by either a timestamp or a JSON object. For details on the log path, refer to the usage_by_log_path metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Log Monitoring Classic billing usage by log path
The number of log records ingested split by log path. A log record is recognized by either a timestamp or a JSON object. For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Mainframe Monitoring billing usage
The total number of MSU-hours being monitored, counted in 15 min intervals.
(DPS) Total Real-User Monitoring Property (mobile) billing usage
(Mobile) User action and session properties count. For details on how usage is calculated, refer to the documentation or builtin:billing.real_user_monitoring.mobile.property.usage_by_application . Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Real-User Monitoring Property (mobile) billing usage by application
(Mobile) User action and session properties count by application. The billed value is calculated based on the number of sessions reported in builtin:billing.real_user_monitoring.mobile.session.usage_by_app + builtin:billing.real_user_monitoring.mobile.session_with_replay.usage_by_app , plus the number of configured properties that exceed the included number of properties (free of charge) offered for a given application. Data points are only written for billed sessions. If the value is 0, you have available metric data points. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Real-User Monitoring (mobile) billing usage
(Mobile) Session count without Session Replay. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions end during the same minute, the values are added together. Use this total metric to query longer timeframes without losing precision or performance. To view the applications that consume this usage, refer to the usage_by_app metric.
(DPS) Real-User Monitoring (mobile) billing usage by application
(Mobile) Session count without Session Replay split by application. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions of the same application end during the same minute, the values are added together. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Real-User Monitoring (mobile) with Session Replay billing usage
(Mobile) Session count with Session Replay. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions end during the same minute, the values are added together. Use this total metric to query longer timeframes without losing precision or performance. To view the applications that consume this usage, refer to the usage_by_app metric.
(DPS) Real-User Monitoring (mobile) with Session Replay billing usage by application
(Mobile) Session count with Session Replay split by application. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions of the same application end during the same minute, the values are added together. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
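The session billing rule described in these entries (session duration in hours, with sessions ending in the same minute summed into one data point) can be sketched directly; the input data here is made up.

```python
# Sketch of RUM session billing: each session contributes its duration in
# hours, and sessions ending in the same minute are summed into one data
# point. Input tuples are hypothetical.
from collections import defaultdict

def session_data_points(sessions):
    """sessions: list of (end_minute, duration_hours) tuples."""
    points = defaultdict(float)
    for end_minute, duration_hours in sessions:
        points[end_minute] += duration_hours
    return dict(points)

# A 3-hour session, plus two sessions ending in the same minute:
print(session_data_points([(10, 3.0), (25, 0.5), (25, 0.25)]))
# {10: 3.0, 25: 0.75}
```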
(DPS) Total Real-User Monitoring Property (web) billing usage
(Web) User action and session properties count. For details on how usage is calculated, refer to the documentation or builtin:billing.real_user_monitoring.web.property.usage_by_application . Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Real-User Monitoring Property (web) billing usage by application
(Web) User action and session properties count by application. The billed value is calculated based on the number of sessions reported in builtin:billing.real_user_monitoring.web.session.usage_by_app + builtin:billing.real_user_monitoring.web.session_with_replay.usage_by_app , plus the number of configured properties that exceed the included number of properties (free of charge) offered for a given application. Data points are only written for billed sessions. If the value is 0, you have available metric data points. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Real-User Monitoring (web) billing usage
(Web) Session count without Session Replay. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions end during the same minute, the values are added together. Use this total metric to query longer timeframes without losing precision or performance. To view the applications that consume this usage, refer to the usage_by_app metric.
(DPS) Real-User Monitoring (web) billing usage by application
(Web) Session count without Session Replay split by application. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions of the same application end during the same minute, the values are added together. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Real-User Monitoring (web) with Session Replay billing usage
(Web) Session count with Session Replay. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions end during the same minute, the values are added together. Use this total metric to query longer timeframes without losing precision or performance. To view the applications that consume this usage, refer to the usage_by_app metric.
(DPS) Real-User Monitoring (web) with Session Replay billing usage by application
(Web) Session count with Session Replay split by application. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions of the same application end during the same minute, the values are added together. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Runtime Application Protection billing usage
Total GiB-memory of hosts protected with Runtime Application Protection (Application Security), counted at 15-minute intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the monitored hosts, refer to the usage_per_host metric.
(DPS) Runtime Application Protection billing usage per host
GiB-memory per host protected with Runtime Application Protection (Application Security), counted at 15-minute intervals. For example, a host with 8 GiB of RAM monitored for 1 hour has 4 data points with a value of 2. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Runtime Vulnerability Analytics billing usage
Total GiB-memory of hosts protected with Runtime Vulnerability Analytics (Application Security), counted at 15-minute intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the monitored hosts, refer to the usage_per_host metric.
(DPS) Runtime Vulnerability Analytics billing usage per host
GiB-memory per host protected with Runtime Vulnerability Analytics (Application Security), counted at 15-minute intervals. For example, a host with 8 GiB of RAM monitored for 1 hour has 4 data points with a value of 2. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Serverless Functions Classic billing usage
The number of invocations of the serverless function aggregated over all monitored entities. The term "function invocations" is equivalent to "function requests" or "function executions". Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Serverless Functions Classic billing usage by monitored entity
The number of invocations of the serverless function split by monitored entity. The term "function invocations" is equivalent to "function requests" or "function executions". For details on which functions are invoked, refer to the usage_by_function metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Serverless Functions Classic billing usage by function
The number of invocations of the serverless function split by function. The term "function invocations" is equivalent to "function requests" or "function executions". For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
Actions
The number of billed actions consumed by browser monitors.
(DPS) Total Browser Monitor or Clickpath billing usage
The number of synthetic actions that trigger a web request, such as a page load, a navigation event, or an action that triggers an XHR or Fetch request. Scroll-downs, keystrokes, or clicks that don't trigger web requests aren't counted. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Browser Monitor or Clickpath billing usage per synthetic browser monitor
The number of synthetic actions that trigger a web request, such as a page load, a navigation event, or an action that triggers an XHR or Fetch request. Scroll-downs, keystrokes, or clicks that don't trigger web requests aren't counted. Actions are split by the synthetic browser monitors that caused them. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
Third-party results
The number of billed results consumed by third-party monitors.
(DPS) Total Third-Party Synthetic API Ingestion billing usage
The number of synthetic test results pushed into Dynatrace with Synthetic 3rd party API. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Third-Party Synthetic API Ingestion billing usage per external browser monitor
The number of synthetic test results pushed into Dynatrace with the Synthetic 3rd party API. The ingestions are split by the external synthetic browser monitors for which the results were ingested. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
Requests
The number of billed requests consumed by HTTP monitors.
(DPS) Total HTTP monitor billing usage
The number of HTTP requests performed during execution of a synthetic HTTP monitor. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) HTTP monitor billing usage per HTTP monitor
The number of HTTP requests performed, split by synthetic HTTP monitor. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
Number of running EC2 instances (AZ)
Number of starting VMs in region
Number of active VMs in region
Number of stopped VMs in region
Number of starting VMs in scale set
Number of active VMs in scale set
Number of stopped VMs in scale set
CF: Time to fetch cell states
The time that the auctioneer took to fetch state from all the cells when running its auction.
CF: App instance placement failures
The number of application instances that the auctioneer failed to place on Diego cells.
CF: App instance starts
The number of application instances that the auctioneer successfully placed on Diego cells.
CF: Task placement failures
The number of tasks that the auctioneer failed to place on Diego cells.
CF: 502 responses
The number of 502 responses, which indicate an invalid response produced by an application.
CF: Response latency
The average response time from the application to clients.
CF: 5xx responses
The number of responses that indicate repeatedly crashing apps or response issues from applications.
CF: Total requests
The number of all requests representing the overall traffic flow.
CPU usage
Disk allocation
Disk capacity
Memory resident
Memory usage
Network incoming bytes rate
Network outgoing bytes rate
Host CPU usage %
Host disk usage rate
Host disk commands aborted
Host disk queue latency
Host disk read IOPS
Host disk read latency
Host disk read rate
Host disk write IOPS
Host disk write latency
Host disk write rate
Host compression rate
Host memory consumed
Host decompression rate
Host swap in rate
Host swap out rate
Host network data received rate
Host network data transmitted rate
Data received rate
Data transmitted rate
Packets received dropped
Packets transmitted dropped
Number of VMs
Number of VMs powered-off
Number of VMs suspended
Host availability %
VM CPU ready %
VM swap wait
VM CPU usage MHz
VM CPU usage %
VM disk usage rate
VM memory active
VM compression rate
VM memory consumed
VM decompression rate
VM swap in rate
VM swap out rate
VM network data received rate
VM network data transmitted rate
Containers: CPU limit, mCores
CPU resource limit per container in millicores.
Containers: CPU logical cores
Number of logical CPU cores of the host.
Containers: CPU shares
Number of CPU shares allocated per container.
Containers: CPU throttling, mCores
CPU throttling per container in millicores.
Containers: CPU throttled time, ns/min
Total amount of time a container has been throttled, in nanoseconds per minute.
Containers: CPU usage, mCores
CPU usage per container in millicores.
Containers: CPU usage, % of limit
Percent CPU usage per container relative to CPU resource limit. Logical cores are used if CPU limit isn't set.
Containers: CPU system usage, mCores
CPU system usage per container in millicores.
Containers: CPU system usage time, ns/min
Used system time per container in nanoseconds per minute.
Containers: CPU usage time, ns/min
Sum of used system and user time per container in nanoseconds per minute.
Containers: CPU user usage, mCores
CPU user usage per container in millicores.
Containers: CPU user usage time, ns/min
Used user time per container in nanoseconds per minute.
Containers: Memory cache, bytes
Page cache memory per container in bytes.
Containers: Memory limit, bytes
Memory limit per container in bytes. If no limit is set, this is an empty value.
Containers: Memory limit, % of physical memory
Percent memory limit per container relative to total physical memory. If no limit is set, this is an empty value.
Containers: Memory - out of memory kills
Number of out of memory kills for a container.
Containers: Memory - total physical memory, bytes
Total physical memory on the host in bytes.
Containers: Memory usage, bytes
Resident set size (Linux) or private working set size (Windows) per container in bytes.
Containers: Memory usage, % of limit
Resident set size (Linux) or private working set size (Windows) per container in percent relative to container memory limit. If no limit is set, this equals total physical memory.
Container bytes received
Container bytes transmitted
Container cpu usage
Devicemapper data space available
Devicemapper data space used
Devicemapper meta-data space available
Devicemapper meta-data space used
Memory percent
Container memory usage
Number of containers launched
Number of containers running
Number of containers running
Number of containers terminated
Container throttled time
Dashboard view count
Host availability
Host availability state metric, reported in 1-minute intervals
z/OS General CPU usage
The percent of the general-purpose central processor (GCP) used
z/OS Rolling 4 hour MSU average
The rolling 4-hour average of consumed million service units (MSU) on this LPAR
z/OS MSU capacity
The overall capacity of million service units (MSU) on this LPAR
z/OS zIIP eligible time
The zIIP-eligible time spent on the general-purpose central processor (GCP) after process start, per minute
AIX Entitlement configured
Capacity entitlement is the number of virtual processors assigned to the AIX partition, measured in fractions of a processor (0.1 or 0.01). For more information about entitlement, see Assigning the appropriate processor entitled capacity in the official IBM documentation.
AIX Entitlement used
Percentage of entitlement used. Capacity entitlement is the number of virtual cores assigned to the AIX partition. For more information about entitlement, see Assigning the appropriate processor entitled capacity in the official IBM documentation.
CPU idle
Average CPU time when the CPU didn't have anything to do
CPU I/O wait
Percentage of time when the CPU was idle while the system had an outstanding I/O request. Not available on Windows.
System load
The average number of processes that are being executed by CPU or waiting to be executed by CPU over the last minute
System load15m
The average number of processes that are being executed by CPU or waiting to be executed by CPU over the last 15 minutes
System load5m
The average number of processes that are being executed by CPU or waiting to be executed by CPU over the last 5 minutes
CPU other
Average CPU time spent on other tasks, such as servicing interrupt requests (IRQ) or running virtual machines under the control of the host's kernel (that is, the host acts as a hypervisor for VMs). Available only for Linux hosts
AIX Physical consumed
Total CPUs consumed by the AIX partition
CPU steal
Average CPU time when a virtual machine waits to get CPU cycles from the hypervisor. In a virtual environment, CPU cycles are shared across virtual machines on the hypervisor server. If your virtualized host displays high CPU steal, CPU cycles are being taken away from your virtual machine to serve other purposes, which may indicate an overloaded hypervisor. Available only for Linux hosts
CPU system
Average CPU time when CPU was running in kernel mode
CPU usage %
Percentage of CPU time when CPU was utilized. A value close to 100% means most host processing resources are in use, and host CPUs can’t handle additional work
CPU user
Average CPU time when CPU was running in user mode
Number of DNS errors by type
The number of DNS errors by type
Number of orphaned DNS responses
The number of orphaned DNS responses on the host
Number of DNS queries
The number of DNS queries on the host
DNS query time sum
The time of all DNS queries on the host
DNS query time
The average DNS query time, calculated as the DNS query time sum divided by the number of DNS queries for each host and DNS server pair.
DNS query time by DNS server
The weighted average DNS query time by DNS server IP, calculated as the DNS query time sum divided by the number of DNS queries. The result is weighted by the number of requests from each host.
DNS query time on host
The weighted average DNS query time on a host, calculated as the DNS query time sum divided by the number of DNS queries on the host. The result is weighted by the number of requests to each DNS server.
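The weighting described in the DNS query time entries falls out of dividing summed times by summed counts per pair, so pairs with more requests contribute proportionally more. A sketch with made-up numbers:

```python
# Sketch of the weighted average above: per (host, DNS server) pair there is
# a query-time sum and a query count; dividing total time by total count
# weights pairs by their request volume. Sample values are hypothetical.

def weighted_avg_query_time(pairs):
    """pairs: list of (query_time_sum_ms, query_count) per host/server pair."""
    total_time = sum(t for t, _ in pairs)
    total_count = sum(c for _, c in pairs)
    return total_time / total_count if total_count else 0.0

# One host talking to two DNS servers: 100 queries totaling 200 ms,
# and 10 slower queries also totaling 200 ms. The fast server dominates:
print(weighted_avg_query_time([(200, 100), (200, 10)]))
```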
Disk throughput read
File system read throughput in bits per second
Disk throughput write
File system write throughput in bits per second
Disk available
Amount of free space available to the user in the file system. On Linux and AIX, this is the free space available to unprivileged users; it doesn't include the free space reserved for root.
Disk read bytes per second
Speed of read from file system in bytes per second
Disk write bytes per second
Speed of write to file system in bytes per second
Disk available %
Percentage of free space available to the user in the file system. On Linux and AIX, this is the percentage of free space available to unprivileged users; it doesn't include the free space reserved for root.
Inodes available %
Percentage of free inodes available for unprivileged user in file system. Metric not available on Windows.
Inodes total
Total number of inodes available for unprivileged users in the file system. Metric not available on Windows.
Disk average queue length
Average number of read and write operations in disk queue
Disk read operations per second
Number of read operations from file system per second
Disk read time
Average time of read from file system. It shows average disk latency during read.
Disk used
Amount of used space in file system
Disk used %
Percentage of used space in file system
Disk utilization time
Percent of time spent on disk I/O operations
Disk write operations per second
Number of write operations to file system per second
Disk write time
Average time of write to file system. It shows average disk latency during write.
File descriptors max
Maximum amount of file descriptors for use
File descriptors used
Amount of file descriptors used
AIX Kernel threads blocked
Length of the swap queue. The swap queue contains threads that are ready to run but have been swapped out in favor of the currently running threads
AIX Kernel threads I/O event wait
Number of threads waiting for file system direct I/O (cio), plus the number of processes that are asleep waiting for buffered I/O
AIX Kernel threads I/O message wait
Number of threads that are sleeping and waiting for raw I/O operations at a particular time. Raw I/O operation allows applications to direct write to the Logical Volume Manager (LVM) layer
AIX Kernel threads runnable
Number of runnable threads (running or waiting for run time). The average number of runnable threads is shown in the first column of the vmstat command output
Memory available
The amount of memory (RAM) available on the host for allocation to new or existing processes. Available memory is an estimate of how much memory can be used without swapping.
Memory available %
The percentage of memory (RAM) available on the host for allocation to new or existing processes. Available memory is an estimate of how much memory can be used without swapping; this metric expresses it as a percentage.
Page faults per second
The number of page faults per second on the monitored host. This value includes both soft faults and hard faults.
Swap available
The amount of swap memory or swap space (also known as paging, which is the on-disk component of the virtual memory system) available.
Swap total
The total amount of swap memory or swap space (also known as paging, which is the on-disk component of the virtual memory system) available for use.
Swap used
The amount of swap memory or swap space (also known as paging, which is the on-disk component of the virtual memory system) used.
Kernel memory
The memory used by the system kernel. It includes memory used by core components of the OS along with any device drivers. Typically, this number is very small.
Memory reclaimable
Memory that is in use but can be reclaimed when needed. Reclaimable memory is calculated as available memory (an estimate of how much memory can be used without swapping) minus free memory (the amount of memory currently not used for anything). For more information on reclaimable memory, see this blog post.
Memory total
The amount of memory (RAM) installed on the system.
Memory used %
Shows the percentage of memory currently in use. OneAgent calculates used memory as used = total – available, so the used-memory value displayed in Dynatrace analysis views does not equal the used-memory value displayed by system tools. System tools report used memory the way they do for historical reasons, and that method of calculation is not really representative of how the Linux kernel manages memory in modern systems; the difference between the two measurements can in fact be quite significant. Note: calculated as 100% minus "Memory available %".
Memory used
The amount of memory currently in use. OneAgent calculates used memory as used = total – available, so the used-memory value displayed in Dynatrace analysis views does not equal the used-memory value displayed by system tools. System tools report used memory the way they do for historical reasons, and that method of calculation is not really representative of how the Linux kernel manages memory in modern systems; the difference between the two measurements can in fact be quite significant.
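The arithmetic behind these host memory metrics can be sketched as follows (a minimal illustration of the formulas stated above; the function and variable names are made up for the example):

```python
def memory_metrics(total, available, free):
    """Derive OneAgent-style memory metrics (all sizes in the same unit).

    used        = total - available        (Memory used)
    used_pct    = 100 - available_pct      (Memory used %)
    reclaimable = available - free         (Memory reclaimable)
    """
    used = total - available
    available_pct = 100.0 * available / total
    used_pct = 100.0 - available_pct          # equivalent to 100 * used / total
    reclaimable = available - free
    return used, used_pct, reclaimable

# Example: a 16 GiB host with 6 GiB available and 2 GiB completely free
used, used_pct, reclaimable = memory_metrics(16, 6, 2)
```

Because "available" includes reclaimable caches, the used value shown here is typically smaller than the "used" figure printed by system tools.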
NIC packets dropped
Network interface packets dropped on the host
NIC received packets dropped
Network interface received packets dropped on the host
NIC sent packets dropped
Network interface sent packets dropped on the host
NIC packet errors
Network interface packet errors on the host
NIC received packet errors
Network interface received packet errors on the host
NIC sent packet errors
Network interface sent packet errors on the host
NIC packets received
Network interface packets received on the host
NIC packets sent
Network interface packets sent on the host
NIC bytes received
Network interface bytes received on the host
NIC bytes sent on host
Network interface bytes sent on the host
NIC connectivity
Network interface connectivity on the host
NIC receive link utilization
Network interface receive link utilization on the host
NIC transmit link utilization
Network interface transmit link utilization on the host
NIC retransmission
Network interface retransmission on the host
NIC received packets retransmission
Network interface retransmission for received packets on the host
NIC sent packets retransmission
Network interface retransmission for sent packets on the host
Traffic
Network traffic on the host
Traffic in
Traffic incoming at the host
Traffic out
Traffic outgoing from the host
Host retransmission base received
Host aggregated process retransmission base received per second
Host retransmission base sent
Host aggregated process retransmission base sent per second
Host retransmitted packets received
Host aggregated process retransmitted packets received per second
Host retransmitted packets sent
Host aggregated process retransmitted packets sent per second
Localhost session reset received
Host aggregated session reset received per second on localhost
Localhost session timeout received
Host aggregated session timeout received per second on localhost
Localhost new session received
Host aggregated new session received per second on localhost
Host session reset received
Host aggregated process session reset received per second
Host session timeout received
Host aggregated process session timeout received per second
Host new session received
Host aggregated process new session received per second
Host bytes received
Host aggregated process bytes received per second
Host bytes sent
Host aggregated process bytes sent per second
OS Service availability
This metric provides the status of the OS service. If the OS service is running, the OS module reports "1" as the value of the metric; in any other case, the metric has a value of "0". Note that this metric provides data only from Classic Windows services monitoring (supported only on Windows), which has been replaced by the new OS Services monitoring. To learn more, see Classic Windows services monitoring.
OS Process count
This metric shows the average number of processes running on the host over one minute. The reported number of processes is based on processes detected by the OS module, read in 10-second cycles.
PGI count
This metric shows the number of PGIs created by the OS module every minute. It includes every PGI, even those that are considered unimportant and are not reported to Dynatrace.
Reported PGI count
This metric shows the number of PGIs created and reported by the OS module every minute. It includes only PGIs that are considered important and reported to Dynatrace. Important PGIs are those for which OneAgent recognizes the technology, or that have open network ports, generate significant resource usage, or are created via declarative process grouping rules. To learn what makes a process important, see Which are the most important processes?
z/OS General CPU time
Total General CPU time per minute
z/OS Consumed MSUs per SMF interval (SMF70EDT)
Number of consumed MSUs per SMF interval (SMF70EDT)
z/OS zIIP time
Total zIIP time per minute
z/OS zIIP usage
Actively used zIIP as a percentage of available zIIP
Host availability %
Host availability %
Host uptime
Time since last host boot up. Requires OneAgent 1.259+. The metric is not supported for application-only OneAgent deployments.
Kubernetes: Cluster readyz status
Current status of the Kubernetes API server reported by the /readyz endpoint (0 or 1).
Kubernetes: Container - out of memory (OOM) kill count
This metric measures the number of out-of-memory (OOM) kills. The most detailed level of aggregation is container. The value corresponds to the status 'OOMKilled' of a container in the pod resource's container status. The metric is only written if there was at least one container OOM kill.
Kubernetes: Container - restart count
This metric measures the number of container restarts. The most detailed level of aggregation is container. The value corresponds to the delta of the 'restartCount' defined in the pod resource's container status. The metric is only written if there was at least one container restart.
Kubernetes: Node conditions
This metric describes the status of a Kubernetes node. The most detailed level of aggregation is node.
Kubernetes: Node - CPU allocatable
This metric measures the total allocatable cpu. The most detailed level of aggregation is node. The value corresponds to the allocatable cpu of a node.
Kubernetes: Container - CPU throttled (by node)
This metric measures the total CPU throttling by container. The most detailed level of aggregation is node.
Kubernetes: Container - CPU usage (by node)
This metric measures the total CPU consumed (user usage + system usage) by container. The most detailed level of aggregation is node.
Kubernetes: Pod - CPU limits (by node)
This metric measures the cpu limits. The most detailed level of aggregation is node. The value is the sum of the cpu limits of all app containers of a pod.
Kubernetes: Pod - memory limits (by node)
This metric measures the memory limits. The most detailed level of aggregation is node. The value is the sum of the memory limits of all app containers of a pod.
Kubernetes: Node - memory allocatable
This metric measures the total allocatable memory. The most detailed level of aggregation is node. The value corresponds to the allocatable memory of a node.
Kubernetes: Container - Working set memory (by node)
This metric measures the current working set memory (memory that cannot be reclaimed under pressure) by container. The OOM Killer is invoked if the working set exceeds the limit. The most detailed level of aggregation is node.
Kubernetes: Pod count (by node)
This metric measures the number of pods. The most detailed level of aggregation is node. The value corresponds to the count of all pods.
Kubernetes: Node - pod allocatable count
This metric measures the total number of allocatable pods. The most detailed level of aggregation is node. The value corresponds to the allocatable pods of a node.
Kubernetes: Pod - CPU requests (by node)
This metric measures the cpu requests. The most detailed level of aggregation is node. The value is the sum of the cpu requests of all app containers of a pod.
Kubernetes: Pod - memory requests (by node)
This metric measures the memory requests. The most detailed level of aggregation is node. The value is the sum of the memory requests of all app containers of a pod.
Kubernetes: PVC - available
This metric measures the number of available bytes in the volume. The most detailed level of aggregation is persistent volume claim.
Kubernetes: PVC - capacity
This metric measures the capacity in bytes of the volume. The most detailed level of aggregation is persistent volume claim.
Kubernetes: PVC - used
This metric measures the number of used bytes in the volume. The most detailed level of aggregation is persistent volume claim.
Kubernetes: Resource quota - CPU limits
This metric measures the cpu limit quota. The most detailed level of aggregation is resource quota. The value corresponds to the cpu limits of a resource quota.
Kubernetes: Resource quota - CPU limits used
This metric measures the used cpu limit quota. The most detailed level of aggregation is resource quota. The value corresponds to the used cpu limits of a resource quota.
Kubernetes: Resource quota - memory limits
This metric measures the memory limit quota. The most detailed level of aggregation is resource quota. The value corresponds to the memory limits of a resource quota.
Kubernetes: Resource quota - memory limits used
This metric measures the used memory limits quota. The most detailed level of aggregation is resource quota. The value corresponds to the used memory limits of a resource quota.
Kubernetes: Resource quota - pod count
This metric measures the pods quota. The most detailed level of aggregation is resource quota. The value corresponds to the pods of a resource quota.
Kubernetes: Resource quota - pod used count
This metric measures the used pods quota. The most detailed level of aggregation is resource quota. The value corresponds to the used pods of a resource quota.
Kubernetes: Resource quota - CPU requests
This metric measures the cpu requests quota. The most detailed level of aggregation is resource quota. The value corresponds to the cpu requests of a resource quota.
Kubernetes: Resource quota - CPU requests used
This metric measures the used cpu requests quota. The most detailed level of aggregation is resource quota. The value corresponds to the used cpu requests of a resource quota.
Kubernetes: Resource quota - memory requests
This metric measures the memory requests quota. The most detailed level of aggregation is resource quota. The value corresponds to the memory requests of a resource quota.
Kubernetes: Resource quota - memory requests used
This metric measures the used memory requests quota. The most detailed level of aggregation is resource quota. The value corresponds to the used memory requests of a resource quota.
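Each resource-quota metric above comes as a configured/used pair (limits and limits used, requests and requests used, pods and pods used), so the remaining headroom is simply their difference. A small sketch with illustrative values:

```python
def quota_headroom(configured, used):
    """Remaining capacity of a Kubernetes resource quota,
    e.g. 'CPU limits' minus 'CPU limits used'."""
    if used > configured:
        raise ValueError("used quota cannot exceed the configured quota")
    return configured - used

# e.g. a pod quota of 50 with 42 pods currently counted against it
remaining_pods = quota_headroom(50, 42)
```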
Kubernetes: Workload conditions
This metric describes the status of a Kubernetes workload. The most detailed level of aggregation is workload.
Kubernetes: Pod - desired container count
This metric measures the number of desired containers. The most detailed level of aggregation is workload. The value is the count of all containers in the pod's specification.
Kubernetes: Container - CPU throttled (by workload)
This metric measures the total CPU throttling by container. The most detailed level of aggregation is workload.
Kubernetes: Container - CPU usage (by workload)
This metric measures the total CPU consumed (user usage + system usage) by container. The most detailed level of aggregation is workload.
Kubernetes: Pod - CPU limits (by workload)
This metric measures the cpu limits. The most detailed level of aggregation is workload. The value is the sum of the cpu limits of all app containers of a pod.
Kubernetes: Pod - memory limits (by workload)
This metric measures the memory limits. The most detailed level of aggregation is workload. The value is the sum of the memory limits of all app containers of a pod.
[Deprecated] Kubernetes: Container - Memory RSS (by workload)
This metric measures the true resident set size (RSS) by container. RSS is the amount of physical memory used by the container's cgroup - either total_rss + total_mapped_file (cgroup v1) or anon + file_mapped (cgroup v2). The most detailed level of aggregation is workload. Deprecated - use builtin:kubernetes.workload.memory_working_set instead.
Kubernetes: Container - Working set memory (by workload)
This metric measures the current working set memory (memory that cannot be reclaimed under pressure) by container. The OOM Killer is invoked if the working set exceeds the limit. The most detailed level of aggregation is workload.
Kubernetes: Workload - desired pod count
This metric measures the number of desired pods. The most detailed level of aggregation is workload. The value corresponds to, for example, the 'replicas' defined in a deployment resource or the 'desiredNumberScheduled' in a daemon set resource's status.
Kubernetes: Pod - CPU requests (by workload)
This metric measures the cpu requests. The most detailed level of aggregation is workload. The value is the sum of the cpu requests of all app containers of a pod.
Kubernetes: Pod - memory requests (by workload)
This metric measures the memory requests. The most detailed level of aggregation is workload. The value is the sum of the memory requests of all app containers of a pod.
Kubernetes: Container count
This metric measures the number of containers. The most detailed level of aggregation is workload. The metric counts the number of all containers.
Kubernetes: Event count
This metric counts Kubernetes events. The most detailed level of aggregation is the event reason. The value corresponds to the count of events returned by the Kubernetes events endpoint. This metric depends on Kubernetes event monitoring. It will not show any datapoints for the period in which event monitoring is deactivated.
Kubernetes: Node count
This metric measures the number of nodes. The most detailed level of aggregation is cluster. The value is the count of all nodes.
Kubernetes: Pod count (by workload)
This metric measures the number of pods. The most detailed level of aggregation is workload. The value corresponds to the count of all pods.
Kubernetes: Workload count
This metric measures the number of workloads. The most detailed level of aggregation is namespace. The value corresponds to the count of all workloads.
Process availability
Process availability state metric reported in 1-minute intervals
Process availability %
This metric provides the percentage of time during which a process is available. It is sent once per minute with a 10-second granularity: six samples are aggregated every minute. If the process is available for the whole minute, the value is 100%; a value of 0% indicates that the process is not running. It has a "Process" dimension (dt.entity.process_group_instance).
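The aggregation described above (six 10-second samples per minute) can be illustrated like this; a sketch of the arithmetic, not OneAgent's actual implementation:

```python
def availability_percent(samples):
    """Percentage of 10-second samples in which the process was observed running.

    `samples` holds the six boolean observations taken within one minute.
    """
    if len(samples) != 6:
        raise ValueError("expected six 10-second samples per minute")
    return 100.0 * sum(samples) / len(samples)

# Process was up for the first 50 seconds of the minute, then stopped
minute = [True, True, True, True, True, False]
value = availability_percent(minute)
```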
Process traffic in
This metric provides the size of the incoming traffic of a process. It helps to identify processes generating high network traffic on a host. The result is expressed in kilobytes. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "process owner" (process.owner), "process executable name" (process.executable.name), "process executable path" (process.executable.path), "process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the time span for which this metric is collected is restricted by feature limits. To learn more, see Process instance snapshots.
Process traffic out
This metric provides the size of the outgoing traffic of a process. It helps to identify processes generating high network traffic on a host. The result is expressed in kilobytes. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "process owner" (process.owner), "process executable name" (process.executable.name), "process executable path" (process.executable.path), "process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the time span for which this metric is collected is restricted by feature limits. To learn more, see Process instance snapshots.
Process average CPU
This metric provides the percentage of the CPU usage of a process. The metric value is the sum of the CPU time used by every process worker divided by the total available CPU time, expressed as a percentage. A value of 100% indicates that the process uses all available CPU resources of the host. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "process owner" (process.owner), "process executable name" (process.executable.name), "process executable path" (process.executable.path), "process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the time span for which this metric is collected is restricted by feature limits. To learn more, see Process instance snapshots.
Process memory
This metric provides the memory usage of a process. It helps to identify processes with high memory consumption and memory leaks. The result is expressed in bytes. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "process owner" (process.owner), "process executable name" (process.executable.name), "process executable path" (process.executable.path), "process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the time span for which this metric is collected is restricted by feature limits. To learn more, see Process instance snapshots.
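The dimensions listed for these snapshot metrics can be used in Metrics API v2 selector transformations such as `:filter(...)` and `:splitBy(...)`. A sketch of composing such a selector as a string; the metric key `builtin:process.cpu.usage` is a placeholder assumption here, so substitute the actual key returned by the GET metrics call in your environment:

```python
def build_selector(metric_key, exe_name, split_dim):
    """Compose a Dynatrace metric selector that filters on the process
    executable name and splits the resulting series by one dimension."""
    return (
        f'{metric_key}'
        f':filter(eq("process.executable.name","{exe_name}"))'
        f':splitBy("{split_dim}")'
        f':avg'
    )

selector = build_selector(
    "builtin:process.cpu.usage",          # placeholder metric key (assumption)
    "java",
    "dt.entity.process_group_instance",
)
```

The resulting string is what you would pass as the `metricSelector` query parameter of the GET metrics query endpoint.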
Incoming messages
The number of incoming messages on the queue or topic
Outgoing messages
The number of outgoing messages from the queue or topic
New attacks
Number of attacks that were recently created. The metric supports the management zone selector.
New Muted Security Problems (global)
Number of vulnerabilities that were recently muted. The metric value is independent of any configured management zone (and thus global).
New Open Security Problems (global)
Number of vulnerabilities that were recently created. The metric value is independent of any configured management zone (and thus global).
New Open Security Problems (split by Management Zone)
Number of vulnerabilities that were recently created. The metric value is split by management zone.
Open Security Problems (global)
Number of currently open vulnerabilities seen within the last minute. The metric value is independent of any configured management zone (and thus global).
Open Security Problems (split by Management Zone)
Number of currently open vulnerabilities seen within the last minute. The metric value is split by management zone.
New Resolved Security Problems (global)
Number of vulnerabilities that were recently resolved. The metric value is independent of any configured management zone (and thus global).
Vulnerabilities - affected process groups count (global)
Total number of unique affected process groups across all open vulnerabilities per technology. The metric value is independent of any configured management zone (and thus global).
Vulnerabilities - affected not-muted process groups count (global)
Total number of unique affected process groups across all open, unmuted vulnerabilities per technology. The metric value is independent of any configured management zone (and thus global).
Vulnerabilities - affected entities count
Total number of unique affected entities across all open vulnerabilities. The metric supports the management zone selector.
CPU time
CPU time consumed by a particular request. To learn how Dynatrace calculates service timings, see Service analysis timings.
Service CPU time
CPU time consumed by a particular service. To learn how Dynatrace calculates service timings, see Service analysis timings.
Failed connections
Total number of unsuccessful database connection attempts made by this service. To learn about database analysis, see Analyze database services.
Connection failure rate
Rate of unsuccessful connection attempts compared to all connection attempts. To learn about database analysis, see Analyze database services.
Successful connections
Total number of database connections successfully established by this service. To learn about database analysis, see Analyze database services.
Connection success rate
Rate of successful connection attempts compared to all connection attempts. To learn about database analysis, see Analyze database services.
Total number of connections
Total number of database connection attempts made by this service. To learn about database analysis, see Analyze database services.
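The relationship between these connection metrics (rates derived from successful versus all attempts) can be sketched as follows; the function name is invented for the illustration:

```python
def connection_rates(successful, failed):
    """Derive connection rates from the raw counters.

    total        = successful + failed       (Total number of connections)
    success rate = successful / total * 100  (Connection success rate)
    failure rate = failed / total * 100      (Connection failure rate)
    """
    total = successful + failed
    if total == 0:
        return 0.0, 0.0
    return 100.0 * successful / total, 100.0 * failed / total

success_rate, failure_rate = connection_rates(successful=95, failed=5)
```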
Number of client side errors
Failed requests for a service measured on the client side. To learn about failure detection, see Configure service failure detection.
Failure rate (client side errors)
Rate of calls with client side errors compared to all calls
Number of HTTP 5xx errors
HTTP requests with a status code between 500 and 599 for a given key request measured on the server side. To learn about failure detection, see Configure service failure detection.
Failure rate (HTTP 5xx errors)
Rate of calls with HTTP 5xx errors compared to all calls
Number of HTTP 4xx errors
HTTP requests with a status code between 400 and 499 for a given key request measured on the server side. To learn about failure detection, see Configure service failure detection.
Failure rate (HTTP 4xx errors)
Rate of calls with HTTP 4xx errors compared to all calls
Number of server side errors
Failed requests for a service measured on the server side. To learn about failure detection, see Configure service failure detection.
Failure rate (server side errors)
Rate of calls with server side errors compared to all calls
Number of any errors
Failed requests for a service measured on the server side or client side. To learn about failure detection, see Configure service failure detection.
Failure rate (any errors)
Rate of calls with any errors compared to all calls
Request count - client
Number of requests for a given key request - measured on the client side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Request count - server
Number of requests for a given key request - measured on the server side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Request count
Number of requests for a given key request. This metric is written for each key request. To learn more about key requests, see Monitor key request.
CPU per request
CPU time for a given key request. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Service key request CPU time
CPU time for a given key request. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Number of client side errors
Failed requests for a given key request measured on the client side. To learn about failure detection, see Configure service failure detection.
Failure rate (client side errors)
Rate of calls with client side errors compared to all calls
Number of HTTP 5xx errors
HTTP requests with a status code between 500 and 599 for a given key request. To learn about failure detection, see Configure service failure detection.
Failure rate (HTTP 5xx errors)
Rate of calls with HTTP 5xx errors compared to all calls
Number of HTTP 4xx errors
HTTP requests with a status code between 400 and 499 for a given key request. To learn about failure detection, see Configure service failure detection.
Failure rate (HTTP 4xx errors)
Rate of calls with HTTP 4xx errors compared to all calls
Number of server side errors
Failed requests for a given key request measured on the server side. To learn about failure detection, see Configure service failure detection.
Failure rate (server side errors)
Rate of calls with server side errors compared to all calls
Client side response time
Response time for a given key request - measured on the client side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Server side response time
Response time for a given key request - measured on the server side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Key request response time
Response time for a given key request. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Success rate (server side)
Number of calls to databases
Time spent in database calls
IO time
Lock time
Number of calls to other services
Time spent in calls to other services
Total processing time
Total processing time for a given key request. This time includes potential further asynchronous processing. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Wait time
Unified service mesh request count
Number of service mesh requests received by a given service. To learn how Dynatrace detects services, see Service detection and naming.
Unified service mesh request count (by service)
Number of service mesh requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects services, see Service detection and naming.
Unified service mesh request failure count
Number of failed service mesh requests received by a given service. To learn how Dynatrace detects service failures, see Configure service failure detection.
Unified service mesh request failure count (by service)
Number of failed service mesh requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects service failures, see Configure service failure detection.
Unified service mesh request response time
Response time of a service mesh ingress measured in microseconds. To learn how Dynatrace calculates service timings, see Service analysis timings.
Unified service mesh request response time (by service)
Response time of a service mesh ingress measured in microseconds. Reduced dimensions for faster charting. To learn how Dynatrace calculates service timings, see Service analysis timings.
Unified service request count (by service, endpoint)
Number of requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects and analyzes services, see Services.
Unified service request count (by service)
Number of requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects and analyzes services, see Services.
Unified service failure count
Number of failed requests received by a given service. To learn how Dynatrace detects and analyzes services, see Services.
Unified service failure count (by service, endpoint)
Number of failed requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects and analyzes services, see Services.
Unified service failure count (by service)
Number of failed requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects and analyzes services, see Services.
Unified service request response time (by service, endpoint)
Response time of a service measured in microseconds on the server side. Response time is the time until a response is sent to a calling application, process or other service. It does not include further asynchronous processing. Reduced dimensions for faster charting. To learn how Dynatrace calculates service timings, see Service analysis timings.
Unified service request response time (by service)
Response time of a service measured in microseconds on the server side. Response time is the time until a response is sent to a calling application, process or other service. It does not include further asynchronous processing. Reduced dimensions for faster charting. To learn how Dynatrace calculates service timings, see Service analysis timings.
Request count - client
Number of requests received by a given service - measured on the client side. This metric supports service splitting. To learn how Dynatrace detects and analyzes services, see Services.
Request count - server
Number of requests received by a given service - measured on the server side. This metric supports service splitting. To learn how Dynatrace detects and analyzes services, see Services.
Request count
Number of requests received by a given service. This metric supports service splitting. To learn how Dynatrace detects and analyzes services, see Services.
Client side response time
Response time for a given key request per request type - measured on the client side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Server side response time
Response time for a given key request per request type - measured on the server side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Client side response time
Server side response time
Response time
Time consumed by a particular service until a response is sent back to the calling application, process, service, etc. To learn how Dynatrace calculates service timings, see Service analysis timings.
Success rate (server side)
Total processing time
Total time consumed by a particular request type, including asynchronous processing. Asynchronous processing can still occur after the response is sent. To learn how Dynatrace calculates service timings, see Service analysis timings.
Total processing time
Total time consumed by a particular service, including asynchronous processing. Asynchronous processing can still occur after the response is sent. To learn how Dynatrace calculates service timings, see Service analysis timings.
Number of calls to databases
Time spent in database calls
IO time
Lock time
Number of calls to other services
Time spent in calls to other services
Wait time
Action duration - custom action [browser monitor]
The duration of custom actions; split by monitor.
Action duration - custom action (by geolocation) [browser monitor]
The duration of custom actions; split by monitor, geolocation.
Action duration - load action [browser monitor]
The duration of load actions; split by monitor.
Action duration - load action (by geolocation) [browser monitor]
The duration of load actions; split by monitor, geolocation.
Action duration - XHR action [browser monitor]
The duration of XHR actions; split by monitor.
Action duration - XHR action (by geolocation) [browser monitor]
The duration of XHR actions; split by monitor, geolocation.
Availability rate (by location) [browser monitor]
The availability rate of browser monitors.
Availability rate - excl. maintenance windows (by location) [browser monitor]
The availability rate of browser monitors excluding maintenance windows.
Cumulative layout shift - load action [browser monitor]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions; split by monitor.
Cumulative layout shift - load action (by geolocation) [browser monitor]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions; split by monitor, geolocation.
DOM interactive - load action [browser monitor]
The time taken until a page's status is set to "interactive" and it's ready to receive input. Calculated for load actions; split by monitor.
DOM interactive - load action (by geolocation) [browser monitor]
The time taken until a page's status is set to "interactive" and it's ready to receive input. Calculated for load actions; split by monitor, geolocation.
Error details (by error code) [browser monitor]
The number of detected errors; split by monitor, error code.
Error details (by geolocation, error code) [browser monitor]
The number of detected errors; split by monitor, geolocation, error code.
Action duration - custom action (by event) [browser monitor]
The duration of custom actions; split by event.
Action duration - custom action (by event, geolocation) [browser monitor]
The duration of custom actions; split by event, geolocation.
Action duration - load action (by event) [browser monitor]
The duration of load actions; split by event.
Action duration - load action (by event, geolocation) [browser monitor]
The duration of load actions; split by event, geolocation.
Action duration - XHR action (by event) [browser monitor]
The duration of XHR actions; split by event.
Action duration - XHR action (by event, geolocation) [browser monitor]
The duration of XHR actions; split by event, geolocation.
Cumulative layout shift - load action (by event) [browser monitor]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions; split by event.
Cumulative layout shift - load action (by event, geolocation) [browser monitor]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions; split by event, geolocation.
DOM interactive - load action (by event) [browser monitor]
The time taken until a page's status is set to "interactive" and it's ready to receive input. Calculated for load actions; split by event.
DOM interactive - load action (by event, geolocation) [browser monitor]
The time taken until a page's status is set to "interactive" and it's ready to receive input. Calculated for load actions; split by event, geolocation.
Error details (by event, error code) [browser monitor]
The number of detected errors; split by event, error code.
Error details (by event, geolocation, error code) [browser monitor]
The number of detected errors; split by event, geolocation, error code.
Failed events count (by event) [browser monitor]
The number of failed monitor events; split by event.
Failed events count (by event, geolocation) [browser monitor]
The number of failed monitor events; split by event, geolocation.
Time to first byte - load action (by event) [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for load actions; split by event.
Time to first byte - load action (by event, geolocation) [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for load actions; split by event, geolocation.
Time to first byte - XHR action (by event) [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for XHR actions; split by event.
Time to first byte - XHR action (by event, geolocation) [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for XHR actions; split by event, geolocation.
Largest contentful paint - load action (by event) [browser monitor]
The time taken to render the largest element in the viewport. Calculated for load actions; split by event.
Largest contentful paint - load action (by event, geolocation) [browser monitor]
The time taken to render the largest element in the viewport. Calculated for load actions; split by event, geolocation.
Load event end - load action (by event) [browser monitor]
The time taken to complete the load event of a page. Calculated for load actions; split by event.
Load event end - load action (by event, geolocation) [browser monitor]
The time taken to complete the load event of a page. Calculated for load actions; split by event, geolocation.
Load event start - load action (by event) [browser monitor]
The time taken to begin the load event of a page. Calculated for load actions; split by event.
Load event start - load action (by event, geolocation) [browser monitor]
The time taken to begin the load event of a page. Calculated for load actions; split by event, geolocation.
Network contribution - load action (by event) [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for load actions; split by event.
Network contribution - load action (by event, geolocation) [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for load actions; split by event, geolocation.
Network contribution - XHR action (by event) [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for XHR actions; split by event.
Network contribution - XHR action (by event, geolocation) [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for XHR actions; split by event, geolocation.
Response end - load action (by event) [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for load actions; split by event.
Response end - load action (by event, geolocation) [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for load actions; split by event, geolocation.
Response end - XHR action (by event) [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for XHR actions; split by event.
Response end - XHR action (by event, geolocation) [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for XHR actions; split by event, geolocation.
Server contribution - load action (by event) [browser monitor]
The time spent on server-side processing for a page. Calculated for load actions; split by event.
Server contribution - load action (by event, geolocation) [browser monitor]
The time spent on server-side processing for a page. Calculated for load actions; split by event, geolocation.
Server contribution - XHR action (by event) [browser monitor]
The time spent on server-side processing for a page. Calculated for XHR actions; split by event.
Server contribution - XHR action (by event, geolocation) [browser monitor]
The time spent on server-side processing for a page. Calculated for XHR actions; split by event, geolocation.
Speed index - load action (by event) [browser monitor]
The score measuring how quickly the visible parts of a page are rendered. Calculated for load actions; split by event.
Speed index - load action (by event, geolocation) [browser monitor]
The score measuring how quickly the visible parts of a page are rendered. Calculated for load actions; split by event, geolocation.
Successful events count (by event) [browser monitor]
The number of successful monitor events; split by event.
Successful events count (by event, geolocation) [browser monitor]
The number of successful monitor events; split by event, geolocation.
Total events count (by event) [browser monitor]
The total number of monitor event executions; split by event.
Total events count (by event, geolocation) [browser monitor]
The total number of monitor event executions; split by event, geolocation.
Total duration (by event) [browser monitor]
The duration of all actions in an event; split by event.
Total duration (by event, geolocation) [browser monitor]
The duration of all actions in an event; split by event, geolocation.
Visually complete - load action (by event) [browser monitor]
The time taken to fully render content in the viewport. Calculated for load actions; split by event.
Visually complete - load action (by event, geolocation) [browser monitor]
The time taken to fully render content in the viewport. Calculated for load actions; split by event, geolocation.
Visually complete - XHR action (by event) [browser monitor]
The time taken to fully render content in the viewport. Calculated for XHR actions; split by event.
Visually complete - XHR action (by event, geolocation) [browser monitor]
The time taken to fully render content in the viewport. Calculated for XHR actions; split by event, geolocation.
Failed executions count [browser monitor]
The number of failed monitor executions; split by monitor.
Failed executions count (by geolocation) [browser monitor]
The number of failed monitor executions; split by monitor, geolocation.
Time to first byte - load action [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for load actions; split by monitor.
Time to first byte - load action (by geolocation) [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for load actions; split by monitor, geolocation.
Time to first byte - XHR action [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for XHR actions; split by monitor.
Time to first byte - XHR action (by geolocation) [browser monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for XHR actions; split by monitor, geolocation.
Largest contentful paint - load action [browser monitor]
The time taken to render the largest element in the viewport. Calculated for load actions; split by monitor.
Largest contentful paint - load action (by geolocation) [browser monitor]
The time taken to render the largest element in the viewport. Calculated for load actions; split by monitor, geolocation.
Load event end - load action [browser monitor]
The time taken to complete the load event of a page. Calculated for load actions; split by monitor.
Load event end - load action (by geolocation) [browser monitor]
The time taken to complete the load event of a page. Calculated for load actions; split by monitor, geolocation.
Load event start - load action [browser monitor]
The time taken to begin the load event of a page. Calculated for load actions; split by monitor.
Load event start - load action (by geolocation) [browser monitor]
The time taken to begin the load event of a page. Calculated for load actions; split by monitor, geolocation.
Network contribution - load action [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for load actions; split by monitor.
Network contribution - load action (by geolocation) [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for load actions; split by monitor, geolocation.
Network contribution - XHR action [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for XHR actions; split by monitor.
Network contribution - XHR action (by geolocation) [browser monitor]
The time taken to request and receive resources (including DNS lookup, redirect, and TCP connect time). Calculated for XHR actions; split by monitor, geolocation.
Response end - load action [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for load actions; split by monitor.
Response end - load action (by geolocation) [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for load actions; split by monitor, geolocation.
Response end - XHR action [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for XHR actions; split by monitor.
Response end - XHR action (by geolocation) [browser monitor]
(AKA HTML downloaded) The time taken until the user agent receives the last byte of the response or the transport connection is closed, whichever comes first. Calculated for XHR actions; split by monitor, geolocation.
Server contribution - load action [browser monitor]
The time spent on server-side processing for a page. Calculated for load actions; split by monitor.
Server contribution - load action (by geolocation) [browser monitor]
The time spent on server-side processing for a page. Calculated for load actions; split by monitor, geolocation.
Server contribution - XHR action [browser monitor]
The time spent on server-side processing for a page. Calculated for XHR actions; split by monitor.
Server contribution - XHR action (by geolocation) [browser monitor]
The time spent on server-side processing for a page. Calculated for XHR actions; split by monitor, geolocation.
Speed index - load action [browser monitor]
The score measuring how quickly the visible parts of a page are rendered. Calculated for load actions; split by monitor.
Speed index - load action (by geolocation) [browser monitor]
The score measuring how quickly the visible parts of a page are rendered. Calculated for load actions; split by monitor, geolocation.
Successful executions count [browser monitor]
The number of successful monitor executions; split by monitor.
Successful executions count (by geolocation) [browser monitor]
The number of successful monitor executions; split by monitor, geolocation.
Total executions count [browser monitor]
The total number of monitor executions; split by monitor.
Total executions count (by geolocation) [browser monitor]
The total number of monitor executions; split by monitor, geolocation.
Total duration [browser monitor]
The duration of all actions in a monitor execution; split by monitor.
Total duration (by geolocation) [browser monitor]
The duration of all actions in a monitor execution; split by monitor, geolocation.
Visually complete - load action [browser monitor]
The time taken to fully render content in the viewport. Calculated for load actions; split by monitor.
Visually complete - load action (by geolocation) [browser monitor]
The time taken to fully render content in the viewport. Calculated for load actions; split by monitor, geolocation.
Visually complete - XHR action [browser monitor]
The time taken to fully render content in the viewport. Calculated for XHR actions; split by monitor.
Visually complete - XHR action (by geolocation) [browser monitor]
The time taken to fully render content in the viewport. Calculated for XHR actions; split by monitor, geolocation.
Availability rate (by location) [HTTP monitor]
The availability rate of HTTP monitors.
Availability rate - excl. maintenance windows (by location) [HTTP monitor]
The availability rate of HTTP monitors excluding maintenance windows.
DNS lookup time (by location) [HTTP monitor]
The time taken to resolve the hostname for a target URL for the sum of all requests.
Duration (by location) [HTTP monitor]
The duration of the sum of all requests.
Execution count (by status) [HTTP monitor]
The number of monitor executions.
DNS lookup time (by request, location) [HTTP monitor]
The time taken to resolve the hostname for a target URL for individual HTTP requests.
Duration (by request, location) [HTTP monitor]
The duration of individual HTTP requests.
Response size (by request, location) [HTTP monitor]
The response size of individual HTTP requests.
TCP connect time (by request, location) [HTTP monitor]
The time taken to establish the TCP connection to the server (including SSL) for individual HTTP requests.
Time to first byte (by request, location) [HTTP monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for individual HTTP requests.
TLS handshake time (by request, location) [HTTP monitor]
The time taken to complete the TLS handshake for individual HTTP requests.
Duration threshold (request) (by request) [HTTP monitor]
The performance threshold for individual HTTP requests.
Result status count (by request, location) [HTTP monitor]
The number of request executions with success/failure result status.
Status code count (by request, location) [HTTP monitor]
The number of request executions that end with an HTTP status code.
Response size (by location) [HTTP monitor]
The response size of the sum of all requests.
TCP connect time (by location) [HTTP monitor]
The time taken to establish the TCP connection to the server (including SSL) for the sum of all requests.
Time to first byte (by location) [HTTP monitor]
The time taken until the first byte of the response is received from the server, relevant application caches, or a local resource. Calculated for the sum of all requests.
TLS handshake time (by location) [HTTP monitor]
The time taken to complete the TLS handshake for the sum of all requests.
Duration threshold [HTTP monitor]
The performance threshold for the sum of all requests.
Result status count (by location) [HTTP monitor]
The number of monitor executions with success/failure result status.
Status code count (by location) [HTTP monitor]
The number of monitor executions that end with an HTTP status code.
Node health status count [synthetic]
The number of private Synthetic nodes and their health status.
Private location health status count [synthetic]
The number of private Synthetic locations and their health status.
Monitor availability [Network Availability monitor]
Monitor availability excluding maintenance windows [Network Availability monitor]
DNS request resolution time [Network Availability request]
Number of successful ICMP packets [Network Availability request]
Number of ICMP packets [Network Availability request]
ICMP request execution time [Network Availability request]
ICMP round trip time [Network Availability request]
ICMP request success rate [Network Availability request]
Request availability [Network Availability request]
Request availability excluding maintenance windows [Network Availability request]
Request execution time [Network Availability request]
Execution count (by status) [Network Availability request]
Step availability [Network Availability step]
Step availability excluding maintenance windows [Network Availability step]
Step execution time [Network Availability step]
Execution count (by status) [Network Availability step]
Step success rate [Network Availability step]
TCP request connection time [Network Availability request]
Monitor execution time [Network Availability monitor]
Execution count (by status) [Network Availability monitor]
Monitor success rate [Network Availability monitor]
Availability rate (by location) [third-party monitor]
The availability rate of third-party monitors.
Availability rate - excl. maintenance windows (by location) [third-party monitor]
The availability rate of third-party monitors excluding maintenance windows.
Error count [third-party monitor]
The number of detected errors; split by monitor, step, error code.
Error count (by location) [third-party monitor]
The number of detected errors; split by monitor, location, step, error code.
Test quality rate [third-party monitor]
The test quality rate. Calculated by dividing successful steps by the total number of steps executed; split by monitor.
Test quality rate (by location) [third-party monitor]
The test quality rate. Calculated by dividing successful steps by the total number of steps executed; split by monitor, location.
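The test quality rate described above is a simple ratio of successful to executed steps. A minimal sketch of that calculation, using hypothetical step outcomes (Dynatrace computes the real metric from reported step results):

```python
# Hypothetical step outcomes for one third-party monitor execution.
steps = [
    {"step": "resolve", "success": True},
    {"step": "connect", "success": True},
    {"step": "transaction", "success": False},
]

def step_quality_rate(steps):
    """Successful steps divided by total steps executed, as a percentage."""
    if not steps:
        return 0.0
    return 100.0 * sum(1 for s in steps if s["success"]) / len(steps)

print(round(step_quality_rate(steps), 1))  # 2 of 3 steps succeeded -> 66.7
```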
Response time [third-party monitor]
The response time of third-party monitors; split by monitor.
Response time (by location) [third-party monitor]
The response time of third-party monitors; split by monitor, location.
Response time (by step) [third-party monitor]
The response time of third-party monitors; split by step.
Response time (by step, location) [third-party monitor]
The response time of third-party monitors; split by step, location.
.NET garbage collection (# Gen 0)
Number of completed GC runs that collected objects in the Gen 0 heap within the given time range. For details, see https://dt-url.net/i1038bq
.NET garbage collection (# Gen 1)
Number of completed GC runs that collected objects in the Gen 1 heap within the given time range. For details, see https://dt-url.net/i1038bq
.NET garbage collection (# Gen 2)
Number of completed GC runs that collected objects in the Gen 2 heap within the given time range. For details, see https://dt-url.net/i1038bq
.NET % time in GC
Percentage of time spent in garbage collection
.NET % time in JIT
Percentage of time spent in just-in-time (JIT) compilation
.NET average number of active threads
.NET memory consumption (Large Object Heap)
.NET memory consumption for objects in the Large Object Heap. For details, see https://dt-url.net/es238z7
.NET memory consumption (heap size Gen 0)
.NET memory consumption for objects in the Gen 0 heap. For details, see https://dt-url.net/i1038bq
.NET memory consumption (heap size Gen 1)
.NET memory consumption for objects in the Gen 1 heap. For details, see https://dt-url.net/i1038bq
.NET memory consumption (heap size Gen 2)
.NET memory consumption for objects in the Gen 2 heap. For details, see https://dt-url.net/i1038bq
Bytes in all heaps
Gen 0 Collections
Gen 1 Collections
Gen 2 Collections
Logical threads
Physical threads
Committed bytes
Reserved bytes
Time in GC
Contention rate
Queue length
Gen 0 Heap size
Gen 1 Heap size
Gen 2 Heap size
.NET managed thread pool active I/O completion threads
.NET managed thread pool active I/O completion threads
.NET managed thread pool queued work items
.NET managed thread pool queued work items
.NET managed thread pool active worker threads
.NET managed thread pool active worker threads
Blocks number
Cache capacity
Cache used
Remaining capacity
Total capacity
Used capacity
Capacity used non DFS
Corrupted blocks
Estimated total capacity lost
Appended files
Created files
Deleted files
Renamed files
Files number
Dead DataNodes
Dead decommissioning DataNodes
Live decommissioning DataNodes
Number of decommissioning DataNodes
Live DataNodes
Number of stale DataNodes
Number of missing blocks
Pending deletion blocks
Pending replication blocks
Scheduled replication blocks
Total load
Under replicated blocks
Volume failures total
Allocated containers
Allocated memory
Allocated CPU in virtual cores
Completed applications
Failed applications
Killed applications
Pending applications
Running applications
Submitted applications
Available memory
Available CPU in virtual cores
Active NodeManagers
Decommissioned NodeManagers
Lost NodeManagers
Rebooted NodeManagers
Unhealthy NodeManagers
Pending memory requests
Pending CPU in virtual cores requests
Reserved memory
Reserved CPU in virtual cores
Max active
Max active (global)
Max total
Max total (global)
Num active
Num active (global)
Num idle
Num idle (global)
Num waiters
Num waiters (global)
Wait count
Wait count (global)
Tomcat received bytes / sec
Tomcat sent bytes / sec
Tomcat busy threads
Tomcat idle threads
Tomcat request count / sec
cluster basicStats diskFetches
cluster count membase
cluster count memcached
cluster samples cmd_get
cluster samples cmd_set
cluster samples curr_items
cluster samples ep_cache_miss_rate
cluster samples ep_num_value_ejects
cluster samples ep_oom_errors
cluster samples ep_tmp_oom_errors
cluster samples ops
cluster samples swap_used
cluster status healthy
cluster status unhealthy
cluster status warmup
cluster storageTotals hdd free
cluster storageTotals hdd quotaTotal
cluster storageTotals hdd total
cluster storageTotals hdd used
cluster storageTotals hdd usedByData
cluster storageTotals ram percentageUsage
cluster storageTotals ram quotaTotal
cluster storageTotals ram quotaTotalPerNode
cluster storageTotals ram quotaUsed
cluster storageTotals ram quotaUsedPerNode
cluster storageTotals ram total
cluster storageTotals ram used
cluster storageTotals ram usedByData
liveview basicStats diskFetches
liveview basicStats diskUsed
liveview basicStats memUsed
liveview samples cmd_get
liveview samples cmd_set
liveview samples couch_docs_data_size
liveview samples couch_total_disk_size
liveview samples disk_write_queue
liveview samples ep_cache_miss_rate
liveview samples ep_mem_high_wat
liveview samples ep_num_value_ejects
liveview samples ops
Custom Device Count
Documents count
Deleted documents
Field data evictions
Field data size
Query cache count
Query cache size
Query cache evictions
Segment count
Replica shards
Indices count
Active primary shards
Active shards
Delayed unassigned shards
Initializing shards
Number of data nodes
Number of nodes
Relocating shards
Status green
Status red
Status unknown
Status yellow
Unassigned shards
Process group total CPU time during GC suspensions
This metric provides statistics about CPU usage for process groups of garbage-collected technologies. The metric value is the sum of CPU time used during garbage collector suspensions for every process (including its workers) in a process group. It has a "Process Group" dimension.
Process group total CPU time
This metric provides the total CPU time used by a process group. The metric value is the sum of the CPU time used by every process (including its workers) in the process group, expressed in microseconds. It can help identify the most CPU-intensive technologies in the monitored environment. It has a "Process Group" dimension.
Process total CPU time during GC suspensions
This metric provides statistics about CPU usage for garbage-collected processes. The metric value is the sum of CPU time used during garbage collector suspensions for all process workers. It has a "Process" dimension (dt.entity.process_group_instance).
Process total CPU time
This metric provides the CPU time used by a process. The metric value is the sum of CPU time every process worker uses. The result is expressed in microseconds. It has a "Process" dimension (dt.entity.process_group_instance).
Process CPU usage
This metric provides the CPU usage of a process as a percentage. The metric value is the sum of the CPU time used by every process worker, divided by the total available CPU time. A value of 100% indicates that the process uses all available CPU resources of the host. It has a "Process" dimension (dt.entity.process_group_instance).
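The calculation described for process CPU usage can be sketched as follows, with made-up numbers (not the Dynatrace implementation): the sum of the CPU time used by each worker of a process, divided by the CPU time available on the host over the same interval.

```python
# Hypothetical per-worker CPU time over one measurement interval, in microseconds.
worker_cpu_time_us = [120_000, 80_000, 50_000]
# Hypothetical total CPU time available on the host in the same interval.
available_cpu_time_us = 1_000_000

def process_cpu_usage(worker_times, available):
    """Sum of worker CPU time divided by available CPU time, as a percentage."""
    return 100.0 * sum(worker_times) / available

print(process_cpu_usage(worker_cpu_time_us, available_cpu_time_us))  # 25.0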
z/OS General CPU time
The time spent on the general-purpose central processor (GCP) since process start, reported per minute
z/OS General CPU usage
The percentage of the general-purpose central processor (GCP) used
Process file descriptors max
This metric provides statistics about file descriptor resource limits. It is supported on Linux. The metric value is the total limit of file descriptors that all process workers can open. It is sent once per minute with 10-second granularity; six samples are aggregated every minute. It has a "Process" dimension (dt.entity.process_group_instance).
Process file descriptors used per PID
This metric provides file descriptor usage statistics. It is supported on Linux. The metric value is the highest percentage of the file descriptor limit currently used among process workers. It is sent once per minute with 10-second granularity; six samples are aggregated every minute. It has two dimensions: "Process" (dt.entity.process_group_instance) and a pid dimension identifying the worker with the highest percentage of its descriptor limit in use.
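The "highest percentage among process workers" selection can be sketched like this, with hypothetical worker data (not the agent's implementation):

```python
# Hypothetical per-worker snapshot: (pid, open descriptors, descriptor limit).
workers = [
    (101, 200, 1024),
    (102, 900, 1024),
    (103, 50, 4096),
]

def highest_fd_usage(workers):
    """Return (pid, percentage) for the worker closest to its descriptor limit."""
    pid, used, limit = max(workers, key=lambda w: w[1] / w[2])
    return pid, 100.0 * used / limit

pid, pct = highest_fd_usage(workers)
print(pid, round(pct, 1))  # worker 102 is at ~87.9% of its limit
```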
Process file descriptors used
This metric provides statistics about file descriptor usage. It is supported on Linux. The metric value is the total number of file descriptors all process workers have opened. You can use it to detect processes that may cause the system to reach the limit of open file descriptors. It has a "Process" dimension (dt.entity.process_group_instance).
Process I/O read bytes
This metric provides statistics about the I/O read operations of a process. The metric value is a sum of I/O bytes read from the storage layer by all process workers per second. High values help to identify bottlenecks reducing process performance caused by the slow read speed of the storage device. It has a "Process" dimension (dt.entity.process_group_instance).
Process I/O bytes total
This metric provides statistics about I/O operations for a process. The metric value is a sum of I/O bytes read and written by all process workers per second. It has a "Process" dimension (dt.entity.process_group_instance).
Process I/O write bytes
This metric provides statistics about the I/O write operations of a process. The metric value is a sum of I/O bytes written to the storage layer by all process workers per second. High values help to identify bottlenecks reducing process performance caused by the slow write speed of the storage device. It has a "Process" dimension (dt.entity.process_group_instance).
Process I/O requested read bytes
This metric provides statistics about the I/O read operations a process requests. It is supported only on Linux and AIX. The metric value is a sum of I/O bytes requested to be read from the storage by worker processes per second. It includes additional read operations, such as terminal I/O. It does not indicate the actual disk I/O operations, as some parts of the read operation might have been satisfied from the page cache. This metric has a "Process" dimension (dt.entity.process_group_instance).
Process I/O requested write bytes
This metric provides the statistics about the I/O write operations a process requests. It is supported on Linux and AIX. The metric value is a sum of I/O bytes requested to be written to the storage by PGI processes per second. It includes additional write operations, such as terminal I/O. It does not indicate the actual disk I/O operations, as some parts of the write operation might have been satisfied from the page cache. This metric has a "Process" dimension (dt.entity.process_group_instance).
Process resource exhausted memory counter
This metric provides the counter of "Memory resource exhausted" events for a process. The metric value is the number of such events generated by all process workers in a minute. The JVM generates these events when it runs out of memory. This metric helps identify Java processes with excessive memory usage. It has a "Process" dimension (dt.entity.process_group_instance).
Process page faults counter
This metric provides the rate of page faults of a process. The metric value is the sum of page faults per time unit across all process workers. A page fault occurs when a process attempts to access a memory block that is not stored in RAM; the block must then be located in virtual memory and loaded from storage. Lower values are better. A high number of page faults may indicate reduced performance due to insufficient memory. It has a "Process" dimension (dt.entity.process_group_instance).
Process memory usage
This metric provides the percentage of memory used by a process. It helps to identify processes with high memory consumption and memory leaks. The metric value is the sum of the memory used by every process worker divided by the total memory available on the host, expressed as a percentage. It has a "Process" dimension (dt.entity.process_group_instance).
Process memory
This metric provides the memory usage of a process. It helps to identify processes with high memory consumption and memory leaks. The metric value is the sum of every process worker's used memory (including shared memory), expressed in bytes. It has a "Process" dimension (dt.entity.process_group_instance).
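The memory usage calculation described above can be sketched in a few lines; the worker sizes and host memory below are hypothetical:

```python
def process_memory_usage_percent(worker_memory_bytes, host_memory_bytes):
    """Percentage of host memory used by a process: the sum of all
    worker memory sizes divided by the total memory available on the host."""
    return 100.0 * sum(worker_memory_bytes) / host_memory_bytes

# Three hypothetical workers (512 + 256 + 256 MiB) on a 16 GiB host.
workers = [512 * 2**20, 256 * 2**20, 256 * 2**20]  # bytes
print(process_memory_usage_percent(workers, 16 * 2**30))  # → 6.25
```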
Retransmission base received per second on host
Retransmission base received
Retransmission base received per second
Retransmission base sent per second on host
Retransmission base sent
Retransmission base sent per second
Retransmitted packets received per second on host
Retransmitted packets received
Retransmitted packets received per second
Retransmitted packets sent per second on host
Retransmitted packets
Retransmitted packets sent per second
Packet retransmissions
Packet retransmissions
Incoming packet retransmissions
Incoming packet retransmissions
Outgoing packet retransmissions
Outgoing packet retransmissions
Packets received
Packets received per second
Packets sent
Packets sent per second
TCP connectivity
Percentage of successfully established TCP sessions
New session received per second on host
New session received
New session received per second
New session received
New session received per second on localhost
Session reset received per second on host
Session reset received
Session reset received per second
Session reset received
Session reset received per second on localhost
Session timeout received per second on host
Session timeout received
Session timeout received per second
Session timeout received
Session timeout received per second on localhost
Traffic
Traffic in
Incoming network traffic at PGI
Traffic out
Outgoing network traffic from PGI
Bytes received
Bytes received per second
Bytes sent
Bytes sent per second
Ack-round-trip time
Average latency between outgoing TCP data and ACK
Requests
Requests per second
Server responsiveness
Round-trip time
Average TCP session handshake RTT
Throughput
Used network bandwidth
Process count per process group
This metric provides the number of processes in a process group. It indicates how many instances of the technology are running in the monitored environment. It has a "Process Group" dimension.
Worker processes
This metric provides the number of process workers. Too few worker processes may lead to performance degradation, while too many may waste available resources. The worker configuration should suit the average workload and be able to scale up under higher demand. It has a "Process" dimension (dt.entity.process_group_instance).
Process resource exhausted threads counter
This metric provides the counter of "Thread resource exhausted" events for a process. The metric value is the number of such events generated by all process workers in a minute. The JVM generates thread resource exhausted events when it cannot create a new thread. This metric helps to identify Java processes with excessive thread creation. It has a "Process" dimension (dt.entity.process_group_instance).
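Process metrics like the ones above can be listed through the environment's Metrics API (GET /api/v2/metrics), using the same query parameters recommended earlier for exploring built-in metrics. A minimal sketch; the environment URL, token value, and the `builtin:tech.*` selector are placeholder assumptions:

```python
from urllib.parse import urlencode
from urllib.request import Request

def list_metrics_request(env_url, api_token, metric_selector="builtin:tech.*"):
    """Build a GET /api/v2/metrics request that lists metric descriptors,
    using the recommended pageSize, fields, and metricSelector parameters."""
    params = urlencode({
        "pageSize": 500,
        "fields": "displayName,unit,aggregationTypes,dduBillable",
        "metricSelector": metric_selector,
    })
    return Request(f"{env_url}/api/v2/metrics?{params}",
                   headers={"Authorization": f"Api-Token {api_token}"})

# Placeholder environment and token; pass the request to urllib.request.urlopen.
req = list_metrics_request("https://abc12345.live.dynatrace.com", "dt0c01.XXXX")
print(req.full_url)
```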
z/OS zIIP time
The time spent on the z Integrated Information Processor (zIIP) since process start, reported per minute
z/OS zIIP eligible time
The zIIP-eligible time spent on the general-purpose central processor (GCP) since process start, reported per minute
Go: 502 responses
The number of 502 responses, which indicate invalid responses produced by an application.
Go: Response latency
The average response time from the application to clients.
Go: 5xx responses
The number of responses that indicate repeatedly crashing apps or response issues from applications.
Go: Total requests
The number of all requests representing the overall traffic flow.
Go: Heap idle size
The amount of memory not assigned to the heap or stack. Idle memory can be returned to the operating system or retained by the Go runtime for later reassignment to the heap or stack.
Go: Heap live size
The amount of memory considered live by the Go garbage collector: memory retained by the most recent garbage collector run plus memory allocated since then.
Go: Heap allocated Go objects count
The number of Go objects allocated on the Go heap.
Go: Committed memory
The amount of memory committed to the Go runtime heap.
Go: Used memory
The amount of memory used by the Go runtime heap.
Go: Garbage collector invocation count
The number of Go garbage collector runs.
Go: Go to C language (cgo) call count
The number of Go to C language (cgo) calls.
Go: Go runtime system call count
The number of system calls executed by the Go runtime. This number doesn't include system calls performed by user code.
Go: Average number of active Goroutines
The average number of active Goroutines.
Go: Average number of inactive Goroutines
The average number of inactive Goroutines.
Go: Application Goroutine count
The number of Goroutines instantiated by the user application.
Go: System Goroutine count
The number of Goroutines instantiated by the Go runtime.
Go: Worker thread count
The number of operating system threads instantiated to execute Goroutines. Go doesn't terminate worker threads; it keeps them in a parked state for future reuse.
Go: Parked worker thread count
The number of worker threads parked by Go runtime. A parked worker thread doesn't consume CPU cycles until the Go runtime unparks the thread.
Go: Out-of-work worker thread count
The number of worker threads whose associated scheduling context has no more Goroutines to execute. When this happens, the worker thread attempts to steal Goroutines from another scheduling context or from the global run queue. If stealing fails, the worker thread parks itself after some time. The same mechanism applies under high workload: when an idle scheduling context exists, the Go runtime unparks a parked worker thread and associates it with the idle scheduling context. The unparked worker thread is then in the 'out of work' state and starts Goroutine stealing.
Go: Idle scheduling context count
The number of scheduling contexts that have no more Goroutines to execute and for which Goroutine acquisition from the global run queue or other scheduling contexts has failed.
Go: Global Goroutine run queue size
The number of Goroutines in the global run queue. Goroutines are placed in the global run queue if the worker thread used to execute a blocking system call can't acquire a scheduling context. Scheduling contexts periodically acquire Goroutines from the global run queue.
JVM loaded classes
The number of classes that are currently loaded in the Java virtual machine (see https://dt-url.net/l2c34jw)
JVM total number of loaded classes
The total number of classes that have been loaded since the Java virtual machine has started execution (see https://dt-url.net/d0y347x)
JVM unloaded classes
The total number of classes unloaded since the Java virtual machine has started execution (see https://dt-url.net/d7g34bi)
Garbage collection total activation count
The total number of collections that have occurred for all pools (see https://dt-url.net/oz834vd)
Garbage collection total collection time
The approximate accumulated collection elapsed time in milliseconds for all pools (see https://dt-url.net/oz834vd)
Garbage collection suspension time
Time spent in milliseconds between GC pause starts and GC pause ends (see https://dt-url.net/zj434js)
Garbage collection count
The total number of collections that have occurred in that pool (see https://dt-url.net/z9034yg)
Garbage collection time
The approximate accumulated collection elapsed time in milliseconds in that pool (see https://dt-url.net/z9034yg)
JVM heap memory pool committed bytes
The amount of memory (in bytes) that is guaranteed to be available for use by the Java virtual machine (see https://dt-url.net/1j034o0)
JVM heap memory max bytes
The maximum amount of memory (in bytes) that can be used for memory management (see https://dt-url.net/1j034o0)
JVM heap memory pool used bytes
The amount of memory currently used by the memory pool (in bytes) (see https://dt-url.net/1j034o0)
JVM runtime free memory
An approximation to the total amount of memory currently available for future allocated objects, measured in bytes (see https://dt-url.net/2mm34yx)
JVM runtime max memory
The maximum amount of memory that the virtual machine will attempt to use, measured in bytes (see https://dt-url.net/lzq34mm)
JVM runtime total memory
The total amount of memory currently available for current and future objects, measured in bytes (see https://dt-url.net/otu34eo)
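The three runtime metrics above mirror `java.lang.Runtime`'s `freeMemory()`, `maxMemory()`, and `totalMemory()`; currently used heap and remaining headroom can be derived from them. A minimal sketch with hypothetical byte values:

```python
def jvm_used_memory(total_bytes, free_bytes):
    """Heap currently occupied by objects: totalMemory() - freeMemory()."""
    return total_bytes - free_bytes

def jvm_headroom(max_bytes, total_bytes, free_bytes):
    """Memory still obtainable before reaching maxMemory(): the free part
    of the current heap plus the part the JVM has not yet committed."""
    return max_bytes - jvm_used_memory(total_bytes, free_bytes)

# Hypothetical JVM: 4 GiB max heap, 1 GiB current heap, 256 MiB of it free.
print(jvm_used_memory(2**30, 256 * 2**20))          # → 805306368 (768 MiB)
print(jvm_headroom(4 * 2**30, 2**30, 256 * 2**20))  # → 3489660928
```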
Process memory allocation bytes
Process memory allocation objects count
Process memory survived objects bytes
Process memory survived objects count
Alive workers
Alive workers
Master apps
Master apps
Processing time - count
Processing time - count
Processing time - mean
Processing time - mean
Processing time - one minute rate
Processing time - one minute rate
Active jobs
Active jobs
Total jobs
Total jobs
Failed stages
Failed stages
Running stages
Running stages
Waiting stages
Waiting stages
Waiting apps
Waiting apps
Master workers
Master workers
JVM average number of active threads
JVM average number of inactive threads
JVM thread count
The current number of live threads including both daemon and non-daemon threads (see https://dt-url.net/s02346y)
JVM total CPU time
Kafka broker - Leader election rate
Kafka broker - Unclean election rate
Kafka controller - Active cluster controllers
Kafka controller - Offline partitions
Kafka broker - Partitions
Kafka broker - Under replicated partitions
Bytes received
Bytes received
Bytes transmitted
Bytes transmitted
Retransmitted packets
Number of retransmitted packets
Packets received
Number of packets received
Packets transmitted
Number of packets transmitted
Retransmission
Percentage of retransmitted packets
Round trip time
Round-trip time in milliseconds. Aggregates data from active sessions
Network traffic
Summary of incoming and outgoing network traffic in bits per second
Incoming traffic
Incoming network traffic in bits per second
Outgoing traffic
Outgoing network traffic in bits per second
Nginx Plus cache free space
Nginx Plus cache hit ratio
Nginx Plus cache hits
Nginx Plus cache misses
Nginx Plus cache used space
Active Nginx Plus server zones
Inactive Nginx Plus server zones
Nginx Plus server zone requests
Nginx Plus server zone traffic in
Nginx Plus server zone traffic out
Healthy Nginx Plus upstream servers
Nginx Plus upstream requests
Nginx Plus upstream traffic in
Nginx Plus upstream traffic out
Unhealthy Nginx Plus upstream servers
Node.js: Active handles
Average number of active handles in the event loop
Node.js: Event loop tick frequency
Average number of event loop iterations (per 10 seconds interval)
Node.js: Event loop latency
Average latency of expected event completion
Node.js: Work processed latency
Average latency of a work item being enqueued and callback being called
Node.js: Event loop tick duration
Average duration of an event loop iteration (tick)
Node.js: Event loop utilization
Event loop utilization represents the percentage of time the event loop has been active
Node.js: GC heap used
Total size of allocated V8 heap used by application data (post-GC memory snapshot)
Node.js: Process Resident Set Size (RSS)
Amount of space occupied in the main memory
Node.js: V8 heap total
Total size of allocated V8 heap
Node.js: V8 heap used
Total size of allocated V8 heap used by application data (periodic memory snapshot)
Node.js: Number of active threads
Average number of active Node.js worker threads
Background CPU usage
Foreground CPU usage
CPU idle
CPU other processes
Physical read bytes
Physical write bytes
Total wait time
Allocated PGA
PGA aggregate limit
PGA aggregate target
PGA used for work areas
Shared pool free
Redo log space wait time
Redo size increase
Redo write time
Buffer cache hit
Sorts in memory
Time spent on connection management
Time spent on other activities
PL SQL exec elapsed time
SQL exec time
Time spent on SQL parsing
Active sessions
All sessions
User calls count
Application wait time
Cluster wait time
Concurrency wait time
CPU time
Elapsed time
User I/O wait time
Buffer gets
Direct writes
Disk reads
Executions
Parse calls
Rows processed
Total space
Used space
Number of wait events
Total wait time
Background CPU usage
Foreground CPU usage
CPU idle
CPU other processes
Physical read bytes
Physical write bytes
Total wait time
Allocated PGA
PGA aggregate limit
PGA aggregate target
PGA used for work areas
Shared pool free
Redo log space wait time
Redo size increase
Redo write time
Time spent on connection management
Time spent on other activities
PL SQL exec elapsed time
SQL exec time
Time spent on SQL parsing
Active sessions
All sessions
User calls count
Application wait time
Cluster wait time
Concurrency wait time
CPU time
Elapsed time
User I/O wait time
Buffer gets
Direct writes
Disk reads
Executions
Parse calls
Rows processed
Total space
Used space
Number of wait events
Total wait time
Buffer cache hit
Sorts in memory
PHP GC collected count
PHP GC collection duration
PHP GC effectiveness
PHP OPCache JIT buffer free
PHP OPCache JIT buffer size
PHP OPCache free memory
PHP OPCache used memory
PHP OPCache wasted memory
PHP OPCache restarts due to lack of keys
PHP OPCache manual restarts
PHP OPCache restarts due to out of memory
PHP OPCache blocklist misses
PHP OPCache number of cached keys
PHP OPCache number of cached scripts
PHP OPCache hits
PHP OPCache max number of keys
PHP OPCache misses
PHP OPCache interned string buffer size
PHP OPCache number of interned strings
PHP OPCache interned string memory usage
PHP average number of active threads
PHP average number of inactive threads
Python GC collected items from gen 0
Python GC collected items from gen 1
Python GC collected items from gen 2
Python GC collections number in gen 0
Python GC collections number in gen 1
Python GC collections number in gen 2
Python GC time in gen 0
Python GC time in gen 1
Python GC time in gen 2
Python GC uncollectable items in gen 0
Python GC uncollectable items in gen 1
Python GC uncollectable items in gen 2
Number of memory blocks allocated by Python
Number of active Python threads
cluster channels
cluster connections
cluster consumers
cluster exchanges
cluster ack messages
cluster delivered and get messages
cluster published messages
cluster ready messages
cluster redelivered messages
cluster unroutable messages
cluster unacknowledged messages
cluster node failed
cluster node ok
cluster crashed queues
cluster queues down
cluster flow queues
cluster idle queues
cluster running queues
topn ack
topn consumers
topn deliver/get
topn ready messages
topn unacknowledged messages
topn publish
Cache hit ratio
Cache hits for passes
Cache hits
Cache misses
Cache passes
Backend connections
Backend connections failed
Backend connections reused
Sessions accepted
Sessions dropped
Sessions queued
Threads failed
Maximum number of threads
Minimum number of threads
Total number of threads
Requests
Traffic
Dropped connections
Number of dropped connections
Handled connections
Number of successfully finished and closed requests
Reading connections
Number of connections which are receiving data from the client
Socket backlog waiting time
Average time needed to queue and handle incoming connections
Waiting connections
Number of connections with no active requests
Writing connections
Number of connections which are sending data to the client
Active worker threads
Number of active worker threads
Idle worker threads
Number of idle worker threads
Maximum worker threads
Maximum number of worker threads
Requests
Number of requests
Traffic
Amount of data transferred
Free pool size
Percent used
Pool size
In use time
Wait time
Number of waiting threads
Live sessions
Active threads
Pool size
Number of requests
z/OS Consumed Service Units per minute
The calculated number of consumed Service Units per minute