Each Dynatrace-supported technology offers multiple "built-in" metrics. Built-in metrics are included in the product out of the box, in some cases as part of built-in extensions.
Metrics that are based on OneAgent or ActiveGate extensions (prefix ext:) and calculated metrics (prefix calc:) are custom metrics, not built-in metrics; DDU consumption for these metrics can vary widely depending on how you use Dynatrace.
The ext: prefix is used by metrics from OneAgent extensions and ActiveGate extensions, and also by classic metrics for AWS integration. Despite the naming similarity, AWS integration metrics are not based on extensions.
To view all the metrics available in your environment, use the GET metrics API call. We recommend the following query parameters:
pageSize=500 —to obtain the largest possible number of metrics in one response.
fields=displayName,unit,aggregationTypes,dduBillable —to obtain the same set of fields as you see in these tables.
metricSelector=ext:* —to obtain all metrics coming from extensions.
metricSelector=calc:* —to obtain all calculated metrics.
The sections below describe inconsistencies and limitations identified for Dynatrace built-in metrics.
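As a sketch, the recommended parameters can be combined into a single request URL. The environment URL below is a placeholder, and authentication and paging are omitted; the endpoint path and parameter names follow the GET metrics call described above.

```python
from urllib.parse import urlencode

# Placeholder environment URL; substitute your own Dynatrace environment.
BASE_URL = "https://{your-environment-id}.live.dynatrace.com/api/v2/metrics"

def build_metrics_url(selector=None):
    """Build a GET metrics URL with the recommended query parameters."""
    params = {
        "pageSize": "500",  # largest possible number of metrics per response
        "fields": "displayName,unit,aggregationTypes,dduBillable",
    }
    if selector:
        params["metricSelector"] = selector  # e.g. "ext:*" or "calc:*"
    return BASE_URL + "?" + urlencode(params)

# All metrics coming from extensions:
url = build_metrics_url("ext:*")
```

A real request would additionally carry an authorization header with an API token, and follow the response's paging key to fetch subsequent pages.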
The Other applications metrics section contains metrics captured for mobile and custom applications. These metrics, which start with builtin:apps.other, are captured without any indication of whether the application is mobile or custom. However, the "billing" application metrics, which start with builtin:billing.apps, are split by application type:
Mobile apps:
builtin:billing.apps.mobile.sessionsWithoutReplayByApplication
builtin:billing.apps.mobile.sessionsWithReplayByApplication
builtin:billing.apps.mobile.userActionPropertiesByMobileApplication
Custom apps:
builtin:billing.apps.custom.sessionsWithoutReplayByApplication
builtin:billing.apps.custom.userActionPropertiesByDeviceApplication
The following "billing" metrics for session count are actually the sum of billed and unbilled user sessions.
builtin:billing.apps.custom.sessionsWithoutReplayByApplication
builtin:billing.apps.mobile.sessionsWithReplayByApplication
builtin:billing.apps.mobile.sessionsWithoutReplayByApplication
builtin:billing.apps.web.sessionsWithReplayByApplication
builtin:billing.apps.web.sessionsWithoutReplayByApplication
If you want to get only the number of billed sessions, set the Type filter to Billed.
Different measurement units are used for similar request duration metrics for mobile and custom apps: builtin:apps.other.keyUserActions.requestDuration.os is measured in microseconds, while other request duration metrics (builtin:apps.other.requestTimes.osAndVersion and builtin:apps.other.requestTimes.osAndProvider) are measured in milliseconds.
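Because of this mismatch, values from these metrics must be normalized before they can be compared. A minimal sketch, where the metric keys come from the text above and the sample values are made up:

```python
# Request duration metrics use different units depending on the metric key:
# keyUserActions.requestDuration.os reports microseconds, while the
# requestTimes metrics report milliseconds (per the inconsistency noted above).
UNIT_BY_METRIC = {
    "builtin:apps.other.keyUserActions.requestDuration.os": "microseconds",
    "builtin:apps.other.requestTimes.osAndVersion": "milliseconds",
    "builtin:apps.other.requestTimes.osAndProvider": "milliseconds",
}

def to_milliseconds(metric_key: str, value: float) -> float:
    """Normalize a request duration value to milliseconds."""
    unit = UNIT_BY_METRIC[metric_key]
    return value / 1000.0 if unit == "microseconds" else value

# 2500 microseconds and 2.5 milliseconds are the same duration once normalized:
a = to_milliseconds("builtin:apps.other.keyUserActions.requestDuration.os", 2500)
b = to_milliseconds("builtin:apps.other.requestTimes.osAndVersion", 2.5)
```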
Custom metrics are defined or installed by the user, while built-in metrics are part of the product by default. Certain built-in metrics are disabled by default and, if turned on, consume DDUs. These metrics cover a wide range of supported technologies, including Apache Tomcat, NGINX, Couchbase, RabbitMQ, Cassandra, Jetty, and many others.
A custom metric is a new type of metric that offers a user-provided metric identifier and unit of measure. The semantics of custom metrics are defined by you and aren't included in the default OneAgent installation. Custom metrics are sent to Dynatrace through various interfaces. Following the definition of a custom metric, the metric can be reported for multiple monitored components. Each component’s custom metric results in a separate timeseries.
For example, if you define a new custom metric called Files count that counts the newly created files within a directory, this new metric can be collected either for one host or for two individual hosts. Collecting the same metric for two individual hosts results in two timeseries of the same custom metric type.
For the purposes of calculating monitoring consumption, collecting the same custom metric for two hosts counts as two separate custom metrics.
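The counting rule above can be sketched as follows. The Files count metric and host names are the hypothetical example from the text; each distinct metric/entity pair is one timeseries, and each timeseries counts separately toward consumption:

```python
# Reported data points as (metric key, monitored entity) pairs. The metric key
# and host names are hypothetical, following the Files count example above.
data_points = [
    ("custom:files.count", "HOST-A"),
    ("custom:files.count", "HOST-A"),  # same timeseries as the pair above
    ("custom:files.count", "HOST-B"),
]

# Each distinct (metric, entity) pair is a separate timeseries, and for
# consumption purposes each timeseries counts as a separate custom metric.
timeseries = set(data_points)
billable_custom_metrics = len(timeseries)
```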
Reported error count (by OS, app version) [custom]
The number of all reported errors.
Session count (by OS, app version) [custom]
The number of captured user sessions.
Session count (by OS, app version, crash replay feature status) [mobile]
The number of captured user sessions.
Session count (by OS, app version, full replay feature status) [mobile]
The number of captured user sessions.
Reported error count (by OS, app version) [mobile]
The number of all reported errors.
User action rate - affected by JavaScript errors (by key user action, user type) [web]
The percentage of key user actions with detected JavaScript errors.
Apdex (by key user action) [web]
The average Apdex rating for key user actions.
Action count - custom action (by key user action, browser) [web]
The number of custom actions that are marked as key user actions.
Action count - load action (by key user action, browser) [web]
The number of load actions that are marked as key user actions.
Action count - XHR action (by key user action, browser) [web]
The number of XHR actions that are marked as key user actions.
Cumulative Layout Shift - load action (by key user action, user type) [web]
The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions that are marked as key user actions.
Apdex (by OS, geolocation) [mobile, custom]
The Apdex rating for all captured user actions.
Apdex (by OS, app version) [mobile, custom]
The Apdex rating for all captured user actions.
User count - estimated users affected by crashes (by OS) [mobile, custom]
The estimated number of unique users affected by a crash. For this high cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
User count - estimated users affected by crashes (by OS, app version) [mobile, custom]
The estimated number of unique users affected by a crash. For this high cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
User rate - estimated users affected by crashes (by OS) [mobile, custom]
The estimated percentage of unique users affected by a crash. For this high cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.
Crash count (by OS, geolocation) [mobile, custom]
The number of detected crashes.
Session count - billed and unbilled [custom]
The number of billed and unbilled user sessions. To get only the number of billed sessions, set the "Type" filter to "Billed".
Total user action and session properties
The number of billed user action and user session properties.
Session count - billed and unbilled - with Session Replay [mobile]
The number of billed and unbilled user sessions that include Session Replay data. To get only the number of billed sessions, set the "Type" filter to "Billed".
Session count - billed and unbilled [mobile]
The total number of billed and unbilled user sessions (with and without Session Replay data). To get only the number of billed sessions, set the "Type" filter to "Billed".
Total user action and session properties
The number of billed user action and user session properties.
Session count - billed and unbilled - with Session Replay [web]
The number of billed and unbilled user sessions that include Session Replay data. To get only the number of billed sessions, set the "Type" filter to "Billed".
(DPS) Total Custom Events Classic billing usage
The number of custom events ingested aggregated over all monitored entities. Custom events include events sent to Dynatrace via the Events API or events created by a log event extraction rule. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Custom Events Classic billing usage by monitored entity
The number of custom events ingested split by monitored entity. Custom events include events sent to Dynatrace via the Events API or events created by a log event extraction rule. For details on the events billed, refer to the usage_by_event_info metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Custom Events Classic billing usage by event info
The number of custom events ingested split by event info. Custom events include events sent to Dynatrace via the Events API or events created by a log event extraction rule. The info contains the context of the event plus the configuration ID. For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Recorded metric data points per metric key
The number of reported metric data points split by metric key. This metric does not account for included metric data points available to your environment.
(DPS) Total billed metric data points
The total number of metric data points after deducting the included metric data points. This is the rate-card value used for billing. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Total metric data points billable for Foundation & Discovery hosts
The number of metric data points billable for Foundation & Discovery hosts.
(DPS) Total metric data points billed for Full-Stack hosts
The number of metric data points billed for Full-Stack hosts. To view the unadjusted usage per host, use builtin:billing.full_stack_monitoring.metric_data_points.ingested_by_host . This trailing metric is reported at 15-minute intervals with up to a 15-minute delay.
(DPS) Total metric data points billed for Infrastructure-monitored hosts
The number of metric data points billed for Infrastructure-monitored hosts. To view the unadjusted usage per host, use builtin:billing.infrastructure_monitoring.metric_data_points.ingested_by_host . This trailing metric is reported at 15-minute intervals with up to a 15-minute delay.
(DPS) Total metric data points billed by other entities
The number of metric data points billed that cannot be assigned to a host. The values reported in this metric are not eligible for included metric deduction and will be billed as is. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the monitored entities that consume this usage, use the other_by_entity metric.
(DPS) Total Custom Traces Classic billing usage
The number of spans ingested aggregated over all monitored entities. A span is a single operation within a distributed trace, ingested into Dynatrace. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Custom Traces Classic billing usage by monitored entity
The number of spans ingested split by monitored entity. A span is a single operation within a distributed trace, ingested into Dynatrace. For details on span types, refer to the usage_by_span_type metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Custom Traces Classic billing usage by span type
The number of spans ingested split by span type. A span is a single operation within a distributed trace, ingested into Dynatrace. Span kinds can be CLIENT, SERVER, PRODUCER, CONSUMER, or INTERNAL. For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
DDU events consumption by event info
License consumption of Davis data units by events pool split by event info
DDU events consumption by monitored entity
License consumption of Davis data units by events pool split by monitored entity
Total DDU events consumption
Sum of license consumption of Davis data units aggregated over all monitored entities for the events pool
DDU log consumption by log path
License consumption of Davis data units by log pool split by log path
DDU log consumption by monitored entity
License consumption of Davis data units by log pool split by monitored entity
Total DDU log consumption
Sum of license consumption of Davis data units aggregated over all logs for the log pool
[Deprecated] (DPS) Business events usage - Ingest & Process
Business events Ingest & Process usage, tracked as bytes ingested within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. [Deprecated] This metric is replaced by billing usage events.
[Deprecated] (DPS) Business events usage - Query
Business events Query usage, tracked as bytes read within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. [Deprecated] This metric is replaced by billing usage events.
[Deprecated] (DPS) Business events usage - Retain
Business events Retain usage, tracked as total storage used within the hour, in bytes. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. [Deprecated] This metric is replaced by billing usage events.
(DPS) Ingested metric data points for Foundation & Discovery
The number of metric data points aggregated over all Foundation & Discovery hosts.
(DPS) Ingested metric data points for Foundation & Discovery per host
The number of metric data points split by Foundation & Discovery hosts.
(DPS) Foundation & Discovery billing usage
The total number of host-hours being monitored by Foundation & Discovery, counted in 15 min intervals.
(DPS) Foundation & Discovery billing usage per host
The host-hours being monitored by Foundation & Discovery, counted in 15 min intervals.
(DPS) Available included metric data points for Full-Stack hosts
The total number of included metric data points that can be deducted from the metric data points reported by Full-Stack hosts. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of applied included metric data points, use builtin:billing.full_stack_monitoring.metric_data_points.included_used . If the difference between this metric and the applied metrics is greater than 0, then more metrics can be ingested using Full-Stack Monitoring without incurring additional costs.
(DPS) Used included metric data points for Full-Stack hosts
The number of consumed included metric data points per host monitored with Full-Stack Monitoring. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of potentially available included metrics, use builtin:billing.full_stack_monitoring.metric_data_points.included . If the difference between the available metrics and this metric is greater than zero, more metrics can be ingested on Full-Stack hosts without incurring additional costs.
(DPS) Total metric data points reported by Full-Stack hosts
The number of metric data points aggregated over all Full-Stack hosts. The values reported in this metric are eligible for included-metric-data-point deduction. Use this total metric to query longer timeframes without losing precision or performance. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view usage on a per-host basis, use builtin:billing.full_stack_monitoring.metric_data_points.ingested_by_host .
(DPS) Metric data points reported and split by Full-Stack hosts
The number of metric data points split by Full-Stack hosts. The values reported in this metric are eligible for included-metric-data-point deduction. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. The pool of available included metrics for a "15-minute interval" is visible via builtin:billing.full_stack_monitoring.metric_data_points.included . To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Full-Stack Monitoring billing usage
The total GiB memory of hosts being monitored in full-stack mode, counted in 15 min intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the hosts causing the usage, refer to the usage_per_host metric. For details on the containers causing the usage, refer to the usage_per_container metric.
(DPS) Full-stack usage by container type
The total GiB memory of containers being monitored in full-stack mode, counted in 15 min intervals.
(DPS) Available included metric data points for Infrastructure-monitored hosts
The total number of included metric data points that can be deducted from the metric data points reported by Infrastructure-monitored hosts. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of applied included metric data points, use builtin:billing.infrastructure_monitoring.metric_data_points.included_used . If the difference between this metric and the applied metrics is greater than zero, more metrics can be ingested on Infrastructure-monitored hosts without incurring additional costs.
(DPS) Used included metric data points for Infrastructure-monitored hosts
The number of consumed included metric data points for Infrastructure-monitored hosts. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of potentially available included metrics, use builtin:billing.infrastructure_monitoring.metric_data_points.included . If the difference between the available metrics and this metric is greater than zero, more metrics can be ingested on Infrastructure-monitored hosts without incurring additional costs.
(DPS) Total metric data points reported by Infrastructure-monitored hosts
The number of metric data points aggregated over all Infrastructure-monitored hosts. The values reported in this metric are eligible for included-metric-data-point deduction. Use this total metric to query longer timeframes without losing precision or performance. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view usage on a per-host basis, use builtin:billing.infrastructure_monitoring.metric_data_points.ingested_by_host .
(DPS) Metric data points reported and split by Infrastructure-monitored hosts
The number of metric data points split by Infrastructure-monitored hosts. The values reported in this metric are eligible for included-metric-data-point deduction. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. The pool of available included metrics for a "15-minute interval" is visible via builtin:billing.infrastructure_monitoring.metric_data_points.included . To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Infrastructure Monitoring billing usage
The total number of host-hours being monitored in infrastructure-only mode, counted in 15 min intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the hosts causing the usage, refer to the usage_per_host metric.
(DPS) Infrastructure Monitoring billing usage per host
The host-hours being monitored in infrastructure-only mode, counted in 15 min intervals. A host monitored for the whole hour has 4 data points with a value of 0.25, regardless of the memory size. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
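The 15-minute counting described above reduces to simple arithmetic. A sketch, where the host count and monitoring window are made-up inputs:

```python
# Infrastructure Monitoring usage: each fully monitored 15-minute interval
# contributes one data point with a value of 0.25 host-hours, regardless of
# host memory size (per the description above).
INTERVALS_PER_HOUR = 4
HOST_HOURS_PER_INTERVAL = 0.25

def infrastructure_host_hours(hosts: int, hours_monitored: float) -> float:
    """Host-hours billed for `hosts` hosts monitored for `hours_monitored` hours."""
    intervals = hours_monitored * INTERVALS_PER_HOUR
    return hosts * intervals * HOST_HOURS_PER_INTERVAL

# Example: 10 hosts monitored for a full day.
usage = infrastructure_host_hours(hosts=10, hours_monitored=24)
```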
(DPS) Kubernetes Platform Monitoring billing usage
The total number of monitored Kubernetes pods per hour, split by cluster and namespace and counted in 15 min intervals. A pod monitored for the whole hour has 4 data points with a value of 0.25.
(DPS) Log Management and Analytics usage - Ingest & Process
Log Management and Analytics Ingest & Process usage, tracked as bytes ingested within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay.
(DPS) Log Management and Analytics usage - Query
Log Management and Analytics Query usage, tracked as bytes read within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay.
(DPS) Log Management and Analytics usage - Retain
Log Management and Analytics Retain usage, tracked as total storage used within the hour, in bytes. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay.
(DPS) Total Log Monitoring Classic billing usage
The number of log records ingested aggregated over all monitored entities. A log record is recognized by either a timestamp or a JSON object. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Log Monitoring Classic billing usage by monitored entity
The number of log records ingested split by monitored entity. A log record is recognized by either a timestamp or a JSON object. For details on the log path, refer to the usage_by_log_path metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Log Monitoring Classic billing usage by log path
The number of log records ingested split by log path. A log record is recognized by either a timestamp or a JSON object. For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Mainframe Monitoring billing usage
The total number of MSU-hours being monitored, counted in 15 min intervals.
(DPS) Total Real-User Monitoring Property (mobile) billing usage
(Mobile) User action and session properties count. For details on how usage is calculated, refer to the documentation or builtin:billing.real_user_monitoring.mobile.property.usage_by_application . Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Real-User Monitoring Property (mobile) billing usage by application
(Mobile) User action and session properties count by application. The billed value is calculated based on the number of sessions reported in builtin:billing.real_user_monitoring.mobile.session.usage_by_app + builtin:billing.real_user_monitoring.mobile.session_with_replay.usage_by_app , plus the number of configured properties that exceed the included number of properties (free of charge) offered for a given application. Data points are only written for billed sessions. If the value is 0, you have available metric data points. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Real-User Monitoring (mobile) billing usage
(Mobile) Session count without Session Replay. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions end during the same minute, the values are added together. Use this total metric to query longer timeframes without losing precision or performance. To view the applications that consume this usage, refer to the usage_by_app metric.
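The duration-based counting above can be sketched like this. The session end times and durations are made-up sample values; sessions ending in the same minute have their values summed into one data point, per the description:

```python
from collections import defaultdict

# Each session is billed as its duration in hours: a 3-hour session yields a
# single data-point value of 3. Sessions ending in the same minute are summed.
sessions = [
    # (minute in which the session ended, session duration in hours)
    ("2024-01-01T10:05", 3.0),
    ("2024-01-01T10:05", 0.5),  # ends in the same minute as the session above
    ("2024-01-01T11:30", 1.0),
]

data_points = defaultdict(float)
for end_minute, duration_hours in sessions:
    data_points[end_minute] += duration_hours
```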
(DPS) Real-User Monitoring (mobile) billing usage by application
(Mobile) Session count without Session Replay split by application. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions of the same application end during the same minute, the values are added together. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Real-User Monitoring (mobile) with Session Replay billing usage
(Mobile) Session count with Session Replay. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions end during the same minute, the values are added together. Use this total metric to query longer timeframes without losing precision or performance. To view the applications that consume this usage, refer to the usage_by_app metric.
(DPS) Real-User Monitoring (mobile) with Session Replay billing usage by application
(Mobile) Session count with Session Replay split by application. The value billed for each session is the session duration measured in hours, so a 3-hour session results in a single data-point value of 3. If two sessions of the same application end during the same minute, the values are added together. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Runtime Application Protection billing usage
Total GiB-memory of hosts protected with Runtime Application Protection (Application Security), counted at 15-minute intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the monitored hosts, refer to the usage_per_host metric.
(DPS) Runtime Application Protection billing usage per host
GiB-memory per host protected with Runtime Application Protection (Application Security), counted at 15-minute intervals. For example, a host with 8 GiB of RAM monitored for 1 hour has 4 data points with a value of 2. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
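The per-interval value in that example follows directly from the interval length. A sketch, using the 8 GiB host from the text:

```python
# GiB-memory billing: each 15-minute interval contributes one data point whose
# value is the host's RAM in GiB multiplied by the interval length in hours.
INTERVAL_HOURS = 0.25  # 15 minutes

def gib_memory_data_point(ram_gib: float) -> float:
    """Value of one 15-minute data point for a host with `ram_gib` GiB of RAM."""
    return ram_gib * INTERVAL_HOURS

# An 8 GiB host: 4 data points per hour, each with a value of 2.
point = gib_memory_data_point(8)
hourly_total = point * 4  # GiB-hours per monitored hour
```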
(DPS) Runtime Vulnerability Analytics billing usage
Total GiB-memory of hosts protected with Runtime Vulnerability Analytics (Application Security), counted at 15-minute intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the monitored hosts, refer to the usage_per_host metric.
(DPS) Runtime Vulnerability Analytics billing usage per host
GiB-memory per host protected with Runtime Vulnerability Analytics (Application Security), counted at 15-minute intervals. For example, a host with 8 GiB of RAM monitored for 1 hour has 4 data points with a value of 2. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Total Serverless Functions Classic billing usage
The number of invocations of the serverless function aggregated over all monitored entities. The term "function invocations" is equivalent to "function requests" or "function executions". Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Serverless Functions Classic billing usage by monitored entity
The number of invocations of the serverless function split by monitored entity. The term "function invocations" is equivalent to "function requests" or "function executions". For details on which functions are invoked, refer to the usage_by_function metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
(DPS) Serverless Functions Classic billing usage by function
The number of invocations of the serverless function split by function. The term "function invocations" is equivalent to "function requests" or "function executions". For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
Actions
The number of billed actions consumed by browser monitors.
(DPS) Total Browser Monitor or Clickpath billing usage
The number of synthetic actions that trigger a web request, including page loads, navigation events, and actions that trigger an XHR or Fetch request. Scroll-downs, keystrokes, or clicks that don't trigger web requests aren't counted. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Browser Monitor or Clickpath billing usage per synthetic browser monitor
The number of synthetic actions that trigger a web request, including page loads, navigation events, and actions that trigger an XHR or Fetch request. Scroll-downs, keystrokes, or clicks that don't trigger web requests aren't counted. Actions are split by the synthetic browser monitors that caused them. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
Third-party results
The number of billed results consumed by third-party monitors.
(DPS) Total Third-Party Synthetic API Ingestion billing usage
The number of synthetic test results pushed into Dynatrace with Synthetic 3rd party API. Use this total metric to query longer timeframes without losing precision or performance.
(DPS) Third-Party Synthetic API Ingestion billing usage per external browser monitor
The number of synthetic test results pushed into Dynatrace with the Synthetic 3rd party API. The ingestions are split by the external synthetic browser monitors for which the results were ingested. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.
Number of running EC2 instances (AZ)
Number of starting VMs in region
Number of active VMs in region
Number of stopped VMs in region
Number of starting VMs in scale set
Number of active VMs in scale set
Number of stopped VMs in scale set
CF: Time to fetch cell states
The time that the auctioneer took to fetch state from all the cells when running its auction.
CF: App instance placement failures
The number of application instances that the auctioneer failed to place on Diego cells.
CF: App instance starts
The number of application instances that the auctioneer successfully placed on Diego cells.
CF: Task placement failures
The number of tasks that the auctioneer failed to place on Diego cells.
CF: 502 responses
The number of responses that indicate invalid service responses produced by an application.
CF: Response latency
The average response time from the application to clients.
CPU usage
Disk allocation
Disk capacity
Memory resident
Memory usage
Network incoming bytes rate
Host CPU usage %
Host disk usage rate
Host disk commands aborted
Host disk queue latency
Host disk read IOPS
Host disk read latency
Containers: CPU limit, mCores
CPU resource limit per container in millicores.
Containers: CPU logical cores
Number of logical CPU cores of the host.
Containers: CPU shares
Number of CPU shares allocated per container.
Containers: CPU throttling, mCores
CPU throttling per container in millicores.
Containers: CPU throttled time, ns/min
Total amount of time a container has been throttled, in nanoseconds per minute.
Containers: CPU usage, mCores
CPU usage per container in millicores.
Containers: Memory cache, bytes
Page cache memory per container in bytes.
Containers: Memory limit, bytes
Memory limit per container in bytes. If no limit is set, this is an empty value.
Containers: Memory limit, % of physical memory
Percent memory limit per container relative to total physical memory. If no limit is set, this is an empty value.
Containers: Memory - out of memory kills
Number of out of memory kills for a container.
Containers: Memory - total physical memory, bytes
Total physical memory on the host in bytes.
Containers: Memory usage, bytes
Resident set size (Linux) or private working set size (Windows) per container in bytes.
Container bytes received
Container bytes transmitted
Container cpu usage
Devicemapper data space available
Devicemapper data space used
Devicemapper meta-data space available
Dashboard view count
Host availability
Host availability state metric, reported at 1-minute intervals.
z/OS General CPU usage
The percent of the general-purpose central processor (GCP) used
z/OS Rolling 4 hour MSU average
The 4h average of consumed million service units on this LPAR
z/OS MSU capacity
The overall capacity of million service units on this LPAR
z/OS zIIP eligible time
The zIIP eligible time spent on the general-purpose central processor (GCP) after process start per minute
AIX Entitlement configured
Capacity entitlement is the number of virtual processors assigned to the AIX partition, measured in processor fractions of 0.1 or 0.01. For more information about entitlement, see Assigning the appropriate processor entitled capacity in the official IBM documentation.
AIX Entitlement used
Percentage of entitlement used. Capacity entitlement is the number of virtual cores assigned to the AIX partition. For more information about entitlement, see Assigning the appropriate processor entitled capacity in the official IBM documentation.
Number of DNS errors by type
The number of DNS errors by type
Number of orphaned DNS responses
The number of orphaned DNS responses on the host
Number of DNS queries
The number of DNS queries on the host
DNS query time sum
The time of all DNS queries on the host
DNS query time
The average DNS query time, calculated as the DNS query time sum divided by the number of DNS queries for each host and DNS server pair.
DNS query time by DNS server
The weighted average DNS query time per DNS server IP, calculated as the DNS query time sum divided by the number of DNS queries and weighted by the number of requests from each host.
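The two derived DNS timing metrics above are a plain and a weighted average of the same pair of raw values (query time sum and query count). A minimal sketch of both calculations, using hypothetical sample data (Dynatrace computes these server-side):

```python
# Hypothetical per-(host, DNS server) raw samples: (query_time_sum_ms, query_count)
samples = {
    ("host-a", "10.0.0.2"): (300.0, 100),
    ("host-b", "10.0.0.2"): (90.0, 10),
}

# "DNS query time": sum / count for each host and DNS server pair
per_pair_avg = {k: s / n for k, (s, n) in samples.items()}

# "DNS query time by DNS server": average across hosts, weighted by
# each host's number of queries (hosts with more queries count more)
def weighted_avg_by_server(samples, server):
    pairs = [(s, n) for (_host, srv), (s, n) in samples.items() if srv == server]
    total_time = sum(s for s, _ in pairs)
    total_queries = sum(n for _, n in pairs)
    return total_time / total_queries

print(per_pair_avg[("host-a", "10.0.0.2")])         # 3.0 ms
print(weighted_avg_by_server(samples, "10.0.0.2"))  # (300+90)/(100+10) ms
```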
Disk throughput read
File system read throughput in bits per second
Disk throughput write
File system write throughput in bits per second
Disk available
Amount of free file system space available to the user. On Linux and AIX, this is the free space available to unprivileged users; it does not include the portion of free space reserved for root.
Disk read bytes per second
Read speed of the file system in bytes per second
Disk write bytes per second
Write speed of the file system in bytes per second
Disk available %
Percentage of free file system space available to the user. On Linux and AIX, this is the percentage of free space available to unprivileged users; it does not include the portion of free space reserved for root.
File descriptors max
Maximum number of file descriptors available for use
File descriptors used
Number of file descriptors used
AIX Kernel threads blocked
Length of the swap queue. The swap queue contains threads that are ready to run but have been swapped out along with the currently running threads
AIX Kernel threads I/O event wait
Number of threads waiting for file system direct I/O (CIO), plus the number of processes asleep waiting for buffered I/O
AIX Kernel threads I/O message wait
Number of threads that are sleeping while waiting for raw I/O operations at a particular time. Raw I/O allows applications to write directly to the Logical Volume Manager (LVM) layer
AIX Kernel threads runnable
Number of runnable threads (running or waiting for run time). The average number of runnable threads appears in the first column of the vmstat command output
Memory available
The amount of memory (RAM) available on the host, that is, memory that can be allocated to new or existing processes. Available memory is an estimate of how much memory can be used without swapping.
Memory available %
The percentage of memory (RAM) available on the host, that is, memory that can be allocated to new or existing processes. Available memory is an estimate of how much memory can be used without swapping.
Page faults per second
The measure of the number of page faults per second on the monitored host. This value includes soft faults and hard faults.
Swap available
The amount of swap memory or swap space (also known as paging, which is the on-disk component of the virtual memory system) available.
Swap total
Amount of total swap memory or total swap space (also known as paging, which is the on-disk component of the virtual memory system) for use.
Swap used
The amount of swap memory or swap space (also known as paging, which is the on-disk component of the virtual memory system) used.
NIC packets dropped
Network interface packets dropped on the host
NIC received packets dropped
Network interface received packets dropped on the host
NIC sent packets dropped
Network interface sent packets dropped on the host
NIC packet errors
Network interface packet errors on the host
NIC received packet errors
Network interface received packet errors on a host
NIC sent packet errors
Network interface sent packet errors on the host
OS Service availability
This metric provides the status of the OS service. If the OS service is running, the OS module reports "1" as the value of the metric; in any other case, the value is "0". Note that this metric provides data only from Classic Windows services monitoring (supported only on Windows), which has been replaced by the new OS Services monitoring. To learn more, see Classic Windows services monitoring.
OS Process count
This metric shows the average number of processes running on the host over one minute. The reported number of processes is based on processes detected by the OS module, read in 10-second cycles.
PGI count
This metric shows the number of PGIs created by the OS module every minute. It includes every PGI, even those that are considered unimportant and are not reported to Dynatrace.
Reported PGI count
This metric shows the number of PGIs created and reported by the OS module every minute. It includes only PGIs that are considered important and are reported to Dynatrace. Important PGIs are those for which OneAgent recognizes the technology, that have open network ports, that generate significant resource usage, or that are created via declarative process grouping rules. To learn what makes a process important, see Which are the most important processes?
z/OS General CPU time
Total General CPU time per minute
z/OS Consumed MSUs per SMF interval (SMF70EDT)
Number of consumed MSUs per SMF interval (SMF70EDT)
z/OS zIIP time
Total zIIP time per minute
z/OS zIIP usage
Actively used zIIP as a percentage of available zIIP
Host availability %
Host availability %
Host uptime
Time since last host boot up. Requires OneAgent 1.259+. The metric is not supported for application-only OneAgent deployments.
Kubernetes: Cluster readyz status
Current status of the Kubernetes API server reported by the /readyz endpoint (0 or 1).
Kubernetes: Container - out of memory (OOM) kill count
This metric measures the out of memory (OOM) kills. The most detailed level of aggregation is container. The value corresponds to the status 'OOMKilled' of a container in the pod resource's container status. The metric is only written if there was at least one container OOM kill.
Kubernetes: Container - restart count
This metric measures the number of container restarts. The most detailed level of aggregation is container. The value corresponds to the delta of the 'restartCount' defined in the pod resource's container status. The metric is only written if there was at least one container restart.
Kubernetes: Node conditions
This metric describes the status of a Kubernetes node. The most detailed level of aggregation is node.
Kubernetes: Node - CPU allocatable
This metric measures the total allocatable cpu. The most detailed level of aggregation is node. The value corresponds to the allocatable cpu of a node.
Kubernetes: Container - CPU throttled (by node)
This metric measures the total CPU throttling by container. The most detailed level of aggregation is node.
Kubernetes: Container - CPU usage (by node)
This metric measures the total CPU consumed (user usage + system usage) by container. The most detailed level of aggregation is node.
Kubernetes: Pod - CPU limits (by node)
This metric measures the cpu limits. The most detailed level of aggregation is node. The value is the sum of the cpu limits of all app containers of a pod.
Kubernetes: Pod - memory limits (by node)
This metric measures the memory limits. The most detailed level of aggregation is node. The value is the sum of the memory limits of all app containers of a pod.
Kubernetes: PVC - available
This metric measures the number of available bytes in the volume. The most detailed level of aggregation is persistent volume claim.
Kubernetes: PVC - capacity
This metric measures the capacity in bytes of the volume. The most detailed level of aggregation is persistent volume claim.
Kubernetes: PVC - used
This metric measures the number of used bytes in the volume. The most detailed level of aggregation is persistent volume claim.
Kubernetes: Resource quota - CPU limits
This metric measures the cpu limit quota. The most detailed level of aggregation is resource quota. The value corresponds to the cpu limits of a resource quota.
Kubernetes: Resource quota - CPU limits used
This metric measures the used cpu limit quota. The most detailed level of aggregation is resource quota. The value corresponds to the used cpu limits of a resource quota.
Kubernetes: Resource quota - memory limits
This metric measures the memory limit quota. The most detailed level of aggregation is resource quota. The value corresponds to the memory limits of a resource quota.
Kubernetes: Resource quota - memory limits used
This metric measures the used memory limits quota. The most detailed level of aggregation is resource quota. The value corresponds to the used memory limits of a resource quota.
Kubernetes: Resource quota - pod count
This metric measures the pods quota. The most detailed level of aggregation is resource quota. The value corresponds to the pods of a resource quota.
Kubernetes: Resource quota - pod used count
This metric measures the used pods quota. The most detailed level of aggregation is resource quota. The value corresponds to the used pods of a resource quota.
Kubernetes: Workload conditions
This metric describes the status of a Kubernetes workload. The most detailed level of aggregation is workload.
Kubernetes: Pod - desired container count
This metric measures the number of desired containers. The most detailed level of aggregation is workload. The value is the count of all containers in the pod's specification.
Kubernetes: Container - CPU throttled (by workload)
This metric measures the total CPU throttling by container. The most detailed level of aggregation is workload.
Kubernetes: Container - CPU usage (by workload)
This metric measures the total CPU consumed (user usage + system usage) by container. The most detailed level of aggregation is workload.
Kubernetes: Pod - CPU limits (by workload)
This metric measures the cpu limits. The most detailed level of aggregation is workload. The value is the sum of the cpu limits of all app containers of a pod.
Kubernetes: Pod - memory limits (by workload)
This metric measures the memory limits. The most detailed level of aggregation is workload. The value is the sum of the memory limits of all app containers of a pod.
Kubernetes: Container count
This metric measures the number of containers. The most detailed level of aggregation is workload. The metric counts the number of all containers.
Kubernetes: Event count
This metric counts Kubernetes events. The most detailed level of aggregation is the event reason. The value corresponds to the count of events returned by the Kubernetes events endpoint. This metric depends on Kubernetes event monitoring. It will not show any datapoints for the period in which event monitoring is deactivated.
Kubernetes: Node count
This metric measures the number of nodes. The most detailed level of aggregation is cluster. The value is the count of all nodes.
Kubernetes: Pod count (by workload)
This metric measures the number of pods. The most detailed level of aggregation is workload. The value corresponds to the count of all pods.
Kubernetes: Workload count
This metric measures the number of workloads. The most detailed level of aggregation is namespace. The value corresponds to the count of all workloads.
Process availability
Process availability state metric reported in 1-minute intervals
Process availability %
This metric provides the percentage of time when a process is available. It is sent once per minute with 10-second granularity: six samples are aggregated every minute. If the process is available for the whole minute, the value is 100%. A value of 0% indicates that the process is not running. It has a "Process" dimension (dt.entity.process_group_instance).
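The per-minute aggregation described for Process availability % (six 10-second samples per minute) can be sketched as follows; the sample data is hypothetical:

```python
# Six hypothetical 10-second availability samples for one minute:
# True = the process was detected as running during that sample.
samples = [True, True, True, False, True, True]

# Availability %: the share of samples in which the process was up.
availability_pct = 100.0 * sum(samples) / len(samples)
print(availability_pct)  # 5 of 6 samples up -> about 83.3%
```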
Process traffic in
This metric provides the size of incoming traffic for a process. It helps identify processes generating high network traffic on a host. The result is expressed in kilobytes. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "process owner" (process.owner), "process executable name" (process.executable.name), "process executable path" (process.executable.path), "process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the time this metric is collected for is restricted to feature limits. To learn more, see Process instance snapshots.
Process traffic out
This metric provides the size of outgoing traffic for a process. It helps identify processes generating high network traffic on a host. The result is expressed in kilobytes. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "process owner" (process.owner), "process executable name" (process.executable.name), "process executable path" (process.executable.path), "process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the time this metric is collected for is restricted to feature limits. To learn more, see Process instance snapshots.
Process average CPU
This metric provides the percentage of CPU usage of a process. The metric value is the sum of the CPU time every process worker uses divided by the total available CPU time, expressed as a percentage. A value of 100% indicates that the process uses all available CPU resources of the host. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "process owner" (process.owner), "process executable name" (process.executable.name), "process executable path" (process.executable.path), "process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the time this metric is collected for is restricted to feature limits. To learn more, see Process instance snapshots.
Process memory
This metric provides the memory usage of a process. It helps identify processes with high memory consumption and memory leaks. The result is expressed in bytes. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "process owner" (process.owner), "process executable name" (process.executable.name), "process executable path" (process.executable.path), "process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the time this metric is collected for is restricted to feature limits. To learn more, see Process instance snapshots.
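The Process average CPU formula above (sum of per-worker CPU time divided by total available CPU time) can be illustrated with hypothetical numbers; the worker CPU times, interval, and core count below are made up:

```python
# Hypothetical CPU time (in microseconds) used by each worker of one process
# over a one-minute interval on a 4-core host.
worker_cpu_us = [12_000_000, 8_000_000, 4_000_000]
interval_us = 60 * 1_000_000
cores = 4

# Total available CPU time is the interval length times the number of cores.
available_us = interval_us * cores

# Process average CPU: total worker CPU time as a share of available CPU time.
cpu_usage_pct = 100.0 * sum(worker_cpu_us) / available_us
print(cpu_usage_pct)  # 24s of CPU time out of 240s available -> 10.0
```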
Incoming messages
The number of incoming messages on the queue or topic
Outgoing messages
The number of outgoing messages from the queue or topic
New attacks
Number of attacks that were recently created. The metric supports the management zone selector.
New Muted Security Problems (global)
Number of vulnerabilities that were recently muted. The metric value is independent of any configured management zone (and thus global).
New Open Security Problems (global)
Number of vulnerabilities that were recently created. The metric value is independent of any configured management zone (and thus global).
New Open Security Problems (split by Management Zone)
Number of vulnerabilities that were recently created. The metric value is split by management zone.
Open Security Problems (global)
Number of currently open vulnerabilities seen within the last minute. The metric value is independent of any configured management zone (and thus global).
Open Security Problems (split by Management Zone)
Number of currently open vulnerabilities seen within the last minute. The metric value is split by management zone.
New Resolved Security Problems (global)
Number of vulnerabilities that were recently resolved. The metric value is independent of any configured management zone (and thus global).
Vulnerabilities - affected process groups count (global)
Total number of unique affected process groups across all open vulnerabilities per technology. The metric value is independent of any configured management zone (and thus global).
Vulnerabilities - affected not-muted process groups count (global)
Total number of unique affected process groups across all open, unmuted vulnerabilities per technology. The metric value is independent of any configured management zone (and thus global).
Vulnerabilities - affected entities count
Total number of unique affected entities across all open vulnerabilities. The metric supports the management zone selector.
CPU time
CPU time consumed by a particular request. To learn how Dynatrace calculates service timings, see Service analysis timings.
Service CPU time
CPU time consumed by a particular service. To learn how Dynatrace calculates service timings, see Service analysis timings.
Failed connections
Unsuccessful connection attempts compared to all connection attempts. To learn about database analysis, see Analyze database services.
Connection failure rate
Rate of unsuccessful connection attempts compared to all connection attempts. To learn about database analysis, see Analyze database services.
Successful connections
Total number of database connections successfully established by this service. To learn about database analysis, see Analyze database services.
Connection success rate
Rate of successful connection attempts compared to all connection attempts. To learn about database analysis, see Analyze database services.
Total number of connections
Total number of database connections that were attempted to be established by this service. To learn about database analysis, see Analyze database services.
Number of client side errors
Failed requests for a service measured on client side. To learn about failure detection, see Configure service failure detection.
Failure rate (client side errors)
Number of calls without client side errors
Number of HTTP 5xx errors
HTTP requests with a status code between 500 and 599 for a given key request measured on server side. To learn about failure detection, see Configure service failure detection.
Failure rate (HTTP 5xx errors)
Number of calls without HTTP 5xx errors
Request count - client
Number of requests for a given key request - measured on the client side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Request count - server
Number of requests for a given key request - measured on the server side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Request count
Number of requests for a given key request. This metric is written for each key request. To learn more about key requests, see Monitor key request.
CPU per request
CPU time for a given key request. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Service key request CPU time
CPU time for a given key request. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Number of client side errors
Failed requests for a given key request measured on client side. To learn about failure detection, see Configure service failure detection.
Unified service mesh request count
Number of service mesh requests received by a given service. To learn how Dynatrace detects services, see Service detection and naming.
Unified service mesh request count (by service)
Number of service mesh requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects services, see Service detection and naming.
Unified service mesh request failure count
Number of failed service mesh requests received by a given service. To learn how Dynatrace detects service failures, see Configure service failure detection.
Unified service mesh request failure count (by service)
Number of failed service mesh requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects service failures, see Configure service failure detection.
Unified service mesh request response time
Response time of a service mesh ingress measured in microseconds. To learn how Dynatrace calculates service timings, see Service analysis timings.
Unified service mesh request response time (by service)
Response time of a service mesh ingress measured in microseconds. Reduced dimensions for faster charting. To learn how Dynatrace calculates service timings, see Service analysis timings.
Request count - client
Number of requests received by a given service - measured on the client side. This metric allows service splittings. To learn how Dynatrace detects and analyzes services, see Services.
Request count - server
Number of requests received by a given service - measured on the server side. This metric allows service splittings. To learn how Dynatrace detects and analyzes services, see Services.
Request count
Number of requests received by a given service. This metric allows service splittings. To learn how Dynatrace detects and analyzes services, see Services.
Client side response time
Response time for a given key request per request type - measured on the client side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Server side response time
Response time for a given key request per request type - measured on the server side. This metric is written for each key request. To learn more about key requests, see Monitor key request.
Client side response time
Server side response time
Response time
Time consumed by a particular service until a response is sent back to the calling application, process, or service. To learn how Dynatrace calculates service timings, see Service analysis timings.
Success rate (server side)
Total processing time
Total time consumed by a particular request type, including asynchronous processing, which can continue after responses are sent. To learn how Dynatrace calculates service timings, see Service analysis timings.
Total processing time
Total time consumed by a particular service, including asynchronous processing, which can continue after responses are sent. To learn how Dynatrace calculates service timings, see Service analysis timings.
Number of calls to databases
Time spent in database calls
IO time
Lock time
Number of calls to other services
Action duration - custom action [browser monitor]
The duration of custom actions; split by monitor.
Action duration - custom action (by geolocation) [browser monitor]
The duration of custom actions; split by monitor, geolocation.
Action duration - load action [browser monitor]
The duration of load actions; split by monitor.
Action duration - load action (by geolocation) [browser monitor]
The duration of load actions; split by monitor, geolocation.
Action duration - XHR action [browser monitor]
The duration of XHR actions; split by monitor.
Action duration - XHR action (by geolocation) [browser monitor]
The duration of XHR actions; split by monitor, geolocation.
Availability rate (by location) [HTTP monitor]
The availability rate of HTTP monitors.
Availability rate - excl. maintenance windows (by location) [HTTP monitor]
The availability rate of HTTP monitors excluding maintenance windows.
DNS lookup time (by location) [HTTP monitor]
The time taken to resolve the hostname for a target URL for the sum of all requests.
Duration (by location) [HTTP monitor]
The duration of the sum of all requests.
Execution count (by status) [HTTP monitor]
The number of monitor executions.
DNS lookup time (by request, location) [HTTP monitor]
The time taken to resolve the hostname for a target URL for individual HTTP requests.
Node health status count [synthetic]
The number of private Synthetic nodes and their health status.
Private location health status count [synthetic]
The number of private Synthetic locations and their health status.
Monitor availability [Network Availability monitor]
Monitor availability excluding maintenance windows [Network Availability monitor]
DNS request resolution time [Network Availability request]
Number of successful ICMP packets [Network Availability request]
Number of ICMP packets [Network Availability request]
ICMP request execution time [Network Availability request]
Availability rate (by location) [third-party monitor]
The availability rate of third-party monitors.
Availability rate - excl. maintenance windows (by location) [third-party monitor]
The availability rate of third-party monitors excluding maintenance windows.
Error count [third-party monitor]
The number of detected errors; split by monitor, step, error code.
Error count (by location) [third-party monitor]
The number of detected errors; split by monitor, location, step, error code.
Test quality rate [third-party monitor]
The test quality rate. Calculated by dividing successful steps by the total number of steps executed; split by monitor.
Test quality rate (by location) [third-party monitor]
The test quality rate. Calculated by dividing successful steps by the total number of steps executed; split by monitor, location.
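The test quality rate calculation described above (successful steps divided by total steps executed) can be sketched with hypothetical step results:

```python
# Hypothetical step results for one third-party monitor execution.
steps = ["ok", "ok", "ok", "failed", "ok"]

successful = sum(1 for s in steps if s == "ok")

# Test quality rate: successful steps as a percentage of all executed steps.
test_quality_rate = 100.0 * successful / len(steps)
print(test_quality_rate)  # 4 of 5 steps succeeded -> 80.0
```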
.NET garbage collection (# Gen 0)
Number of completed GC runs that collected objects in Gen0 Heap within the given time range, https://dt-url.net/i1038bq
.NET garbage collection (# Gen 1)
Number of completed GC runs that collected objects in Gen1 Heap within the given time range, https://dt-url.net/i1038bq
.NET garbage collection (# Gen 2)
Number of completed GC runs that collected objects in Gen2 Heap within the given time range, https://dt-url.net/i1038bq
.NET % time in GC
Percentage of time spent in garbage collection
.NET % time in JIT
Percentage of time spent in Just-In-Time (JIT) compilation
.NET average number of active threads
Blocks number
Cache capacity
Cache used
Remaining capacity
Total capacity
Used capacity
Max active
Max active (global)
Max total
Max total (global)
Num active
Num active (global)
cluster basicStats diskFetches
cluster count membase
cluster count memcached
cluster samples cmd_get
cluster samples cmd_set
cluster samples curr_items
Custom Device Count
Documents count
Deleted documents
Field data evictions
Field data size
Query cache count
Query cache size
Process group total CPU time during GC suspensions
This metric provides statistics about CPU usage for process groups of garbage-collected technologies. The metric value is the sum of CPU time used during garbage collector suspensions for every process (including its workers) in a process group. It has a "Process Group" dimension.
Process group total CPU time
This metric provides the total CPU time used by a process group. The metric value is the sum of CPU time every process (including its workers) of the process group uses. The result is expressed in microseconds. It can help to identify the most CPU-intensive technologies in the monitored environment. It has a "Process Group" dimension.
Process total CPU time during GC suspensions
This metric provides statistics about CPU usage for garbage-collected processes. The metric value is the sum of CPU time used during garbage collector suspensions for all process workers. It has a "Process" dimension (dt.entity.process_group_instance).
Process total CPU time
This metric provides the CPU time used by a process. The metric value is the sum of CPU time every process worker uses. The result is expressed in microseconds. It has a "Process" dimension (dt.entity.process_group_instance).
Process CPU usage
This metric provides the percentage of CPU usage of a process. The metric value is the sum of the CPU time every process worker uses divided by the total available CPU time, expressed as a percentage. A value of 100% indicates that the process uses all available CPU resources of the host. It has a "Process" dimension (dt.entity.process_group_instance).
z/OS General CPU time
The time spent on the general-purpose central processor (GCP) after process start per minute
Go: 502 responses
The number of 502 responses, which indicate invalid responses produced by an application.
Go: Response latency
The average response time from the application to clients.
Go: 5xx responses
The number of responses that indicate repeatedly crashing apps or response issues from applications.
Go: Total requests
The number of all requests representing the overall traffic flow.
Go: Heap idle size
The amount of memory not assigned to the heap or stack. Idle memory can be returned to the operating system or retained by the Go runtime for later reassignment to the heap or stack.
Go: Heap live size
The amount of memory considered live by the Go garbage collector. This metric accumulates memory retained by the most recent garbage collector run and allocated since then.
JVM loaded classes
The number of classes that are currently loaded in the Java virtual machine, https://dt-url.net/l2c34jw
JVM total number of loaded classes
The total number of classes that have been loaded since the Java virtual machine has started execution, https://dt-url.net/d0y347x
JVM unloaded classes
The total number of classes unloaded since the Java virtual machine has started execution, https://dt-url.net/d7g34bi
Garbage collection total activation count
The total number of collections that have occurred for all pools, https://dt-url.net/oz834vd
Garbage collection total collection time
The approximate accumulated collection elapsed time in milliseconds for all pools, https://dt-url.net/oz834vd
Garbage collection suspension time
Time spent in milliseconds between GC pause starts and GC pause ends, https://dt-url.net/zj434js
Kafka broker - Leader election rate
Kafka broker - Unclean election rate
Kafka controller - Active cluster controllers
Kafka controller - Offline partitions
Kafka broker - Partitions
Kafka broker - Under replicated partitions
Bytes received
Bytes received
Bytes transmitted
Bytes transmitted
Retransmitted packets
Number of retransmitted packets
Packets received
Number of packets received
Packets transmitted
Number of packets transmitted
Retransmission
Percentage of retransmitted packets
Nginx Plus cache free space
Nginx Plus cache hit ratio
Nginx Plus cache hits
Nginx Plus cache misses
Nginx Plus cache used space
Active Nginx Plus server zones
Node.js: Active handles
Average number of active handles in the event loop
Node.js: Event loop tick frequency
Average number of event loop iterations (per 10 seconds interval)
Node.js: Event loop latency
Average latency of expected event completion
Node.js: Work processed latency
Average latency of a work item being enqueued and callback being called
Node.js: Event loop tick duration
Average duration of an event loop iteration (tick)
Node.js: Event loop utilization
Event loop utilization represents the percentage of time the event loop has been active
Background CPU usage
Foreground CPU usage
CPU idle
CPU other processes
Physical read bytes
Physical write bytes
PHP GC collected count
PHP GC collection duration
PHP GC effectiveness
PHP OPCache JIT buffer free
PHP OPCache JIT buffer size
PHP OPCache free memory
Python GC collected items from gen 0
Python GC collected items from gen 1
Python GC collected items from gen 2
Python GC collections number in gen 0
Python GC collections number in gen 1
Python GC collections number in gen 2
cluster channels
cluster connections
cluster consumers
cluster exchanges
cluster ack messages
cluster delivered and get messages
Cache hit ratio
Cache hits for passes
Cache hits
Cache misses
Cache passes
Backend connections
Dropped connections
Number of dropped connections
Handled connections
Number of successfully finished and closed requests
Reading connections
Number of connections which are receiving data from the client
Socket backlog waiting time
Average time needed to queue and handle incoming connections
Waiting connections
Number of connections with no active requests
Writing connections
Number of connections which are sending data to the client
Free pool size
Percent used
Pool size
In use time
Wait time
Number of waiting threads
z/OS Consumed Service Units per minute
The calculated number of consumed Service Units per minute
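All of the metrics listed above can be enumerated with the GET metrics API call and the recommended query parameters mentioned at the top of this page. A sketch of building such a request URL; the environment ID is a placeholder, and only the URL construction is shown (the actual request would also need an Api-Token authorization header):

```python
from urllib.parse import urlencode

# Placeholder tenant; substitute your own environment ID.
BASE = "https://{your-environment-id}.live.dynatrace.com/api/v2/metrics"

params = {
    "pageSize": 500,  # largest possible number of metrics per response
    "fields": "displayName,unit,aggregationTypes,dduBillable",
    "metricSelector": "builtin:*",  # all built-in metrics
}
url = BASE + "?" + urlencode(params)
print(url)
# To list extension or calculated metrics instead, set
# metricSelector to "ext:*" or "calc:*".
```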