Built-in classic metrics

Each Dynatrace-supported technology offers multiple "built-in" metrics. Built-in metrics are included in the product out of the box, in some cases as part of built-in extensions.

Metrics that are based on OneAgent or ActiveGate extensions (prefix ext:) and calculated metrics (prefix calc:) are custom metrics, not built-in metrics; DDU consumption for these metrics can vary widely depending on how you use Dynatrace.

The ext: prefix is used by metrics from OneAgent extensions and ActiveGate extensions, and also by classic metrics for AWS integration.

Despite the naming similarities, AWS integration metrics are not based on extensions.

To view all the metrics available in your environment, use the GET metrics API call (see the example after this list). We recommend the following query parameters:

  • pageSize=500—to obtain the largest possible number of metrics in one response.
  • fields=displayName,unit,aggregationTypes,dduBillable—to obtain the same set of fields as you see in these tables.
  • Depending on which metrics you want to query, one of the following values for the metricSelector parameter:
    • metricSelector=ext:*—to obtain all metrics coming from extensions.
    • metricSelector=calc:*—to obtain all calculated metrics.
    • Omit the parameter to obtain all the metrics of your environment.
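For example, here is a minimal sketch of such a request using Python and the requests library. The environment URL and API token are placeholders you need to replace with your own values, and the token needs the metrics.read scope:

  import requests

  # Placeholders: replace with your own environment URL and API token.
  BASE_URL = "https://{your-environment-id}.live.dynatrace.com/api/v2/metrics"
  API_TOKEN = "{your-API-token}"  # requires the metrics.read scope

  params = {
      "pageSize": 500,
      "fields": "displayName,unit,aggregationTypes,dduBillable",
      "metricSelector": "ext:*",  # use "calc:*" for calculated metrics, or omit to list all metrics
  }

  response = requests.get(
      BASE_URL,
      params=params,
      headers={"Authorization": f"Api-Token {API_TOKEN}"},
  )
  response.raise_for_status()

  for metric in response.json()["metrics"]:
      print(metric["metricId"], metric.get("unit"), metric.get("dduBillable"))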

The sections below describe inconsistencies or limitations identified for Dynatrace built-in metrics.

The Other applications metrics section contains metrics captured for mobile and custom applications. These metrics, which start with builtin:apps.other, are captured without any indication of whether the application is a mobile or a custom application. However, the "billing" application metrics, which start with builtin:billing.apps, are split by application type:

  • Mobile apps:

    • builtin:billing.apps.mobile.sessionsWithoutReplayByApplication
    • builtin:billing.apps.mobile.sessionsWithReplayByApplication
    • builtin:billing.apps.mobile.userActionPropertiesByMobileApplication
  • Custom apps:

    • builtin:billing.apps.custom.sessionsWithoutReplayByApplication
    • builtin:billing.apps.custom.userActionPropertiesByDeviceApplication

The following "billing" metrics for session count are actually the sum of billed and unbilled user sessions.

  • builtin:billing.apps.custom.sessionsWithoutReplayByApplication
  • builtin:billing.apps.mobile.sessionsWithReplayByApplication
  • builtin:billing.apps.mobile.sessionsWithoutReplayByApplication
  • builtin:billing.apps.web.sessionsWithReplayByApplication
  • builtin:billing.apps.web.sessionsWithoutReplayByApplication

If you want to get only the number of billed sessions, set the Type filter to Billed.
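Expressed as a metric selector, the same filter looks roughly like the following sketch; the dimension key "Type" is an assumption derived from the filter name above and may differ in your environment:

  builtin:billing.apps.web.sessionsWithoutReplayByApplication:filter(eq("Type","Billed"))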

Different measurement units are used for similar request duration metrics for mobile and custom apps.

builtin:apps.other.keyUserActions.requestDuration.os is measured in microseconds, while other request duration metrics (builtin:apps.other.requestTimes.osAndVersion and builtin:apps.other.requestTimes.osAndProvider) are measured in milliseconds.
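If you need to compare these metrics in a single unit, one option is to divide the microsecond metric by 1,000 in a metric expression, for example (a sketch, assuming your query interface supports constant arithmetic in metric expressions):

  (builtin:apps.other.keyUserActions.requestDuration.os)/(1000)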

Custom metrics are defined or installed by the user, while built-in metrics are part of the product by default. Certain built-in metrics are disabled by default and, if turned on, consume DDUs. These metrics cover a wide range of supported technologies, including Apache Tomcat, NGINX, Couchbase, RabbitMQ, Cassandra, Jetty, and many others.

A custom metric is a metric with a user-provided metric identifier and unit of measure. The semantics of custom metrics are defined by you and aren't included in the default OneAgent installation. Custom metrics are sent to Dynatrace through various interfaces. Once a custom metric is defined, it can be reported for multiple monitored components, and each component's custom metric results in a separate timeseries.

For example, if you define a new custom metric called Files count that counts the newly created files within a directory, this metric can be collected for a single host or for several individual hosts. Collecting the same metric for two individual hosts results in two timeseries of the same custom metric type, as shown in the example below:

Custom metrics

For the purposes of calculating monitoring consumption, collecting the same custom metric for two hosts counts as two separate custom metrics.
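To illustrate the Files count example above, here is a minimal sketch of reporting such a metric for two hosts through the Metrics API v2 ingest endpoint. The metric key custom.files.count, the host dimension, and its values are assumptions chosen for this example; the API token needs the metrics.ingest scope:

  import requests

  # Placeholders: replace with your own environment URL and API token.
  BASE_URL = "https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest"
  API_TOKEN = "{your-API-token}"  # requires the metrics.ingest scope

  # Two lines of the metric line protocol: the same metric key reported with two
  # different host dimension values, resulting in two separate timeseries
  # (and, for consumption purposes, two custom metrics).
  payload = "\n".join([
      "custom.files.count,host=host-a 12",
      "custom.files.count,host=host-b 7",
  ])

  response = requests.post(
      BASE_URL,
      data=payload,
      headers={
          "Authorization": f"Api-Token {API_TOKEN}",
          "Content-Type": "text/plain; charset=utf-8",
      },
  )
  response.raise_for_status()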

Applications

Custom

Metric key
Name and description
Unit
Aggregations
builtin:apps.custom.reportedErrorCount

Reported error count (by OS, app version) [custom]

The number of all reported errors.

Count
autovalue
builtin:apps.custom.sessionCount

Session count (by OS, app version) [custom]

The number of captured user sessions.

Count
autovalue

Mobile

Metric key
Name and description
Unit
Aggregations
builtin:apps.mobile.sessionCount

Session count (by OS, app version, crash replay feature status) [mobile]

The number of captured user sessions.

Count
autovalue
builtin:apps.mobile.sessionCount.sessionReplayStatus

Session count (by OS, app version, full replay feature status) [mobile]

The number of captured user sessions.

Count
autovalue
builtin:apps.mobile.reportedErrorCount

Reported error count (by OS, app version) [mobile]

The number of all reported errors.

Count
autovalue

Web applications

Metric key
Name and description
Unit
Aggregations
builtin:apps.web.action.affectedUas

User action rate - affected by JavaScript errors (by key user action, user type) [web]

The percentage of key user actions with detected JavaScript errors.

Percent (%)
autovalue
builtin:apps.web.action.apdex

Apdex (by key user action) [web]

The average Apdex rating for key user actions.

autoavg
builtin:apps.web.action.count.custom.browser

Action count - custom action (by key user action, browser) [web]

The number of custom actions that are marked as key user actions.

Count
autovalue
builtin:apps.web.action.count.load.browser

Action count - load action (by key user action, browser) [web]

The number of load actions that are marked as key user actions.

Count
autovalue
builtin:apps.web.action.count.xhr.browser

Action count - XHR action (by key user action, browser) [web]

The number of XHR actions that are marked as key user actions.

Count
autovalue
builtin:apps.web.action.cumulativeLayoutShift.load.userType

Cumulative Layout Shift - load action (by key user action, user type) [web]

The score measuring the unexpected shifting of visible webpage elements. Calculated for load actions that are marked as key user actions.

autoavgcountmaxmedianminpercentilesum

Mobile and custom apps

Metric key
Name and description
Unit
Aggregations
builtin:apps.other.apdex.osAndGeo

Apdex (by OS, geolocation) [mobile, custom]

The Apdex rating for all captured user actions.

autovalue
builtin:apps.other.apdex.osAndVersion

Apdex (by OS, app version) [mobile, custom]

The Apdex rating for all captured user actions.

autovalue
builtin:apps.other.crashAffectedUsers.os

User count - estimated users affected by crashes (by OS) [mobile, custom]

The estimated number of unique users affected by a crash. For this high cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.

Count
autovalue
builtin:apps.other.crashAffectedUsers.osAndVersion-std

User count - estimated users affected by crashes (by OS, app version) [mobile, custom]

The estimated number of unique users affected by a crash. For this high cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.

Count
autovalue
builtin:apps.other.crashAffectedUsersRate.os

User rate - estimated users affected by crashes (by OS) [mobile, custom]

The estimated percentage of unique users affected by a crash. For this high cardinality metric, the HyperLogLog algorithm is used to approximate the number of users.

Percent (%)
autovalue
builtin:apps.other.crashCount.osAndGeo

Crash count (by OS, geolocation) [mobile, custom]

The number of detected crashes.

Count
autovalue

Billing

Applications

Metric key
Name and description
Unit
Aggregations
builtin:billing.apps.custom.sessionsWithoutReplayByApplication

Session count - billed and unbilled [custom]

The number of billed and unbilled user sessions. To get only the number of billed sessions, set the "Type" filter to "Billed".

Count
autovalue
builtin:billing.apps.custom.userActionPropertiesByDeviceApplication

Total user action and session properties

The number of billed user action and user session properties.

Count
autovalue
builtin:billing.apps.mobile.sessionsWithReplayByApplication

Session count - billed and unbilled - with Session Replay [mobile]

The number of billed and unbilled user sessions that include Session Replay data. To get only the number of billed sessions, set the "Type" filter to "Billed".

Count
autovalue
builtin:billing.apps.mobile.sessionsWithoutReplayByApplication

Session count - billed and unbilled [mobile]

The total number of billed and unbilled user sessions (with and without Session Replay data). To get only the number of billed sessions, set the "Type" filter to "Billed".

Count
autovalue
builtin:billing.apps.mobile.userActionPropertiesByMobileApplication

Total user action and session properties

The number of billed user action and user session properties.

Count
autovalue
builtin:billing.apps.web.sessionsWithReplayByApplication

Session count - billed and unbilled - with Session Replay [web]

The number of billed and unbilled user sessions that include Session Replay data. To get only the number of billed sessions, set the "Type" filter to "Billed".

Count
autovalue

Custom events classic

Metric key
Name and description
Unit
Aggregations
builtin:billing.custom_events_classic.usage

(DPS) Total Custom Events Classic billing usage

The number of custom events ingested aggregated over all monitored entities. Custom events include events sent to Dynatrace via the Events API or events created by a log event extraction rule. Use this total metric to query longer timeframes without losing precision or performance.

Count
autovalue
builtin:billing.custom_events_classic.usage_by_entity

(DPS) Custom Events Classic billing usage by monitored entity

The number of custom events ingested split by monitored entity. Custom events include events sent to Dynatrace via the Events API or events created by a log event extraction rule. For details on the events billed, refer to the usage_by_event_info metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue
builtin:billing.custom_events_classic.usage_by_event_info

(DPS) Custom Events Classic billing usage by event info

The number of custom events ingested split by event info. Custom events include events sent to Dynatrace via the Events API or events created by a log event extraction rule. The info contains the context of the event plus the configuration ID. For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue

Custom metrics classic

Metric key
Name and description
Unit
Aggregations
builtin:billing.custom_metrics_classic.raw.usage_by_metric_key

(DPS) Recorded metric data points per metric key

The number of reported metric data points split by metric key. This metric does not account for included metric data points available to your environment.

Count
autovalue
builtin:billing.custom_metrics_classic.usage

(DPS) Total billed metric data points

The total number of metric data points after deducting the included metric data points. This is the rate-card value used for billing. Use this total metric to query longer timeframes without losing precision or performance.

Count
autovalue
builtin:billing.custom_metrics_classic.usage.foundation_and_discovery

(DPS) Total metric data points billable for Foundation & Discovery hosts

The number of metric data points billable for Foundation & Discovery hosts.

Count
autovalue
builtin:billing.custom_metrics_classic.usage.fullstack_hosts

(DPS) Total metric data points billed for Full-Stack hosts

The number of metric data points billed for Full-Stack hosts. To view the unadjusted usage per host, use builtin:billing.full_stack_monitoring.metric_data_points.ingested_by_host . This trailing metric is reported at 15-minute intervals with up to a 15-minute delay.

Count
autovalue
builtin:billing.custom_metrics_classic.usage.infrastructure_hosts

(DPS) Total metric data points billed for Infrastructure-monitored hosts

The number of metric data points billed for Infrastructure-monitored hosts. To view the unadjusted usage per host, use builtin:billing.infrastructure_monitoring.metric_data_points.ingested_by_host . This trailing metric is reported at 15-minute intervals with up to a 15-minute delay.

Count
autovalue
builtin:billing.custom_metrics_classic.usage.other

(DPS) Total metric data points billed by other entities

The number of metric data points billed that cannot be assigned to a host. The values reported in this metric are not eligible for included metric deduction and will be billed as is. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the monitored entities that consume this usage, use the other_by_entity metric.

Count
autovalue

Custom traces classic

Metric key
Name and description
Unit
Aggregations
builtin:billing.custom_traces_classic.usage

(DPS) Total Custom Traces Classic billing usage

The number of spans ingested aggregated over all monitored entities. A span is a single operation within a distributed trace, ingested into Dynatrace. Use this total metric to query longer timeframes without losing precision or performance.

Count
autovalue
builtin:billing.custom_traces_classic.usage_by_entity

(DPS) Custom Traces Classic billing usage by monitored entity

The number of spans ingested split by monitored entity. A span is a single operation within a distributed trace, ingested into Dynatrace. For details on span types, refer to the usage_by_span_type metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue
builtin:billing.custom_traces_classic.usage_by_span_type

(DPS) Custom Traces Classic billing usage by span type

The number of spans ingested split by span type. A span is a single operation within a distributed trace, ingested into Dynatrace. Span kinds can be CLIENT, SERVER, PRODUCER, CONSUMER, or INTERNAL. For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue

DDU

Metric key
Name and description
Unit
Aggregations
builtin:billing.ddu.events.byDescription

DDU events consumption by event info

License consumption of Davis data units by events pool split by event info

autovalue
builtin:billing.ddu.events.byEntity

DDU events consumption by monitored entity

License consumption of Davis data units by events pool split by monitored entity

autovalue
builtin:billing.ddu.events.total

Total DDU events consumption

Sum of license consumption of Davis data units aggregated over all monitored entities for the events pool

autovalue
builtin:billing.ddu.log.byDescription

DDU log consumption by log path

License consumption of Davis data units by log pool split by log path

autovalue
builtin:billing.ddu.log.byEntity

DDU log consumption by monitored entity

License consumption of Davis data units by log pool split by monitored entity

autovalue
builtin:billing.ddu.log.total

Total DDU log consumption

Sum of license consumption of Davis data units aggregated over all logs for the log pool

autovalue

Events

Metric key
Name and description
Unit
Aggregations
builtin:billing.events.business_events.ingest.usage

[Deprecated] (DPS) Business events usage - Ingest & Process

Business events Ingest & Process usage, tracked as bytes ingested within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. [Deprecated] This metric is replaced by billing usage events.

Byte
autovalue
builtin:billing.events.business_events.query.usage

[Deprecated] (DPS) Business events usage - Query

Business events Query usage, tracked as bytes read within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. [Deprecated] This metric is replaced by billing usage events.

Byte
autovalue
builtin:billing.events.business_events.retain.usage

[Deprecated] (DPS) Business events usage - Retain

Business events Retain usage, tracked as total storage used within the hour, in bytes. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. [Deprecated] This metric is replaced by billing usage events.

Byte
autoavgmaxmin

Foundation and discovery

Metric key
Name and description
Unit
Aggregations
builtin:billing.foundation_and_discovery.metric_data_points.ingested

(DPS) Ingested metric data points for Foundation & Discovery

The number of metric data points aggregated over all Foundation & Discovery hosts.

Count
autovalue
builtin:billing.foundation_and_discovery.metric_data_points.ingested_by_host

(DPS) Ingested metric data points for Foundation & Discovery per host

The number of metric data points split by Foundation & Discovery hosts.

Count
autovalue
builtin:billing.foundation_and_discovery.usage

(DPS) Foundation & Discovery billing usage

The total number of host-hours being monitored by Foundation & Discovery, counted in 15 min intervals.

Count
autovalue
builtin:billing.foundation_and_discovery.usage_per_host

(DPS) Foundation & Discovery billing usage per host

The host-hours being monitored by Foundation & Discovery, counted in 15 min intervals.

Count
autovalue

Full stack monitoring

Metric key
Name and description
Unit
Aggregations
builtin:billing.full_stack_monitoring.metric_data_points.included

(DPS) Available included metric data points for Full-Stack hosts

The total number of included metric data points that can be deducted from the metric data points reported by Full-Stack hosts. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of applied included metric data points, use builtin:billing.full_stack_monitoring.metric_data_points.included_used . If the difference between this metric and the applied metrics is greater than 0, then more metrics can be ingested using Full-Stack Monitoring without incurring additional costs.

Count
autovalue
builtin:billing.full_stack_monitoring.metric_data_points.included_used

(DPS) Used included metric data points for Full-Stack hosts

The number of consumed included metric data points per host monitored with Full-Stack Monitoring. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of potentially available included metrics, use builtin:billing.full_stack_monitoring.metric_data_points.included . If the difference between the available metrics and this metric is greater than zero, more metrics could be ingested on Full-Stack hosts without incurring additional costs.

Count
autovalue
builtin:billing.full_stack_monitoring.metric_data_points.ingested

(DPS) Total metric data points reported by Full-Stack hosts

The number of metric data points aggregated over all Full-Stack hosts. The values reported in this metric are eligible for included-metric-data-point deduction. Use this total metric to query longer timeframes without losing precision or performance. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view usage on a per-host basis, use builtin:billing.full_stack_monitoring.metric_data_points.ingested_by_host .

Count
autovalue
builtin:billing.full_stack_monitoring.metric_data_points.ingested_by_host

(DPS) Metric data points reported and split by Full-Stack hosts

The number of metric data points split by Full-Stack hosts. The values reported in this metric are eligible for included-metric-data-point deduction. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. The pool of available included metrics for a "15-minute interval" is visible via builtin:billing.full_stack_monitoring.metric_data_points.included . To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue
builtin:billing.full_stack_monitoring.usage

(DPS) Full-Stack Monitoring billing usage

The total GiB memory of hosts being monitored in full-stack mode, counted in 15 min intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the hosts causing the usage, refer to the usage_per_host metric. For details on the containers causing the usage, refer to the usage_per_container metric.

GibiByte
autovalue
builtin:billing.full_stack_monitoring.usage_per_container

(DPS) Full-stack usage by container type

The total GiB memory of containers being monitored in full-stack mode, counted in 15 min intervals.

GibiByte
autovalue

Infrastructure monitoring

Metric key
Name and description
Unit
Aggregations
builtin:billing.infrastructure_monitoring.metric_data_points.included

(DPS) Available included metric data points for Infrastructure-monitored hosts

The total number of included metric data points that can be deducted from the metric data points reported by Infrastructure-monitored hosts. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of applied included metric data points, use builtin:billing.infrastructure_monitoring.metric_data_points.included_used . If the difference between this metric and the applied metrics is greater than zero, then that means that more metrics could be ingested on Infrastructure-monitored hosts without incurring additional costs.

Count
autovalue
builtin:billing.infrastructure_monitoring.metric_data_points.included_used

(DPS) Used included metric data points for Infrastructure-monitored hosts

The number of consumed included metric data points for Infrastructure-monitored hosts. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view the number of potentially available included metrics, use builtin:billing.infrastructure_monitoring.metric_data_points.included . If the difference between the available metrics and this metric is greater than zero, more metrics could be ingested on Infrastructure-monitored hosts without incurring additional costs.

Count
autovalue
builtin:billing.infrastructure_monitoring.metric_data_points.ingested

(DPS) Total metric data points reported by Infrastructure-monitored hosts

The number of metric data points aggregated over all Infrastructure-monitored hosts. The values reported in this metric are eligible for included-metric-data-point deduction. Use this total metric to query longer timeframes without losing precision or performance. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. To view usage on a per-host basis, use builtin:billing.infrastructure_monitoring.metric_data_points.ingested_by_host .

Count
autovalue
builtin:billing.infrastructure_monitoring.metric_data_points.ingested_by_host

(DPS) Metric data points reported and split by Infrastructure-monitored hosts

The number of metric data points split by Infrastructure-monitored hosts. The values reported in this metric are eligible for included-metric-data-point deduction. This trailing metric is reported at 15-minute intervals with up to a 15-minute delay. The pool of available included metrics for a "15-minute interval" is visible via builtin:billing.infrastructure_monitoring.metric_data_points.included . To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue
builtin:billing.infrastructure_monitoring.usage

(DPS) Infrastructure Monitoring billing usage

The total number of host-hours being monitored in infrastructure-only mode, counted in 15 min intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the hosts causing the usage, refer to the usage_per_host metric.

Count
autovalue
builtin:billing.infrastructure_monitoring.usage_per_host

(DPS) Infrastructure Monitoring billing usage per host

The host-hours being monitored in infrastructure-only mode, counted in 15 min intervals. A host monitored for the whole hour has 4 data points with a value of 0.25, regardless of the memory size. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue

Kubernetes monitoring

Metric key
Name and description
Unit
Aggregations
builtin:billing.kubernetes_monitoring.usage

(DPS) Kubernetes Platform Monitoring billing usage

The total number of monitored Kubernetes pods per hour, split by cluster and namespace and counted in 15 min intervals. A pod monitored for the whole hour has 4 data points with a value of 0.25.

Count
autovalue

Log

Metric key
Name and description
Unit
Aggregations
builtin:billing.log.ingest.usage

(DPS) Log Management and Analytics usage - Ingest & Process

Log Management and Analytics Ingest & Process usage, tracked as bytes ingested within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay.

Byte
autovalue
builtin:billing.log.query.usage

(DPS) Log Management and Analytics usage - Query

Log Management and Analytics Query usage, tracked as bytes read within the hour. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay.

Byte
autovalue
builtin:billing.log.retain.usage

(DPS) Log Management and Analytics usage - Retain

Log Management and Analytics Retain usage, tracked as total storage used within the hour, in bytes. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay.

Byte
autoavgmaxmin

Log monitoring classic

Metric key
Name and description
Unit
Aggregations
builtin:billing.log_monitoring_classic.usage

(DPS) Total Log Monitoring Classic billing usage

The number of log records ingested aggregated over all monitored entities. A log record is recognized by either a timestamp or a JSON object. Use this total metric to query longer timeframes without losing precision or performance.

Count
autovalue
builtin:billing.log_monitoring_classic.usage_by_entity

(DPS) Log Monitoring Classic billing usage by monitored entity

The number of log records ingested split by monitored entity. A log record is recognized by either a timestamp or a JSON object. For details on the log path, refer to the usage_by_log_path metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue
builtin:billing.log_monitoring_classic.usage_by_log_path

(DPS) Log Monitoring Classic billing usage by log path

The number of log records ingested split by log path. A log record is recognized by either a timestamp or a JSON object. For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue

Mainframe monitoring

Metric key
Name and description
Unit
Aggregations
builtin:billing.mainframe_monitoring.usage

(DPS) Mainframe Monitoring billing usage

The total number of MSU-hours being monitored, counted in 15 min intervals.

MSU
autovalue

Real user monitoring

Metric key
Name and description
Unit
Aggregations
builtin:billing.real_user_monitoring.mobile.property.usage

(DPS) Total Real-User Monitoring Property (mobile) billing usage

(Mobile) User action and session properties count. For details on how usage is calculated, refer to the documentation or builtin:billing.real_user_monitoring.mobile.property.usage_by_application . Use this total metric to query longer timeframes without losing precision or performance.

Count
autovalue
builtin:billing.real_user_monitoring.mobile.property.usage_by_application

(DPS) Real-User Monitoring Property (mobile) billing usage by application

(Mobile) User action and session properties count by application. The billed value is calculated based on the number of sessions reported in builtin:billing.real_user_monitoring.mobile.session.usage_by_app + builtin:billing.real_user_monitoring.mobile.session_with_replay.usage_by_app, plus the number of configured properties that exceed the included number of properties (free of charge) offered for a given application. Data points are only written for billed sessions. If the value is 0, you have available metric data points. This trailing metric is reported hourly for the previous hour. Metric values are reported with up to a one-hour delay. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue
builtin:billing.real_user_monitoring.mobile.session.usage

(DPS) Total Real-User Monitoring (mobile) billing usage

(Mobile) Session count without Session Replay. The value billed for each session is the session duration measured in hours. So a 3-hour session results in a single data-point value of 3. If two sessions end during the same minute, then the values are added together. Use this total metric to query longer timeframes without losing precision or performance. To view the applications that consume this usage, refer to the usage_by_app metric.

Count
autovalue
builtin:billing.real_user_monitoring.mobile.session.usage_by_app

(DPS) Real-User Monitoring (mobile) billing usage by application

(Mobile) Session count without Session Replay split by application. The value billed for each session is the session duration measured in hours. So a 3-hour session results in a single data-point value of 3. If two sessions of the same application end during the same minute, then the values are added together. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue
builtin:billing.real_user_monitoring.mobile.session_with_replay.usage

(DPS) Total Real-User Monitoring (mobile) with Session Replay billing usage

(Mobile) Session count with Session Replay. The value billed for each session is the session duration measured in hours. So a 3-hour session results in a single data-point value of 3. If two sessions end during the same minute, then the values are added together. Use this total metric to query longer timeframes without losing precision or performance. To view the applications that consume this usage, refer to the usage_by_app metric.

Count
autovalue
builtin:billing.real_user_monitoring.mobile.session_with_replay.usage_by_app

(DPS) Real-User Monitoring (mobile) with Session Replay billing usage by application

(Mobile) Session count with Session Replay split by application. The value billed for each session is the session duration measured in hours. So a 3-hour session results in a single data-point value of 3. If two sessions of the same application end during the same minute, then the values are added together. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue

Runtime application protection

Metric key
Name and description
Unit
Aggregations
builtin:billing.runtime_application_protection.usage

(DPS) Runtime Application Protection billing usage

Total GiB-memory of hosts protected with Runtime Application Protection (Application Security), counted at 15-minute intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the monitored hosts, refer to the usage_per_host metric.

GibiByte
autovalue
builtin:billing.runtime_application_protection.usage_per_host

(DPS) Runtime Application Protection billing usage per host

GiB-memory per host protected with Runtime Application Protection (Application Security), counted at 15-minute intervals. For example, a host with 8 GiB of RAM monitored for 1 hour has 4 data points with a value of 2. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

GibiByte
autovalue

Runtime vulnerability analytics

Metric key
Name and description
Unit
Aggregations
builtin:billing.runtime_vulnerability_analytics.usage

(DPS) Runtime Vulnerability Analytics billing usage

Total GiB-memory of hosts protected with Runtime Vulnerability Analytics (Application Security), counted at 15-minute intervals. Use this total metric to query longer timeframes without losing precision or performance. For details on the monitored hosts, refer to the usage_per_host metric.

GibiByte
autovalue
builtin:billing.runtime_vulnerability_analytics.usage_per_host

(DPS) Runtime Vulnerability Analytics billing usage per host

GiB-memory per host protected with Runtime Vulnerability Analytics (Application Security), counted at 15-minute intervals. For example, a host with 8 GiB of RAM monitored for 1 hour has 4 data points with a value of 2. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

GibiByte
autovalue

Serverless functions classic

Metric key
Name and description
Unit
Aggregations
builtin:billing.serverless_functions_classic.usage

(DPS) Total Serverless Functions Classic billing usage

The number of invocations of the serverless function aggregated over all monitored entities. The term "function invocations" is equivalent to "function requests" or "function executions". Use this total metric to query longer timeframes without losing precision or performance.

Count
autovalue
builtin:billing.serverless_functions_classic.usage_by_entity

(DPS) Serverless Functions Classic billing usage by monitored entity

The number of invocations of the serverless function split by monitored entity. The term "function invocations" is equivalent to "function requests" or "function executions". For details on which functions are invoked, refer to the usage_by_function metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue
builtin:billing.serverless_functions_classic.usage_by_function

(DPS) Serverless Functions Classic billing usage by function

The number of invocations of the serverless function split by function. The term "function invocations" is equivalent to "function requests" or "function executions". For details on the related monitored entities, refer to the usage_by_entity metric. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue

Synthetic

Metric key
Name and description
Unit
Aggregations
builtin:billing.synthetic.actions

Actions

The number of billed actions consumed by browser monitors.

Count
autovalue
builtin:billing.synthetic.actions.usage

(DPS) Total Browser Monitor or Clickpath billing usage

The number of synthetic actions that trigger a web request; this includes page loads, navigation events, and actions that trigger an XHR or Fetch request. Scroll-downs, keystrokes, or clicks that don't trigger web requests aren't counted as such actions. Use this total metric to query longer timeframes without losing precision or performance.

Count
autovalue
builtin:billing.synthetic.actions.usage_by_browser_monitor

(DPS) Browser Monitor or Clickpath billing usage per synthetic browser monitor

The number of synthetic actions that trigger a web request; this includes page loads, navigation events, and actions that trigger an XHR or Fetch request. Scroll-downs, keystrokes, or clicks that don't trigger web requests aren't counted as such actions. Actions are split by the Synthetic Browser Monitors that caused them. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue
builtin:billing.synthetic.external

Third-party results

The number of billed results consumed by third-party monitors.

Count
autovalue
builtin:billing.synthetic.external.usage

(DPS) Total Third-Party Synthetic API Ingestion billing usage

The number of synthetic test results pushed into Dynatrace with Synthetic 3rd party API. Use this total metric to query longer timeframes without losing precision or performance.

Count
autovalue
builtin:billing.synthetic.external.usage_by_third_party_monitor

(DPS) Third-Party Synthetic API Ingestion billing usage per external browser monitor

The number of synthetic test results pushed into Dynatrace with the Synthetic 3rd party API. The ingestions are split by the external Synthetic Browser Monitors for which the results were ingested. To improve performance and avoid exceeding query limits when working with longer timeframes, use the total metric.

Count
autovalue

Cloud

AWS

Metric key
Name and description
Unit
Aggregations
builtin:cloud.aws.az.running

Number of running EC2 instances (AZ)

Count
autoavgmaxmin

Azure

Metric key
Name and description
Unit
Aggregations
builtin:cloud.azure.region.vms.initializing

Number of starting VMs in region

Count
autoavgmaxmin
builtin:cloud.azure.region.vms.running

Number of active VMs in region

Count
autoavgmaxmin
builtin:cloud.azure.region.vms.stopped

Number of stopped VMs in region

Count
autoavgmaxmin
builtin:cloud.azure.vmScaleSet.vms.initializing

Number of starting VMs in scale set

Count
autoavgmaxmin
builtin:cloud.azure.vmScaleSet.vms.running

Number of active VMs in scale set

Count
autoavgmaxmin
builtin:cloud.azure.vmScaleSet.vms.stopped

Number of stopped VMs in scale set

Count
autoavgmaxmin

Cloud Foundry

Metric key
Name and description
Unit
Aggregations
builtin:cloud.cloudfoundry.auctioneer.fetchDuration

CF: Time to fetch cell states

The time that the auctioneer took to fetch state from all the cells when running its auction.

Nanosecond
autoavgmaxmin
builtin:cloud.cloudfoundry.auctioneer.lprFailed

CF: App instance placement failures

The number of application instances that the auctioneer failed to place on Diego cells.

Count
autovalue
builtin:cloud.cloudfoundry.auctioneer.lprStarted

CF: App instance starts

The number of application instances that the auctioneer successfully placed on Diego cells.

Count
autovalue
builtin:cloud.cloudfoundry.auctioneer.taskFailed

CF: Task placement failures

The number of tasks that the auctioneer failed to place on Diego cells.

Count
autovalue
builtin:cloud.cloudfoundry.http.badGateways

CF: 502 responses

The number of responses that indicate invalid service responses produced by an application.

Count
autovalue
builtin:cloud.cloudfoundry.http.latency

CF: Response latency

The average response time from the application to clients.

Millisecond
autoavgmaxmin

Openstack

Metric key
Name and description
Unit
Aggregations
builtin:cloud.openstack.vm.cpu.usage

CPU usage

Percent (%)
autoavgmaxmin
builtin:cloud.openstack.vm.disk.allocation

Disk allocation

Byte
autoavgmaxmin
builtin:cloud.openstack.vm.disk.capacity

Disk capacity

Byte
autoavgmaxmin
builtin:cloud.openstack.vm.memory.resident

Memory resident

Byte
autoavgmaxmin
builtin:cloud.openstack.vm.memory.usage

Memory usage

Byte
autoavgmaxmin
builtin:cloud.openstack.vm.net.rx

Network incoming bytes rate

Byte/second
autoavgmaxmin

VMware

Metric key
Name and description
Unit
Aggregations
builtin:cloud.vmware.hypervisor.cpu.usage

Host CPU usage %

Percent (%)
autoavgmaxmin
builtin:cloud.vmware.hypervisor.disk.usage

Host disk usage rate

kB/s
autoavgmaxmin
builtin:cloud.vmware.hypervisor.hostdisk.commandsAborted

Host disk commands aborted

Count
autovalue
builtin:cloud.vmware.hypervisor.hostdisk.queueLatency

Host disk queue latency

Millisecond
autoavgmaxmin
builtin:cloud.vmware.hypervisor.hostdisk.rIops

Host disk read IOPS

Per second
autoavgmaxmin
builtin:cloud.vmware.hypervisor.hostdisk.readLatency

Host disk read latency

Millisecond
autoavgmaxmin

Containers

CPU

Metric key
Name and description
Unit
Aggregations
builtin:containers.cpu.limit

Containers: CPU limit, mCores

CPU resource limit per container in millicores.

Millicores
autoavgmaxmin
builtin:containers.cpu.logicalCores

Containers: CPU logical cores

Number of logical CPU cores of the host.

Cores
autoavgmaxmin
builtin:containers.cpu.shares

Containers: CPU shares

Number of CPU shares allocated per container.

Count
autoavgmaxmin
builtin:containers.cpu.throttledMilliCores

Containers: CPU throttling, mCores

CPU throttling per container in millicores.

Millicores
autoavgmaxmin
builtin:containers.cpu.throttledTime

Containers: CPU throttled time, ns/min

Total amount of time a container has been throttled, in nanoseconds per minute.

Nanosecond/minute
autoavgmaxmin
builtin:containers.cpu.usageMilliCores

Containers: CPU usage, mCores

CPU usage per container in millicores

Millicores
autoavgmaxmin

Memory

Metric key
Name and description
Unit
Aggregations
builtin:containers.memory.cacheBytes

Containers: Memory cache, bytes

Page cache memory per container in bytes.

Byte
autoavgmaxmin
builtin:containers.memory.limitBytes

Containers: Memory limit, bytes

Memory limit per container in bytes. If no limit is set, this is an empty value.

Byte
autoavgmaxmin
builtin:containers.memory.limitPercent

Containers: Memory limit, % of physical memory

Percent memory limit per container relative to total physical memory. If no limit is set, this is an empty value.

Percent (%)
autoavg
builtin:containers.memory.outOfMemoryKills

Containers: Memory - out of memory kills

Number of out of memory kills for a container.

Count
autovalue
builtin:containers.memory.physicalTotalBytes

Containers: Memory - total physical memory, bytes

Total physical memory on the host in bytes.

Byte
autoavgmaxmin
builtin:containers.memory.residentSetBytes

Containers: Memory usage, bytes

Resident set size (Linux) or private working set size (Windows) per container in bytes.

Byte
autoavgmaxmin

Other containers metrics

Metric key
Name and description
Unit
Aggregations
builtin:containers.bytes_rx2

Container bytes received

Byte/second
autoavgcountmaxminsum
builtin:containers.bytes_tx2

Container bytes transmitted

Byte/second
autoavgcountmaxminsum
builtin:containers.cpu_usage2

Container cpu usage

Percent (%)
autoavgcountmaxminsum
builtin:containers.devicemapper_data_space_available

Devicemapper data space available

Byte
autoavgcountmaxminsum
builtin:containers.devicemapper_data_space_used

Devicemapper data space used

Byte
autoavgcountmaxminsum
builtin:containers.devicemapper_metadata_space_available

Devicemapper meta-data space available

Byte
autoavgcountmaxminsum

Dashboards

Other dashboards metrics

Metric key
Name and description
Unit
Aggregations
builtin:dashboards.viewCount

Dashboard view count

Count
autovalue

Infrastructure

Availability

Metric key
Name and description
Unit
Aggregations
builtin:host.availability.state

Host availability

Host availability state metric reported in 1 minute intervals

Count
autovalue

CPU

Metric key
Name and description
Unit
Aggregations
builtin:host.cpu.gcpu.usage

z/OS General CPU usage

The percent of the general-purpose central processor (GCP) used

Percent (%)
autoavgmaxmin
builtin:host.cpu.msu.avg

z/OS Rolling 4 hour MSU average

The 4h average of consumed million service units on this LPAR

MSU
autoavgmaxmin
builtin:host.cpu.msu.capacity

z/OS MSU capacity

The overall capacity of million service units on this LPAR

MSU
autoavgmaxmin
builtin:host.cpu.ziip.eligible

z/OS zIIP eligible time

The zIIP eligible time spent on the general-purpose central processor (GCP) after process start per minute

Second
autoavgmaxmin
builtin:host.cpu.entConfig

AIX Entitlement configured

Capacity Entitlement is the number of virtual processors assigned to the AIX partition. It's measured in fractions of a processor equal to 0.1 or 0.01. For more information about entitlement, see Assigning the appropriate processor entitled capacity in the official IBM documentation.

Ratio
autoavgmaxmin
builtin:host.cpu.entc

AIX Entitlement used

Percentage of entitlement used. Capacity Entitlement is the number of virtual cores assigned to the AIX partition. For more information about entitlement, see Assigning the appropriate processor entitled capacity in the official IBM documentation.

Percent (%)
autoavgmaxmin

DNS

Metric key
Name and description
Unit
Aggregations
builtin:host.dns.errorCount

Number of DNS errors by type

The number of DNS errors by type

Count
autoavgcountmaxminsum
builtin:host.dns.orphanCount

Number of orphaned DNS responses

The number of orphaned DNS responses on the host

Count
autoavgcountmaxminsum
builtin:host.dns.queryCount

Number of DNS queries

The number of DNS queries on the host

Count
autoavgcountmaxminsum
builtin:host.dns.queryTime

DNS query time sum

The time of all DNS queries on the host

Millisecond
autoavgcountmaxminsum
builtin:host.dns.singleQueryTime

DNS query time

The average time of a DNS query. Calculated as the DNS query time sum divided by the number of DNS queries for each host and DNS server pair.

Millisecond
autoavgmaxmin
builtin:host.dns.singleQueryTimeByDnsIp

DNS query time by DNS server

The weighted average time of a DNS query by DNS server IP. Calculated as the DNS query time sum divided by the number of DNS queries, weighting the result by the number of requests from each host.

Millisecond
autoavgmaxmin

Disk

Metric key
Name and description
Unit
Aggregations
builtin:host.disk.throughput.read

Disk throughput read

File system read throughput in bits per second

bit/s
autoavgmaxmin
builtin:host.disk.throughput.write

Disk throughput write

File system write throughput in bits per second

bit/s
autoavgmaxmin
builtin:host.disk.avail

Disk available

Amount of free space available to users in the file system. On Linux and AIX, this is the free space available to unprivileged users; it does not include the portion of free space reserved for root.

Byte
autoavgmaxmin
builtin:host.disk.bytesRead

Disk read bytes per second

Speed of read from file system in bytes per second

Byte/second
autoavgmaxmin
builtin:host.disk.bytesWritten

Disk write bytes per second

Speed of write to file system in bytes per second

Byte/second
autoavgmaxmin
builtin:host.disk.free

Disk available %

Percentage of free space available to users in the file system. On Linux and AIX, this is the percentage of free space available to unprivileged users; it does not include the portion of free space reserved for root.

Percent (%)
autoavgmaxmin

Handles

Metric key
Name and description
Unit
Aggregations
builtin:host.handles.fileDescriptorsMax

File descriptors max

Maximum amount of file descriptors for use

Count
autoavgmaxmin
builtin:host.handles.fileDescriptorsUsed

File descriptors used

Amount of file descriptors used

Count
autoavgmaxmin

Kernel threads

Metric key
Name and description
Unit
Aggregations
builtin:host.kernelThreads.blocked

AIX Kernel threads blocked

Length of the swap queue. The swap queue contains the threads that are ready to run but are swapped out with the currently running threads.

Count
autoavgmaxmin
builtin:host.kernelThreads.ioEventWait

AIX Kernel threads I/O event wait

Number of threads that are waiting for file system direct (cio) + Number of processes that are asleep waiting for buffered I/O

Count
autoavgmaxmin
builtin:host.kernelThreads.ioMessageWait

AIX Kernel threads I/O message wait

Number of threads that are sleeping and waiting for raw I/O operations at a particular time. Raw I/O operation allows applications to direct write to the Logical Volume Manager (LVM) layer

Count
autoavgmaxmin
builtin:host.kernelThreads.running

AIX Kernel threads runnable

Number of runnable threads (running or waiting for run time) (threads ready). The average number of runnable threads is seen in the first column of the vmstat command output

Count
autoavgmaxmin

Memory

Metric key
Name and description
Unit
Aggregations
builtin:host.mem.avail.bytes

Memory available

The amount of memory (RAM) available on the host. The memory that is available for allocation to new or existing processes. Available memory is an estimation of how much memory is available for use without swapping.

Byte
autoavgmaxmin
builtin:host.mem.avail.pct

Memory available %

The percentage of memory (RAM) available on the host. The memory that is available for allocation to new or existing processes. Available memory is an estimation of how much memory is available for use without swapping. Shows available memory as percentages.

Percent (%)
autoavgmaxmin
builtin:host.mem.avail.pfps

Page faults per second

The measure of the number of page faults per second on the monitored host. This value includes soft faults and hard faults.

Per second
autoavgmaxmin
builtin:host.mem.swap.avail

Swap available

The amount of swap memory or swap space (also known as paging, which is the on-disk component of the virtual memory system) available.

Byte
autoavgmaxmin
builtin:host.mem.swap.total

Swap total

Amount of total swap memory or total swap space (also known as paging, which is the on-disk component of the virtual memory system) for use.

Byte
autovalue
builtin:host.mem.swap.used

Swap used

The amount of swap memory or swap space (also known as paging, which is the on-disk component of the virtual memory system) used.

Byte
autoavgmaxmin

Network

Metric key
Name and description
Unit
Aggregations
builtin:host.net.nic.packets.dropped

NIC packets dropped

Network interface packets dropped on the host

Per second
autovalue
builtin:host.net.nic.packets.droppedRx

NIC received packets dropped

Network interface received packets dropped on the host

Per second
autoavgmaxmin
builtin:host.net.nic.packets.droppedTx

NIC sent packets dropped

Network interface sent packets dropped on the host

Per second
autoavgmaxmin
builtin:host.net.nic.packets.errors

NIC packet errors

Network interface packet errors on the host

Per second
autovalue
builtin:host.net.nic.packets.errorsRx

NIC received packet errors

Network interface received packet errors on a host

Per second
autoavgmaxmin
builtin:host.net.nic.packets.errorsTx

NIC sent packet errors

Network interface sent packet errors on the host

Per second
autoavgmaxmin

OS service

Metric key
Name and description
Unit
Aggregations
builtin:host.osService.availability

OS Service availability

This metric provides the status of the OS service. If the OS service is running, the OS module reports "1" as the value of the metric; in any other case, the metric has a value of "0". Note that this metric provides data only from Classic Windows services monitoring (supported only on Windows), which is currently replaced by the new OS Services monitoring. To learn more, see Classic Windows services monitoring.

Count
autoavgmaxmin

Processes

Metric key
Name and description
Unit
Aggregations
builtin:host.osProcessStats.osProcessCount

OS Process count

This metric shows the average number of processes running on the host over one minute. The reported number of processes is based on processes detected by the OS module, read in 10-second cycles.

Count
autoavgmaxmin
builtin:host.osProcessStats.pgiCount

PGI count

This metric shows the number of PGIs created by the OS module every minute. It includes every PGI, even those which are considered not important and are not reported to Dynatrace.

Count
autoavgmaxmin
builtin:host.osProcessStats.pgiReportedCount

Reported PGI count

This metric shows the number of PGIs created and reported by the OS module every minute. It includes only PGIs that are considered important and are reported to Dynatrace. Important PGIs are those in which OneAgent recognizes the technology, that have open network ports, that generate significant resource usage, or that are created via Declarative process grouping rules. To learn what makes a process important, see Which are the most important processes?

Count
autoavgmaxmin

z/OS

Metric key
Name and description
Unit
Aggregations
builtin:host.zos.gcpu_time

z/OS General CPU time

Total General CPU time per minute

Count
autoavgcountmaxminsum
builtin:host.zos.msu_hours

z/OS Consumed MSUs per SMF interval (SMF70EDT)

Number of consumed MSUs per SMF interval (SMF70EDT)

Count
autoavgcountmaxminsum
builtin:host.zos.ziip_time

z/OS zIIP time

Total zIIP time per minute

Count
autoavgcountmaxminsum
builtin:host.zos.ziip_usage

z/OS zIIP usage

Actively used zIIP as a percentage of available zIIP

Count
autoavgcountmaxminsum

Other infrastructure metrics

Metric key
Name and description
Unit
Aggregations
builtin:host.availability

Host availability %

Host availability %

Percent (%)
autoavg
builtin:host.uptime

Host uptime

Time since last host boot up. Requires OneAgent 1.259+. The metric is not supported for application-only OneAgent deployments.

Second
autoavgmaxmin

Kubernetes

Cluster

Metric key
Name and description
Unit
Aggregations
builtin:kubernetes.cluster.readyz

Kubernetes: Cluster readyz status

Current status of the Kubernetes API server reported by the /readyz endpoint (0 or 1).

autoavgmaxmin

Container

Metric key
Name and description
Unit
Aggregations
builtin:kubernetes.container.oom_kills

Kubernetes: Container - out of memory (OOM) kill count

This metric measures the out of memory (OOM) kills. The most detailed level of aggregation is container. The value corresponds to the status 'OOMKilled' of a container in the pod resource's container status. The metric is only written if there was at least one container OOM kill.

Count
autovalue
builtin:kubernetes.container.restarts

Kubernetes: Container - restart count

This metric measures the amount of container restarts. The most detailed level of aggregation is container. The value corresponds to the delta of the 'restartCount' defined in the pod resource's container status. The metric is only written if there was at least one container restart.

Count
autovalue

Node

Metric key
Name and description
Unit
Aggregations
builtin:kubernetes.node.conditions

Kubernetes: Node conditions

This metric describes the status of a Kubernetes node. The most detailed level of aggregation is node.

Count
autoavgmaxmin
builtin:kubernetes.node.cpu_allocatable

Kubernetes: Node - CPU allocatable

This metric measures the total allocatable cpu. The most detailed level of aggregation is node. The value corresponds to the allocatable cpu of a node.

Millicores
autoavgmaxmin
builtin:kubernetes.node.cpu_throttled

Kubernetes: Container - CPU throttled (by node)

This metric measures the total CPU throttling by container. The most detailed level of aggregation is node.

Millicores
autoavgmaxmin
builtin:kubernetes.node.cpu_usage

Kubernetes: Container - CPU usage (by node)

This metric measures the total CPU consumed (user usage + system usage) by container. The most detailed level of aggregation is node.

Millicores
autoavgmaxmin
builtin:kubernetes.node.limits_cpu

Kubernetes: Pod - CPU limits (by node)

This metric measures the cpu limits. The most detailed level of aggregation is node. The value is the sum of the cpu limits of all app containers of a pod.

Millicores
autoavgmaxmin
builtin:kubernetes.node.limits_memory

Kubernetes: Pod - memory limits (by node)

This metric measures the memory limits. The most detailed level of aggregation is node. The value is the sum of the memory limits of all app containers of a pod.

Byte
autoavgmaxmin

Persistentvolumeclaim

Metric key
Name and description
Unit
Aggregations
builtin:kubernetes.persistentvolumeclaim.available

Kubernetes: PVC - available

This metric measures the number of available bytes in the volume. The most detailed level of aggregation is persistent volume claim.

Byte
autoavgmaxmin
builtin:kubernetes.persistentvolumeclaim.capacity

Kubernetes: PVC - capacity

This metric measures the capacity in bytes of the volume. The most detailed level of aggregation is persistent volume claim.

Byte
autoavgmaxmin
builtin:kubernetes.persistentvolumeclaim.used

Kubernetes: PVC - used

This metric measures the number of used bytes in the volume. The most detailed level of aggregation is persistent volume claim.

Byte
autoavgmaxmin

Resource Quota

Metric key
Name and description
Unit
Aggregations
builtin:kubernetes.resourcequota.limits_cpu

Kubernetes: Resource quota - CPU limits

This metric measures the cpu limit quota. The most detailed level of aggregation is resource quota. The value corresponds to the cpu limits of a resource quota.

Millicores
autoavgmaxmin
builtin:kubernetes.resourcequota.limits_cpu_used

Kubernetes: Resource quota - CPU limits used

This metric measures the used cpu limit quota. The most detailed level of aggregation is resource quota. The value corresponds to the used cpu limits of a resource quota.

Millicores
autoavgmaxmin
builtin:kubernetes.resourcequota.limits_memory

Kubernetes: Resource quota - memory limits

This metric measures the memory limit quota. The most detailed level of aggregation is resource quota. The value corresponds to the memory limits of a resource quota.

Byte
autoavgmaxmin
builtin:kubernetes.resourcequota.limits_memory_used

Kubernetes: Resource quota - memory limits used

This metric measures the used memory limits quota. The most detailed level of aggregation is resource quota. The value corresponds to the used memory limits of a resource quota.

Byte
autoavgmaxmin
builtin:kubernetes.resourcequota.pods

Kubernetes: Resource quota - pod count

This metric measures the pods quota. The most detailed level of aggregation is resource quota. The value corresponds to the pods of a resource quota.

Count
autoavgmaxmin
builtin:kubernetes.resourcequota.pods_used

Kubernetes: Resource quota - pod used count

This metric measures the used pods quota. The most detailed level of aggregation is resource quota. The value corresponds to the used pods of a resource quota.

Count
autoavgmaxmin

Workload

Metric key
Name and description
Unit
Aggregations
builtin:kubernetes.workload.conditions

Kubernetes: Workload conditions

This metric describes the status of a Kubernetes workload. The most detailed level of aggregation is workload.

Count
autoavgmaxmin
builtin:kubernetes.workload.containers_desired

Kubernetes: Pod - desired container count

This metric measures the number of desired containers. The most detailed level of aggregation is workload. The value is the count of all containers in the pod's specification.

Count
autoavgmaxmin
builtin:kubernetes.workload.cpu_throttled

Kubernetes: Container - CPU throttled (by workload)

This metric measures the total CPU throttling by container. The most detailed level of aggregation is workload.

Millicores
autoavgmaxmin
builtin:kubernetes.workload.cpu_usage

Kubernetes: Container - CPU usage (by workload)

This metric measures the total CPU consumed (user usage + system usage) by container. The most detailed level of aggregation is workload.

Millicores
autoavgmaxmin
builtin:kubernetes.workload.limits_cpu

Kubernetes: Pod - CPU limits (by workload)

This metric measures the cpu limits. The most detailed level of aggregation is workload. The value is the sum of the cpu limits of all app containers of a pod.

Millicores
autoavgmaxmin
builtin:kubernetes.workload.limits_memory

Kubernetes: Pod - memory limits (by workload)

This metric measures the memory limits. The most detailed level of aggregation is workload. The value is the sum of the memory limits of all app containers of a pod.

Byte
autoavgmaxmin

Other kubernetes metrics

Metric key
Name and description
Unit
Aggregations
builtin:kubernetes.containers

Kubernetes: Container count

This metric measures the number of containers. The most detailed level of aggregation is workload. The metric counts the number of all containers.

Count
autoavgmaxmin
builtin:kubernetes.events

Kubernetes: Event count

This metric counts Kubernetes events. The most detailed level of aggregation is the event reason. The value corresponds to the count of events returned by the Kubernetes events endpoint. This metric depends on Kubernetes event monitoring. It will not show any datapoints for the period in which event monitoring is deactivated.

Count
autovalue
builtin:kubernetes.nodes

Kubernetes: Node count

This metric measures the number of nodes. The most detailed level of aggregation is cluster. The value is the count of all nodes.

Count
autoavgmaxmin
builtin:kubernetes.pods

Kubernetes: Pod count (by workload)

This metric measures the number of pods. The most detailed level of aggregation is workload. The value corresponds to the count of all pods.

Count
autoavgmaxmin
builtin:kubernetes.workloads

Kubernetes: Workload count

This metric measures the number of workloads. The most detailed level of aggregation is namespace. The value corresponds to the count of all workloads.

Count
autoavgmaxmin
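
The metric keys above can be read back through the Dynatrace Metrics API v2 query endpoint. The following Python sketch is a minimal, hedged illustration: it assumes the standard /api/v2/metrics/query endpoint, and YOUR_ENVIRONMENT_URL and YOUR_API_TOKEN are placeholders for your own environment URL and an API token with the metrics.read scope.

# Minimal sketch: read data points for builtin:kubernetes.pods over the last hour.
# YOUR_ENVIRONMENT_URL and YOUR_API_TOKEN are placeholders, not values from this page.
import requests

ENV_URL = "https://YOUR_ENVIRONMENT_URL"  # for example, https://abc12345.live.dynatrace.com
API_TOKEN = "YOUR_API_TOKEN"              # token with the metrics.read scope

response = requests.get(
    f"{ENV_URL}/api/v2/metrics/query",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    params={
        "metricSelector": "builtin:kubernetes.pods",
        "from": "now-1h",
        "resolution": "5m",
    },
    timeout=30,
)
response.raise_for_status()

for result in response.json().get("result", []):
    for series in result.get("data", []):
        print(series.get("dimensions"), series.get("values"))

The same pattern works for any metric key in these tables; only the metricSelector value changes.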

Process

Availability

Metric key
Name and description
Unit
Aggregations
builtin:pgi.availability.state

Process availability

Process availability state metric reported in 1 minute intervals

Count
autovalue

Other process metrics

Metric key
Name and description
Unit
Aggregations
builtin:pgi.availability

Process availability %

This metric provides the percentage of time when a process is available. It is sent once per minute with a 10-second granularity - six samples are aggregated every minute. If the process is available for a whole minute, the value is 100%. A 0% value indicates that it is not running. It has a "Process" dimension (dt.entity.process_group_instance).

Percent (%)
autoavg
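
Because the metric carries the dt.entity.process_group_instance dimension described above, it can be split per process in a metric selector. The snippet below is a minimal sketch, assuming the standard splitBy and avg metric-selector transformations; the resulting string can be passed as the metricSelector parameter of a Metrics API v2 query.

# Minimal sketch: average process availability per process group instance.
# The dimension key is taken from the description above; splitBy and avg are
# standard metric-selector transformations.
selector = (
    "builtin:pgi.availability"
    ':splitBy("dt.entity.process_group_instance")'
    ":avg"
)
print(selector)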

Process

Other process metrics

Metric key
Name and description
Unit
Aggregations
builtin:process.bytesReceived

Process traffic in

This metric provides the size of incoming traffic for a process. It helps to identify processes generating high network traffic on a host. The result is expressed in kilobytes. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "process owner" (process.owner), "process executable name" (process.executable.name), "process executable path" (process.executable.path), "process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the period for which it is collected is restricted by the feature's limits. To learn more, see Process instance snapshots.

kB
autoavgcountmaxminsum
builtin:process.bytesSent

Process traffic out

This metric provides the size of outgoing traffic for a process. It helps to identify processes generating high network traffic on a host. The result is expressed in kilobytes. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "process owner" (process.owner), "process executable name" (process.executable.name), "process executable path" (process.executable.path), "process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the period for which it is collected is restricted by the feature's limits. To learn more, see Process instance snapshots.

kB
autoavgcountmaxminsum
builtin:process.cpu

Process average CPU

This metric provides the CPU usage of a process as a percentage. The metric value is the sum of the CPU time used by every process worker divided by the total available CPU time. A value of 100% indicates that the process uses all available CPU resources of the host. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "process owner" (process.owner), "process executable name" (process.executable.name), "process executable path" (process.executable.path), "process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the period for which it is collected is restricted by the feature's limits. To learn more, see Process instance snapshots.

Percent (%)
autoavgcountmaxminsum
builtin:process.memory

Process memory

This metric provides the memory usage of a process. It helps to identify processes with high memory resource consumption and memory leaks. The result is expressed in bytes. It has "PID" (process.pid), "Parent PID" (process.parent_pid), "process owner" (process.owner), "process executable name" (process.executable.name), "process executable path" (process.executable.path), "process command line" (process.command_line), and "Process group instance" (dt.entity.process_group_instance) dimensions. This metric is collected only if the Process instance snapshot feature is turned on and triggered, and the period for which it is collected is restricted by the feature's limits. To learn more, see Process instance snapshots.

Byte
autoavgcountmaxminsum

Queue

Other queue metrics

Metric key
Name and description
Unit
Aggregations
builtin:queue.incoming_requests

Incoming messages

The number of incoming messages on the queue or topic

Count
autoavgcountmaxminsum
builtin:queue.outgoing_requests

Outgoing messages

The number of outgoing messages from the queue or topic

Count
autoavgcountmaxminsum

Security

Attack

Metric key
Name and description
Unit
Aggregations
builtin:security.attack.new

New attacks

Number of attacks that were recently created. The metric supports the management zone selector.

Count
autovalue

Security problems

Metric key
Name and description
Unit
Aggregations
builtin:security.securityProblem.muted.new.global

New Muted Security Problems (global)

Number of vulnerabilities that were recently muted. The metric value is independent of any configured management zone (and thus global).

Count
autovalue
builtin:security.securityProblem.open.new.global

New Open Security Problems (global)

Number of vulnerabilities that were recently created. The metric value is independent of any configured management zone (and thus global).

Count
autovalue
builtin:security.securityProblem.open.new.managementZone

New Open Security Problems (split by Management Zone)

Number of vulnerabilities that were recently created. The metric value is split by management zone.

Count
autovalue
builtin:security.securityProblem.open.global

Open Security Problems (global)

Number of currently open vulnerabilities seen within the last minute. The metric value is independent of any configured management zone (and thus global).

Count
autoavgmaxmin
builtin:security.securityProblem.open.managementZone

Open Security Problems (split by Management Zone)

Number of currently open vulnerabilities seen within the last minute. The metric value is split by management zone.

Count
autoavgmaxmin
builtin:security.securityProblem.resolved.new.global

New Resolved Security Problems (global)

Number of vulnerabilities that were recently resolved. The metric value is independent of any configured management zone (and thus global).

Count
autovalue

Vulnerabilities

Metric key
Name and description
Unit
Aggregations
builtin:security.vulnerabilities.global.countAffectedProcessGroups.all

Vulnerabilities - affected process groups count (global)

Total number of unique affected process groups across all open vulnerabilities per technology. The metric value is independent of any configured management zone (and thus global).

Count
autoavgmaxmin
builtin:security.vulnerabilities.global.countAffectedProcessGroups.notMuted

Vulnerabilities - affected not-muted process groups count (global)

Total number of unique affected process groups across all open, unmuted vulnerabilities per technology. The metric value is independent of any configured management zone (and thus global).

Count
autoavgmaxmin
builtin:security.vulnerabilities.countAffectedEntities

Vulnerabilities - affected entities count

Total number of unique affected entities across all open vulnerabilities. The metric supports the management zone selector.

Count
autovalue

Services

CPU

Metric key
Name and description
Unit
Aggregations
builtin:service.cpu.perRequest

CPU time

CPU time consumed by a particular request. To learn how Dynatrace calculates service timings, see Service analysis timings.

Microsecond
autoavgcountmaxminsum
builtin:service.cpu.time

Service CPU time

CPU time consumed by a particular service. To learn how Dynatrace calculates service timings, see Service analysis timings.

Microsecond
autovalue

Database connections

Metric key
Name and description
Unit
Aggregations
builtin:service.dbconnections.failure

Failed connections

Unsuccessful connection attempts compared to all connection attempts. To learn about database analysis, see Analyze database services.

Count
autovalue
builtin:service.dbconnections.failureRate

Connection failure rate

Rate of unsuccessful connection attempts compared to all connection attempts. To learn about database analysis, see Analyze database services.

Percent (%)
autovalue
builtin:service.dbconnections.success

Successful connections

Total number of database connections successfully established by this service. To learn about database analysis, see Analyze database services.

Count
autovalue
builtin:service.dbconnections.successRate

Connection success rate

Rate of successful connection attempts compared to all connection attempts. To learn about database analysis, see Analyze database services.

Percent (%)
autovalue
builtin:service.dbconnections.total

Total number of connections

Total number of database connections that were attempted to be established by this service. To learn about database analysis, see Analyze database services.

Count
autovalue

Errors

Metric key
Name and description
Unit
Aggregations
builtin:service.errors.client.count

Number of client side errors

Failed requests for a service measured on client side. To learn about failure detection, see Configure service failure detection.

Count
autovalue
builtin:service.errors.client.rate

Failure rate (client side errors)

Percent (%)
autoavg
builtin:service.errors.client.successCount

Number of calls without client side errors

Count
autovalue
builtin:service.errors.fivexx.count

Number of HTTP 5xx errors

HTTP requests with a status code between 500 and 599 for a given key request measured on server side. To learn about failure detection, see Configure service failure detection.

Count
autovalue
builtin:service.errors.fivexx.rate

Failure rate (HTTP 5xx errors)

Percent (%)
autoavg
builtin:service.errors.fivexx.successCount

Number of calls without HTTP 5xx errors

Count
autovalue
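
The three HTTP 5xx metrics above are related: calls with 5xx errors plus calls without 5xx errors make up all monitored calls, so the failure rate can be approximated from the two count metrics. The arithmetic below is a hedged sketch with hypothetical values standing in for data already queried from builtin:service.errors.fivexx.count and builtin:service.errors.fivexx.successCount.

# Hypothetical values for one service and timeframe (not taken from this page).
errors = 12        # builtin:service.errors.fivexx.count
successes = 2388   # builtin:service.errors.fivexx.successCount

total = errors + successes
failure_rate = 100.0 * errors / total if total else 0.0
print(f"{failure_rate:.2f}%")  # roughly what builtin:service.errors.fivexx.rate reports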

Key requests

Metric key
Name and description
Unit
Aggregations
builtin:service.keyRequest.count.client

Request count - client

Number of requests for a given key request - measured on the client side. This metric is written for each key request. To learn more about key requests, see Monitor key request.

Count
autovalue
builtin:service.keyRequest.count.server

Request count - server

Number of requests for a given key request - measured on the server side. This metric is written for each key request. To learn more about key requests, see Monitor key request.

Count
autovalue
builtin:service.keyRequest.count.total

Request count

Number of requests for a given key request. This metric is written for each key request. To learn more about key requests, see Monitor key request.

Count
autovalue
builtin:service.keyRequest.cpu.perRequest

CPU per request

CPU time for a given key request. This metric is written for each key request. To learn more about key requests, see Monitor key request.

Microsecond
autoavgcountmaxminsum
builtin:service.keyRequest.cpu.time

Service key request CPU time

CPU time for a given key request. This metric is written for each key request. To learn more about key requests, see Monitor key request.

Microsecond
autoavgcountmaxminsum
builtin:service.keyRequest.errors.client.count

Number of client side errors

Failed requests for a given key request measured on client side. To learn about failure detection, see Configure service failure detection.

Count
autovalue

Request

Metric key
Name and description
Unit
Aggregations
builtin:service.request.service_mesh.count

Unified service mesh request count

Number of service mesh requests received by a given service. To learn how Dynatrace detects services, see Service detection and naming.

Count
autovalue
builtin:service.request.service_mesh.count_service_aggregation

Unified service mesh request count (by service)

Number of service mesh requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects services, see Service detection and naming.

Count
autovalue
builtin:service.request.service_mesh.failure_count

Unified service mesh request failure count

Number of failed service mesh requests received by a given service. To learn how Dynatrace detects service failures, see Configure service failure detection.

Count
autovalue
builtin:service.request.service_mesh.failure_count_service_aggregation

Unified service mesh request failure count (by service)

Number of failed service mesh requests received by a given service. Reduced dimensions for faster charting. To learn how Dynatrace detects service failures, see Configure service failure detection.

Count
autovalue
builtin:service.request.service_mesh.response_time

Unified service mesh request response time

Response time of a service mesh ingress measured in microseconds. To learn how Dynatrace calculates service timings, see Service analysis timings.

Millisecond
autocountmaxmedianminpercentile
builtin:service.request.service_mesh.response_time_service_aggregation

Unified service mesh request response time (by service)

Response time of a service mesh ingress measured in microseconds. Reduced dimensions for faster charting. To learn how Dynatrace calculates service timings, see Service analysis timings.

Millisecond
autocountmaxmedianminpercentile

Request count

Metric key
Name and description
Unit
Aggregations
builtin:service.requestCount.client

Request count - client

Number of requests received by a given service - measured on the client side. This metric allows service splittings. To learn how Dynatrace detects and analyzes services, see Services.

Count
autovalue
builtin:service.requestCount.server

Request count - server

Number of requests received by a given service - measured on the server side. This metric allows service splittings. To learn how Dynatrace detects and analyzes services, see Services.

Count
autovalue
builtin:service.requestCount.total

Request count

Number of requests received by a given service. This metric allows service splittings. To learn how Dynatrace detects and analyzes services, see Services.

Count
autovalue

Response time

Metric key
Name and description
Unit
Aggregations
builtin:service.response.group.client

Client side response time

Response time for a given key request per request type - measured on the client side. This metric is written for each key request. To learn more about key requests, see Monitor key request.

Microsecond
autoavgcountmaxmedianminpercentilesum
builtin:service.response.group.server

Server side response time

Response time for a given key request per request type - measured on the server side. This metric is written for each key request. To learn more about key requests, see Monitor key request.

Microsecond
autoavgcountmaxmedianminpercentilesum
builtin:service.response.client

Client side response time

Microsecond
autoavgcountmaxmedianminpercentilesum
builtin:service.response.server

Server side response time

Microsecond
autoavgcountmaxmedianminpercentilesum
builtin:service.response.time

Response time

Time consumed by a particular service until a response is sent back to the calling application, process, service, etc. To learn how Dynatrace calculates service timings, see Service analysis timings.

Microsecond
autoavgcountmaxmedianminpercentilesum
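
The aggregations listed for builtin:service.response.time (avg, max, median, min, percentile, and so on) correspond to metric-selector transformations you can apply when charting or querying. A minimal sketch, assuming the standard splitBy, avg, and percentile operators and that dt.entity.service is the split dimension for service metrics:

# Minimal sketch: selectors for average and 90th-percentile response time per service.
base = 'builtin:service.response.time:splitBy("dt.entity.service")'
selectors = [
    f"{base}:avg",
    f"{base}:percentile(90)",
]
for s in selectors:
    print(s)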

Success rate

Metric key
Name and description
Unit
Aggregations
builtin:service.successes.server.rate

Success rate (server side)

Percent (%)
autoavg

Total processing time

Metric key
Name and description
Unit
Aggregations
builtin:service.totalProcessingTime.group.totalProcessingTime

Total processing time

Total time consumed by a particular request type, including asynchronous processing that can continue after the response has been sent. To learn how Dynatrace calculates service timings, see Service analysis timings.

Microsecond
autoavgcountmaxmedianminpercentilesum

Other services metrics

Metric key
Name and description
Unit
Aggregations
builtin:service.totalProcessingTime

Total processing time

Total time consumed by a particular service, including asynchronous processing that can continue after the response has been sent. To learn how Dynatrace calculates service timings, see Service analysis timings.

Microsecond
autoavgcountmaxmedianminpercentilesum
builtin:service.dbChildCallCount

Number of calls to databases

Count
autovalue
builtin:service.dbChildCallTime

Time spent in database calls

Microsecond
autovalue
builtin:service.ioTime

IO time

Microsecond
autovalue
builtin:service.lockTime

Lock time

Microsecond
autovalue
builtin:service.nonDbChildCallCount

Number of calls to other services

Count
autovalue

Synthetic monitoring

Browser

Metric key
Name and description
Unit
Aggregations
builtin:synthetic.browser.actionDuration.custom

Action duration - custom action [browser monitor]

The duration of custom actions; split by monitor.

Millisecond
autoavgcountmaxmedianminpercentilesum
builtin:synthetic.browser.actionDuration.custom.geo

Action duration - custom action (by geolocation) [browser monitor]

The duration of custom actions; split by monitor, geolocation.

Millisecond
autoavgcountmaxminsum
builtin:synthetic.browser.actionDuration.load

Action duration - load action [browser monitor]

The duration of load actions; split by monitor.

Millisecond
autoavgcountmaxmedianminpercentilesum
builtin:synthetic.browser.actionDuration.load.geo

Action duration - load action (by geolocation) [browser monitor]

The duration of load actions; split by monitor, geolocation.

Millisecond
autoavgcountmaxminsum
builtin:synthetic.browser.actionDuration.xhr

Action duration - XHR action [browser monitor]

The duration of XHR actions; split by monitor.

Millisecond
autoavgcountmaxmedianminpercentilesum
builtin:synthetic.browser.actionDuration.xhr.geo

Action duration - XHR action (by geolocation) [browser monitor]

The duration of XHR actions; split by monitor, geolocation.

Millisecond
autoavgcountmaxminsum

HTTP

Metric key
Name and description
Unit
Aggregations
builtin:synthetic.http.availability.location.total

Availability rate (by location) [HTTP monitor]

The availability rate of HTTP monitors.

Percent (%)
autoavg
builtin:synthetic.http.availability.location.totalWoMaintenanceWindow

Availability rate - excl. maintenance windows (by location) [HTTP monitor]

The availability rate of HTTP monitors excluding maintenance windows.

Percent (%)
autoavg
builtin:synthetic.http.dns.geo

DNS lookup time (by location) [HTTP monitor]

The time taken to resolve the hostname for a target URL for the sum of all requests.

Millisecond
autoavgcountmaxminsum
builtin:synthetic.http.duration.geo

Duration (by location) [HTTP monitor]

The duration of the sum of all requests.

Millisecond
autoavgcountmaxminsum
builtin:synthetic.http.execution.status

Execution count (by status) [HTTP monitor]

The number of monitor executions.

Count
autovalue
builtin:synthetic.http.request.dns.geo

DNS lookup time (by request, location) [HTTP monitor]

The time taken to resolve the hostname for a target URL for individual HTTP requests.

Millisecond
autoavgcountmaxminsum

Location

Metric key
Name and description
Unit
Aggregations
builtin:synthetic.location.node.component.healthStatus

Node health status count [synthetic]

The number of private Synthetic nodes and their health status.

Count
autoavgcountmaxminsum
builtin:synthetic.location.healthStatus

Private location health status count [synthetic]

The number of private Synthetic locations and their health status.

Count
autoavgcountmaxminsum

MultiProtocol

Metric key
Name and description
Unit
Aggregations
builtin:synthetic.multiProtocol.availability

Monitor availability [Network Availability monitor]

Count
autoavgcountmaxminsum
builtin:synthetic.multiProtocol.availability.excludingMaintenanceWindows

Monitor availability excluding maintenance windows [Network Availability monitor]

Count
autoavgcountmaxminsum
builtin:synthetic.multiProtocol.dns.resolutionTime

DNS request resolution time [Network Availability request]

Millisecond
autoavgcountmaxminsum
builtin:synthetic.multiProtocol.icmp.packetsReceived

Number of successful ICMP packets [Network Availability request]

Count
autoavgcountmaxminsum
builtin:synthetic.multiProtocol.icmp.packetsSent

Number of ICMP packets [Network Availability request]

Count
autoavgcountmaxminsum
builtin:synthetic.multiProtocol.icmp.requestExecutionTime

ICMP request execution time [Network Availability request]

Millisecond
autoavgcountmaxminsum

Third party

Metric key
Name and description
Unit
Aggregations
builtin:synthetic.external.availability.location.total

Availability rate (by location) [third-party monitor]

The availability rate of third-party monitors.

Percent (%)
autoavg
builtin:synthetic.external.availability.location.totalWoMaintenanceWindow

Availability rate - excl. maintenance windows (by location) [third-party monitor]

The availability rate of third-party monitors excluding maintenance windows.

Percent (%)
autoavg
builtin:synthetic.external.errorDetails

Error count [third-party monitor]

The number of detected errors; split by monitor, step, error code.

Count
autovalue
builtin:synthetic.external.errorDetails.geo

Error count (by location) [third-party monitor]

The number of detected errors; split by monitor, location, step, error code.

Count
autovalue
builtin:synthetic.external.quality

Test quality rate [third-party monitor]

The test quality rate, calculated by dividing the number of successful steps by the total number of steps executed; split by monitor.

Percent (%)
autoavgmaxmin
builtin:synthetic.external.quality.geo

Test quality rate (by location) [third-party monitor]

The test quality rate, calculated by dividing the number of successful steps by the total number of steps executed; split by monitor, location.

Percent (%)
autoavgmaxmin

Technologies

.NET

Metric key
Name and description
Unit
Aggregations
builtin:tech.dotnet.gc.gen0Collections

.NET garbage collection (# Gen 0)

Number of completed GC runs that collected objects in Gen0 Heap within the given time range, https://dt-url.net/i1038bq

Count
autovalue
builtin:tech.dotnet.gc.gen1Collections

.NET garbage collection (# Gen 1)

Number of completed GC runs that collected objects in Gen1 Heap within the given time range, https://dt-url.net/i1038bq

Count
autovalue
builtin:tech.dotnet.gc.gen2Collections

.NET garbage collection (# Gen 2)

Number of completed GC runs that collected objects in Gen2 Heap within the given time range, https://dt-url.net/i1038bq

Count
autovalue
builtin:tech.dotnet.gc.timePercentage

.NET % time in GC

Percentage of time spent in garbage collection

Percent (%)
autoavgmaxmin
builtin:tech.dotnet.jit.timePercentage

.NET % time in JIT

Percentage of time spent in just-in-time (JIT) compilation

Percent (%)
autoavgmaxmin
builtin:tech.dotnet.managedThreads.avgNumOfActiveThreads

.NET average number of active threads

Count
autoavgmaxmin

Apache Hadoop

Metric key
Name and description
Unit
Aggregations
builtin:tech.Hadoop.hdfs.BlocksTotal

Blocks number

Count
autoavgcountmaxminsum
builtin:tech.Hadoop.hdfs.CacheCapacity

Cache capacity

Count
autoavgcountmaxminsum
builtin:tech.Hadoop.hdfs.CacheUsed

Cache used

Count
autoavgcountmaxminsum
builtin:tech.Hadoop.hdfs.CapacityRemaining

Remaining capacity

Count
autoavgcountmaxminsum
builtin:tech.Hadoop.hdfs.CapacityTotal

Total capacity

Count
autoavgcountmaxminsum
builtin:tech.Hadoop.hdfs.CapacityUsed

Used capacity

Count
autoavgcountmaxminsum

Apache Tomcat

Metric key
Name and description
Unit
Aggregations
builtin:tech.tomcat.connectionPool.maxActive

Max active

Count
autoavgcountmaxminsum
builtin:tech.tomcat.connectionPool.maxActiveGlobal

Max active (global)

Count
autoavgcountmaxminsum
builtin:tech.tomcat.connectionPool.maxTotal

Max total

Count
autoavgcountmaxminsum
builtin:tech.tomcat.connectionPool.maxTotalGlobal

Max total (global)

Count
autoavgcountmaxminsum
builtin:tech.tomcat.connectionPool.numActive

Num active

Count
autoavgcountmaxminsum
builtin:tech.tomcat.connectionPool.numActiveGlobal

Num active (global)

Count
autoavgcountmaxminsum

Couchbase

Metric key
Name and description
Unit
Aggregations
builtin:tech.couchbase.cluster.basicStats.diskFetches

cluster basicStats diskFetches

Count
autoavgcountmaxminsum
builtin:tech.couchbase.cluster.count.membase

cluster count membase

Count
autoavgcountmaxminsum
builtin:tech.couchbase.cluster.count.memcached

cluster count memcached

Count
autoavgcountmaxminsum
builtin:tech.couchbase.cluster.samples.cmd_get

cluster samples cmd_get

Per second
autoavgcountmaxminsum
builtin:tech.couchbase.cluster.samples.cmd_set

cluster samples cmd_set

Per second
autoavgcountmaxminsum
builtin:tech.couchbase.cluster.samples.curr_items

cluster samples curr_items

Count
autoavgcountmaxminsum

Custom device

Metric key
Name and description
Unit
Aggregations
builtin:tech.customDevice.count

Custom Device Count

Count
autovalue

Elasticsearch

Metric key
Name and description
Unit
Aggregations
builtin:tech.elasticsearch.local.indices.docs.count

Documents count

Count
autoavgcountmaxminsum
builtin:tech.elasticsearch.local.indices.docs.deleted

Deleted documents

Count
autoavgcountmaxminsum
builtin:tech.elasticsearch.local.indices.fielddata.evictions

Field data evictions

Count
autoavgcountmaxminsum
builtin:tech.elasticsearch.local.indices.fielddata.memory_size_in_bytes

Field data size

Byte
autoavgcountmaxminsum
builtin:tech.elasticsearch.local.indices.query_cache.cache_count

Query cache count

Count
autoavgcountmaxminsum
builtin:tech.elasticsearch.local.indices.query_cache.cache_size

Query cache size

Byte
autoavgcountmaxminsum

Generic

Metric key
Name and description
Unit
Aggregations
builtin:tech.generic.cpu.groupSuspensionTime

Process group total CPU time during GC suspensions

This metric provides statistics about CPU usage for process groups of garbage-collected technologies. The metric value is the sum of CPU time used during garbage collector suspensions for every process (including its workers) in a process group. It has a "Process Group" dimension.

Microsecond
autovalue
builtin:tech.generic.cpu.groupTotalTime

Process group total CPU time

This metric provides the total CPU time used by a process group. The metric value is the sum of CPU time every process (including its workers) of the process group uses. The result is expressed in microseconds. It can help to identify the most CPU-intensive technologies in the monitored environment. It has a "Process Group" dimension.

Microsecond
autovalue
builtin:tech.generic.cpu.suspensionTime

Process total CPU time during GC suspensions

This metric provides statistics about CPU usage for garbage-collected processes. The metric value is the sum of CPU time used during garbage collector suspensions for all process workers. It has a "Process" dimension (dt.entity.process_group_instance).

Microsecond
autovalue
builtin:tech.generic.cpu.totalTime

Process total CPU time

This metric provides the CPU time used by a process. The metric value is the sum of CPU time every process worker uses. The result is expressed in microseconds. It has a "Process" dimension (dt.entity.process_group_instance).

Microsecond
autovalue
builtin:tech.generic.cpu.usage

Process CPU usage

This metric provides the percentage of the CPU usage of a process. The metric value is the sum of CPU time every process worker uses divided by the total available CPU time. The result is expressed in percentage. A value of 100% indicates that the process uses all available CPU resources of the host. It has a "Process" dimension (dt.entity.process_group_instance).

Percent (%)
autoavgmaxmin
builtin:tech.generic.gcpu.time

z/OS General CPU time

The time spent per minute on the general-purpose central processor (GCP) since process start

Second
autoavgmaxmin

Go

Metric key
Name and description
Unit
Aggregations
builtin:tech.go.http.badGateways

Go: 502 responses

The number of responses that indicate invalid service responses produced by an application.

Count
autovalue
builtin:tech.go.http.latency

Go: Response latency

The average response time from the application to clients.

Millisecond
autoavgmaxmin
builtin:tech.go.http.responses5xx

Go: 5xx responses

The number of responses that indicate repeatedly crashing apps or response issues from applications.

Count
autovalue
builtin:tech.go.http.totalRequests

Go: Total requests

The number of all requests representing the overall traffic flow.

Count
autovalue
builtin:tech.go.memory.heap.idle

Go: Heap idle size

The amount of memory not assigned to the heap or stack. Idle memory can be returned to the operating system or retained by the Go runtime for later reassignment to the heap or stack.

Byte
autoavgmaxmin
builtin:tech.go.memory.heap.live

Go: Heap live size

The amount of memory considered live by the Go garbage collector. This metric accumulates memory retained by the most recent garbage collector run and allocated since then.

Byte
autoavgmaxmin

JVM

Metric key
Name and description
Unit
Aggregations
builtin:tech.jvm.classes.loaded

JVM loaded classes

The number of classes that are currently loaded in the Java virtual machine, https://dt-url.net/l2c34jw

Count
autoavgmaxmin
builtin:tech.jvm.classes.total

JVM total number of loaded classes

The total number of classes that have been loaded since the Java virtual machine has started execution, https://dt-url.net/d0y347x

Count
autoavgmaxmin
builtin:tech.jvm.classes.unloaded

JVM unloaded classes

The total number of classes unloaded since the Java virtual machine has started execution, https://dt-url.net/d7g34bi

Count
autoavgmaxmin
builtin:tech.jvm.memory.gc.activationCount

Garbage collection total activation count

The total number of collections that have occurred for all pools, https://dt-url.net/oz834vd

Count
autovalue
builtin:tech.jvm.memory.gc.collectionTime

Garbage collection total collection time

The approximate accumulated collection elapsed time in milliseconds for all pools, https://dt-url.net/oz834vd

Millisecond
autovalue
builtin:tech.jvm.memory.gc.suspensionTime

Garbage collection suspension time

Time spent in milliseconds between GC pause starts and GC pause ends, https://dt-url.net/zj434js

Percent (%)
autoavgmaxmin

Kafka

Metric key
Name and description
Unit
Aggregations
builtin:tech.kafka.pg.kafka.controller.ControllerStats.LeaderElectionRateAndTimeMs.OneMinuteRate

Kafka broker - Leader election rate

Millisecond
autoavgcountmaxminsum
builtin:tech.kafka.pg.kafka.controller.ControllerStats.UncleanLeaderElectionsPerSec.OneMinuteRate

Kafka broker - Unclean election rate

Per second
autoavgcountmaxminsum
builtin:tech.kafka.pg.kafka.controller.KafkaController.ActiveControllerCount.Value

Kafka controller - Active cluster controllers

Count
autoavgcountmaxminsum
builtin:tech.kafka.pg.kafka.controller.KafkaController.OfflinePartitionsCount.Value

Kafka controller - Offline partitions

Count
autoavgcountmaxminsum
builtin:tech.kafka.pg.kafka.server.ReplicaManager.PartitionCount.Value

Kafka broker - Partitions

Count
autoavgcountmaxminsum
builtin:tech.kafka.pg.kafka.server.ReplicaManager.UnderReplicatedPartitions.Value

Kafka broker - Under replicated partitions

Count
autoavgcountmaxminsum

Nettracer

Metric key
Name and description
Unit
Aggregations
builtin:tech.nettracer.bytes_rx

Bytes received

Bytes received

Byte
autoavgcountmaxminsum
builtin:tech.nettracer.bytes_tx

Bytes transmitted

Bytes transmitted

Byte
autoavgcountmaxminsum
builtin:tech.nettracer.pkts_retr

Retransmitted packets

Number of retransmitted packets

Count
autovalue
builtin:tech.nettracer.pkts_rx

Packets received

Number of packets received

Count
autovalue
builtin:tech.nettracer.pkts_tx

Packets transmitted

Number of packets transmitted

Count
autovalue
builtin:tech.nettracer.retr_percentage

Retransmission

Percentage of retransmitted packets

Percent (%)
autoavgmaxmin

Nginx

Metric key
Name and description
Unit
Aggregations
builtin:tech.nginx.cache.freeSpace

Nginx Plus cache free space

MB
autoavgmaxmin
builtin:tech.nginx.cache.hitRatio

Nginx Plus cache hit ratio

Percent (%)
autoavgmaxmin
builtin:tech.nginx.cache.hits

Nginx Plus cache hits

Per second
autoavgmaxmin
builtin:tech.nginx.cache.misses

Nginx Plus cache misses

Per second
autoavgmaxmin
builtin:tech.nginx.cache.usedSpace

Nginx Plus cache used space

MB
autoavgmaxmin
builtin:tech.nginx.serverZones.active

Active Nginx Plus server zones

Count
autoavgmaxmin

Node.js

Metric key
Name and description
Unit
Aggregations
builtin:tech.nodejs.uvLoop.activeHandles

Node.js: Active handles

Average number of active handles in the event loop

Count
autoavgmaxmin
builtin:tech.nodejs.uvLoop.count

Node.js: Event loop tick frequency

Average number of event loop iterations (per 10-second interval)

Count
autoavgmaxmin
builtin:tech.nodejs.uvLoop.loopLatency

Node.js: Event loop latency

Average latency of expected event completion

Nanosecond
autoavgmaxmin
builtin:tech.nodejs.uvLoop.processedLatency

Node.js: Work processed latency

Average latency between a work item being enqueued and its callback being called

Nanosecond
autoavgmaxmin
builtin:tech.nodejs.uvLoop.totalTime

Node.js: Event loop tick duration

Average duration of an event loop iteration (tick)

Nanosecond
autoavgmaxmin
builtin:tech.nodejs.uvLoop.utilization

Node.js: Event loop utilization

Event loop utilization represents the percentage of time the event loop has been active

Percent (%)
autoavgmaxmin

Oracle Database

Metric key
Name and description
Unit
Aggregations
builtin:tech.oracleDb.cd.cpu.background

Background CPU usage

Percent (%)
autoavgmaxmin
builtin:tech.oracleDb.cd.cpu.foreground

Foreground CPU usage

Percent (%)
autoavgmaxmin
builtin:tech.oracleDb.cd.cpu.idle

CPU idle

Percent (%)
autoavgmaxmin
builtin:tech.oracleDb.cd.cpu.other

CPU other processes

Percent (%)
autoavgmaxmin
builtin:tech.oracleDb.cd.io.bytesRead

Physical read bytes

Byte
autovalue
builtin:tech.oracleDb.cd.io.bytesWritten

Physical write bytes

Byte
autovalue

PHP

Metric key
Name and description
Unit
Aggregations
builtin:tech.php.phpGc.collectedCount

PHP GC collected count

Count
autoavgcountmaxminsum
builtin:tech.php.phpGc.durationMs

PHP GC collection duration

Millisecond
autoavgcountmaxminsum
builtin:tech.php.phpGc.effectiveness

PHP GC effectiveness

Percent (%)
autoavgcountmaxminsum
builtin:tech.php.phpOpcache.jit.bufferFree

PHP OPCache JIT buffer free

Byte
autoavgmaxmin
builtin:tech.php.phpOpcache.jit.bufferSize

PHP OPCache JIT buffer size

Byte
autoavgmaxmin
builtin:tech.php.phpOpcache.memory.free

PHP OPCache free memory

Byte
autoavgmaxmin

Python

Metric key
Name and description
Unit
Aggregations
builtin:tech.python.gc.collected.gen0

Python GC collected items from gen 0

Count
autoavgmaxmin
builtin:tech.python.gc.collected.gen1

Python GC collected items from gen 1

Count
autoavgmaxmin
builtin:tech.python.gc.collected.gen2

Python GC collected items from gen 2

Count
autoavgmaxmin
builtin:tech.python.gc.collection.gen0

Python GC collections number in gen 0

Count
autoavgmaxmin
builtin:tech.python.gc.collection.gen1

Python GC collections number in gen 1

Count
autoavgmaxmin
builtin:tech.python.gc.collection.gen2

Python GC collections number in gen 2

Count
autoavgmaxmin

RabbitMQ

Metric key
Name and description
Unit
Aggregations
builtin:tech.rabbitmq.cluster_channels

cluster channels

Count
autoavgcountmaxminsum
builtin:tech.rabbitmq.cluster_connections

cluster connections

Count
autoavgcountmaxminsum
builtin:tech.rabbitmq.cluster_consumers

cluster consumers

Count
autoavgcountmaxminsum
builtin:tech.rabbitmq.cluster_exchanges

cluster exchanges

Count
autoavgcountmaxminsum
builtin:tech.rabbitmq.cluster_messages_ack

cluster ack messages

Per second
autoavgcountmaxminsum
builtin:tech.rabbitmq.cluster_messages_deliver_get

cluster delivered and get messages

Per second
autoavgcountmaxminsum

Varnish

Metric key
Name and description
Unit
Aggregations
builtin:tech.varnish.cache.hitRatio

Cache hit ratio

Percent (%)
autoavgmaxmin
builtin:tech.varnish.cache.hitpasses

Cache hits for passes

Per second
autoavgmaxmin
builtin:tech.varnish.cache.hits

Cache hits

Per second
autoavgmaxmin
builtin:tech.varnish.cache.misses

Cache misses

Per second
autoavgmaxmin
builtin:tech.varnish.cache.passes

Cache passes

Per second
autoavgmaxmin
builtin:tech.varnish.connections.backend

Backend connections

Per second
autoavgmaxmin

Web server

Metric key
Name and description
Unit
Aggregations
builtin:tech.webserver.connections.dropped

Dropped connections

Number of dropped connections

Per second
autoavgmaxmin
builtin:tech.webserver.connections.handled

Handled connections

Number of successfully finished and closed requests

Per second
autoavgmaxmin
builtin:tech.webserver.connections.reading

Reading connections

Number of connections which are receiving data from the client

Count
autoavgmaxmin
builtin:tech.webserver.connections.socketWaitingTime

Socket backlog waiting time

Average time needed to queue and handle incoming connections

Microsecond
autovalue
builtin:tech.webserver.connections.waiting

Waiting connections

Number of connections with no active requests

Count
autoavgmaxmin
builtin:tech.webserver.connections.writing

Writing connections

Number of connections which are sending data to the client

Count
autoavgmaxmin

WebSphere

Metric key
Name and description
Unit
Aggregations
builtin:tech.websphere.connectionPool.connectionPoolModule.FreePoolSize

Free pool size

Count
autoavgcountmaxminsum
builtin:tech.websphere.connectionPool.connectionPoolModule.PercentUsed

Percent used

Percent (%)
autoavgcountmaxminsum
builtin:tech.websphere.connectionPool.connectionPoolModule.PoolSize

Pool size

Count
autoavgcountmaxminsum
builtin:tech.websphere.connectionPool.connectionPoolModule.UseTime

In use time

Millisecond
autoavgcountmaxminsum
builtin:tech.websphere.connectionPool.connectionPoolModule.WaitTime

Wait time

Millisecond
autoavgcountmaxminsum
builtin:tech.websphere.connectionPool.connectionPoolModule.WaitingThreadCount

Number of waiting threads

Count
autoavgcountmaxminsum

z/OS

Metric key
Name and description
Unit
Aggregations
builtin:tech.zos.consumed_service_units

z/OS Consumed Service Units per minute

The calculated number of consumed Service Units per minute

Count
autoavgcountmaxminsum