Monitor Prometheus metrics
Dynatrace version 1.232+
Prometheus is an open-source monitoring and alerting toolkit that is popular in the Kubernetes community. Prometheus scrapes metrics from HTTP(S) endpoints that expose metrics in the OpenMetrics format. See the list of available exporters in the Prometheus documentation.
Dynatrace integrates gauge, counter, and, starting with ActiveGate version 1.245, summary metrics from Prometheus exporters in Kubernetes and makes them available for charting, alerting, and analysis. Starting with ActiveGate version 1.261, there is limited support for histogram metrics.
A summary datatype is ingested as three metrics:
- A gauge-based metric with the same name as the original exported metric (for example, go_gc_duration_seconds), containing the quantiles as dimensions
- A counter-based metric for the sum, suffixed with _sum
- A counter-based metric for the count, suffixed with _count
For histogram support, a lightweight solution is provided, where a histogram datatype is ingested as two metrics:
- A counter-based metric for the sum, suffixed with _sum
- A counter-based metric for the count, suffixed with _count
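For example, a summary exposed by an exporter in the Prometheus text format, such as the illustrative snippet below (the sample values are invented), would be ingested as go_gc_duration_seconds (gauge, with the quantiles as dimensions), go_gc_duration_seconds_sum, and go_gc_duration_seconds_count:

```
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 2.5e-05
go_gc_duration_seconds{quantile="0.5"} 4.2e-05
go_gc_duration_seconds{quantile="1"} 0.00121
go_gc_duration_seconds_sum 0.1057
go_gc_duration_seconds_count 254
```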
ActiveGate version 1.217+
We recommend that you use an ActiveGate that is running inside the Kubernetes cluster from which you want to scrape Prometheus metrics. If your ActiveGate is running outside the monitored cluster (for example, in a VM or in a different Kubernetes cluster), it won't be able to scrape the Prometheus endpoint on pods that require authentication (such as RBAC or client authentication). An ActiveGate running inside the cluster will also provide improved performance.
In Dynatrace, go to your Kubernetes cluster settings page and enable
- Monitor Kubernetes namespaces, services, workloads, and pods
- Monitor annotated Prometheus exporters
You also need to annotate the pod definitions of your Prometheus exporters. For details, see below.
Annotate Prometheus exporter pods
Dynatrace collects metrics from any pods that are annotated with a metrics.dynatrace.com/scrape property set to true in the pod definition. This functionality applies to all pods across the entire Kubernetes cluster, regardless of whether the pod is running in a namespace that matches the DynaKube's namespace selector.
Depending on the actual exporter in a pod, you might need to set additional annotations to the pod definition in order to allow Dynatrace to properly ingest those metrics.
Enable metrics scraping required
Set metrics.dynatrace.com/scrape to 'true' to enable Dynatrace to collect Prometheus metrics exposed for this pod.
Metrics port required
By default, Prometheus metrics are available at the first exposed TCP port of the pod. Set metrics.dynatrace.com/port to the respective port.
Path to metrics endpoint optional
Set metrics.dynatrace.com/path to override the default (/metrics) Prometheus endpoint.
Secure endpoint optional
Set metrics.dynatrace.com/secure to true if you want to collect metrics that are exposed by an exporter via HTTPS. The default value is false, because most exporters expose their metrics via HTTP.
Filter metrics optional
Set metrics.dynatrace.com/filter to define a filter that allows you to include ("mode": "include") or exclude ("mode": "exclude") a list of metrics. If no filter annotation is defined, all metrics are collected.
The filter syntax also supports the asterisk (*). This symbol allows you to filter metric keys that begin with, end with, or contain a particular sequence, such as:
- redis_db* filters all metrics starting with redis_db
- *db* filters all metrics containing db
- *bytes filters all metrics ending with bytes
A * symbol within a filter, such as redis_*_bytes, is not supported.
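As an illustration, a filter annotation combining the include mode with asterisk patterns might look like the sketch below. The "names" field holding the list of metric keys is an assumption here; verify the exact filter schema against the Dynatrace documentation for your version.

```yaml
metadata:
  annotations:
    metrics.dynatrace.com/scrape: 'true'
    # Hypothetical filter: keep only metrics starting with redis_db
    # or ending with bytes. The "names" key is an assumption.
    metrics.dynatrace.com/filter: |
      {
        "mode": "include",
        "names": ["redis_db*", "*bytes"]
      }
```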
This example shows a simple pod definition with annotations.
The values for metrics.dynatrace.com/port and metrics.dynatrace.com/secure depend on the exporter you use; adapt them to your requirements. To determine the port value, see Default port allocations for a list of common ports for known exporters.
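A minimal annotated pod definition could look like the following sketch. The pod name, container image, and port are hypothetical (they assume a Redis exporter listening on its common default port 9121); only the metrics.dynatrace.com annotations come from this document.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-exporter            # hypothetical example pod
  annotations:
    metrics.dynatrace.com/scrape: 'true'
    metrics.dynatrace.com/port: '9121'     # adapt to your exporter's port
    metrics.dynatrace.com/path: '/metrics'
    metrics.dynatrace.com/secure: 'false'  # set to 'true' for HTTPS exporters
spec:
  containers:
    - name: redis-exporter
      image: oliver006/redis_exporter   # hypothetical image
      ports:
        - containerPort: 9121
```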
For more information on how to annotate pods, see Annotation best practices.
Annotate Kubernetes services
Requirements: Add the permission to access services in the Kubernetes ClusterRole (not needed for Dynatrace Operator users, as this is enabled by default in clusterrole-kubernetes-monitoring.yaml).
You can also annotate services instead of pods. The pods belonging to an annotated service are automatically discovered via the service's label selector, and all of those pods are scraped. Note that the pods are scraped directly; the service itself is not used for scraping. Only label selectors are supported, which is consistent with Kubernetes itself, as services support only label selectors.
The service and the corresponding pods need to be in the same namespace.
You can have annotations on services and pods at the same time. If the resulting metric endpoints are identical, they are only scraped once.
For more information on how to annotate services, see Annotation best practices.
Client authentication optional
Requirements: Add the permissions to access configmaps in the Kubernetes ClusterRole.
Some systems require extra authentication before Dynatrace can scrape them. For such cases, you can set the following additional annotations:
The required certificates/keys are automatically loaded from configmaps specified in the annotation value.
The schema for the annotation values is
For example, for etcd, the annotations could look as follows:
Role-based access control (RBAC) authorization for metric ingestion
Some exporter pods such as node-exporter, kube-state-metrics, and openshift-state-metrics require RBAC authorization. For these exporter pods, add the following annotation:
Annotation best practices
There are multiple ways to place annotations on pods or services. See below to decide which approach fits your scenario best.
Recommended if you have full control
If you have full control over the pod template or service definition, we recommend adding the annotations by editing these files. This is the most reliable way to ensure the persistence of annotations. We recommend editing the pod template rather than the service definition, as this requires fewer permissions (for example, if you don't have access to services).
Pro: Annotations are persistent, so they don't need to be recreated if a pod is removed.
Options if you don't have full control
If you don't have full control over the pod template, you have the following options:
Annotate an existing service (in YAML)
Requirements: Have control over an existing YAML and permission to edit the existing Kubernetes service object.
Pro: Annotations are persistent.
Con: None. Example:
Make sure that metrics.dynatrace.com/port points to the target port of the service (the port of the pod's container) instead of the service's port, because the service is not used for proxying the scraping process.

```yaml
kind: Service
apiVersion: v1
metadata:
  name: dynatrace-monitoring-node-exporter
  namespace: kubernetes-monitoring
  annotations:
    metrics.dynatrace.com/port: '12071'
    metrics.dynatrace.com/scrape: 'true'
    metrics.dynatrace.com/secure: 'true'
    metrics.dynatrace.com/path: '/metrics'
spec:
  ports:
    - name: dynatrace-monitoring-node-exporter-port
      port: 9100
      targetPort: 12071
  selector:
    app.kubernetes.io/name: node-exporter
```
Create a new service (in YAML)
Requirements: The new service should be created with a name that preferably starts with the dynatrace-monitoring- prefix. This service must be in the same namespace as the pods, and you must have permission to create a Kubernetes service object. The service is preferably headless (clusterIP is set to None), since this emphasizes that the service is not used for proxying.
Pro: You have control over the original workload/service.
Con: A label selector sync is required. We support only the label selector.
The values of metrics.dynatrace.com/port and metrics.dynatrace.com/secure depend on the exporter you use; adapt them to your requirements. To determine the port value, see Default port allocations for a list of common ports for known exporters.

```yaml
kind: Service
apiVersion: v1
metadata:
  name: dynatrace-monitoring-node-exporter
  namespace: kubernetes-monitoring
  annotations:
    metrics.dynatrace.com/port: '12071'
    metrics.dynatrace.com/scrape: 'true'
    metrics.dynatrace.com/secure: 'true'
    metrics.dynatrace.com/path: '/metrics'
spec:
  ports:
    - name: dynatrace-monitoring-node-exporter-port
      port: 12071
  selector:
    app.kubernetes.io/name: node-exporter
  clusterIP: None
```
Annotate an existing service (in CLI)
Requirements: Have permission to edit the existing Kubernetes service object.
Pro: No label selector sync is required.
Con: Annotations aren't persistent, so changes will overwrite the annotations. We support only the label selector.
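The CLI approach can be sketched with kubectl annotate; the service name and namespace below are hypothetical placeholders, and the port must match the container's target port as described above.

```shell
# Annotate an existing service so Dynatrace scrapes the pods it selects.
# "my-exporter-service" and "my-namespace" are placeholders; adapt them.
kubectl annotate service my-exporter-service -n my-namespace \
  metrics.dynatrace.com/scrape='true' \
  metrics.dynatrace.com/port='9100'
```

Because annotations added this way aren't persistent, the next deployment of the service manifest will remove them.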
Annotate existing pods (in CLI)
Pro: You can quickly test metric ingestion.
Con: Annotations aren't persistent, so changes will overwrite the annotations.
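For a quick test of metric ingestion, you can annotate a running pod directly; the pod name, namespace, and port below are hypothetical placeholders.

```shell
# Temporarily enable scraping on a single running pod.
# "my-exporter-pod" and "my-namespace" are placeholders; adapt them.
kubectl annotate pod my-exporter-pod -n my-namespace \
  metrics.dynatrace.com/scrape='true' \
  metrics.dynatrace.com/port='9121'
```

Keep in mind that these annotations disappear as soon as the pod is recreated, so move them into the pod template once the test succeeds.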
View metrics on a dashboard
Metrics from Prometheus exporters are available in the Data Explorer for custom charting. Select Create custom chart and select Try it out in the top banner. For more information, see Data explorer.
You can search for the metric keys of all available metrics and define how you'd like to analyze and chart them. Afterwards, you can pin your charts to a dashboard.
You can also create custom alerts based on the Prometheus scraped metrics. From the navigation menu, select Settings > Anomaly detection > Metric events and select Add metric event. In the Add metric event page, search for a Prometheus metric using its key and define your alert. For more information, see Metric events for alerting.
The current limitations of the Prometheus metrics integration are as follows:
Multiple exporters in a pod
Multiple exporters in a single pod currently aren't supported; you can select only the one exporter whose port is specified with the metrics.dynatrace.com/port annotation.
Number of pods, metrics, and metric data points
This integration supports a maximum of
- 1,000 exporter pods
- 1,000 metrics per pod
- 200,000 metric data points
Even though larger datasets are allowed, these can lead to ingestion gaps, as Dynatrace collects all metrics every minute before sending them to the cluster.
There are two distinct methods of monitoring technologies:
The first method involves using the Extensions 2.0 framework, which supports a handful of extensions for Prometheus exporters out of the box.
This method provides comprehensive monitoring features, including technology-specific dashboards, alerting capabilities, topology visualization, and entity pages. However, this method operates outside of Kubernetes.
The second method involves annotating Prometheus pods within Kubernetes to scrape Prometheus exporters.
While this method provides a more Kubernetes-native approach, it currently offers minimal functional overlap with the features provided by the Extensions 2.0 framework.
These two methods serve different contexts, work independently from each other, and don't share the same metrics.
Prometheus metrics in Kubernetes environments are subject to DDU consumption.
- Prometheus metrics from exporters that run on OneAgent-monitored hosts are first deducted from your quota of included metrics per host unit. Once this quota is exceeded, the remaining metrics consume DDUs.
- Prometheus metrics from exporters that run on hosts that aren't monitored by OneAgent always consume DDUs.
To troubleshoot Prometheus integration issues, download the Kubernetes Monitoring Statistics extension.