Enrich from Kubernetes
The following configuration example shows how to configure a Collector instance to enrich OTLP telemetry data with additional Kubernetes metadata, such as pod, deployment, and cluster details. This allows Dynatrace to correctly map the provided telemetry data to the appropriate entities.
It also uses ActiveGate to enable status and performance monitoring of your Kubernetes cluster, which allows the Dynatrace Kubernetes app to visualize Kubernetes and OpenTelemetry data and map it to the corresponding Kubernetes entities.
Prerequisites
- A deployed ActiveGate for Kubernetes API monitoring
- A Collector distribution that includes the Kubernetes Attributes and Transform processors
- The Collector deployed in agent mode
- The API URL of your Dynatrace environment
- An API token with the relevant access scope
- Kubernetes configured for the required role-based access control
Demo configuration
In addition to the Collector configuration, be sure to also update your Kubernetes configuration to match the service account name used in the RBAC file (see entries for Helm, Operator).
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.pod.name
        - k8s.pod.uid
        - k8s.deployment.name
        - k8s.statefulset.name
        - k8s.daemonset.name
        - k8s.cronjob.name
        - k8s.namespace.name
        - k8s.node.name
        - k8s.cluster.uid
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.name
          - from: resource_attribute
            name: k8s.namespace.name
      - sources:
          - from: resource_attribute
            name: k8s.pod.ip
      - sources:
          - from: resource_attribute
            name: k8s.pod.uid
      - sources:
          - from: connection
  transform:
    error_mode: ignore
    trace_statements:
      - context: resource
        statements:
          - set(attributes["dt.kubernetes.workload.kind"], "statefulset") where IsString(attributes["k8s.statefulset.name"])
          - set(attributes["dt.kubernetes.workload.name"], attributes["k8s.statefulset.name"]) where IsString(attributes["k8s.statefulset.name"])
          - set(attributes["dt.kubernetes.workload.kind"], "deployment") where IsString(attributes["k8s.deployment.name"])
          - set(attributes["dt.kubernetes.workload.name"], attributes["k8s.deployment.name"]) where IsString(attributes["k8s.deployment.name"])
          - set(attributes["dt.kubernetes.workload.kind"], "daemonset") where IsString(attributes["k8s.daemonset.name"])
          - set(attributes["dt.kubernetes.workload.name"], attributes["k8s.daemonset.name"]) where IsString(attributes["k8s.daemonset.name"])
          - set(attributes["dt.kubernetes.cluster.id"], attributes["k8s.cluster.uid"]) where IsString(attributes["k8s.cluster.uid"])
    log_statements:
      - context: resource
        statements:
          - set(attributes["dt.kubernetes.workload.kind"], "statefulset") where IsString(attributes["k8s.statefulset.name"])
          - set(attributes["dt.kubernetes.workload.name"], attributes["k8s.statefulset.name"]) where IsString(attributes["k8s.statefulset.name"])
          - set(attributes["dt.kubernetes.workload.kind"], "deployment") where IsString(attributes["k8s.deployment.name"])
          - set(attributes["dt.kubernetes.workload.name"], attributes["k8s.deployment.name"]) where IsString(attributes["k8s.deployment.name"])
          - set(attributes["dt.kubernetes.workload.kind"], "daemonset") where IsString(attributes["k8s.daemonset.name"])
          - set(attributes["dt.kubernetes.workload.name"], attributes["k8s.daemonset.name"]) where IsString(attributes["k8s.daemonset.name"])
          - set(attributes["dt.kubernetes.cluster.id"], attributes["k8s.cluster.uid"]) where IsString(attributes["k8s.cluster.uid"])
exporters:
  otlphttp:
    endpoint: ${env:DT_ENDPOINT}
    headers:
      Authorization: "Api-Token ${env:DT_API_TOKEN}"
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, transform]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [otlphttp]
Validate your settings to avoid any configuration issues.
Kubernetes configuration
Configure the following rbac.yaml file with your Kubernetes instance to allow the Collector to use the Kubernetes API with the service-account authentication type.
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: collector
  name: collector
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: collector
  labels:
    app: collector
rules:
  - apiGroups:
      - ''
    resources:
      - 'pods'
      - 'namespaces'
    verbs:
      - 'get'
      - 'watch'
      - 'list'
  - apiGroups:
      - 'apps'
    resources:
      - 'replicasets'
    verbs:
      - 'get'
      - 'list'
      - 'watch'
  - apiGroups:
      - 'extensions'
    resources:
      - 'replicasets'
    verbs:
      - 'get'
      - 'list'
      - 'watch'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: collector
  labels:
    app: collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collector
subjects:
  - kind: ServiceAccount
    name: collector
    namespace: default
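The Collector workload itself must run under the service account defined above. As a minimal sketch (assuming the Collector runs as a plain Kubernetes Deployment in the default namespace; with Helm or the Operator, set the equivalent service account option instead), the pod spec references it as follows:

# Hypothetical excerpt of a Collector Deployment; config file mounting is omitted for brevity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: collector
  namespace: default
  labels:
    app: collector
spec:
  selector:
    matchLabels:
      app: collector
  template:
    metadata:
      labels:
        app: collector
    spec:
      serviceAccountName: collector   # must match the ServiceAccount name in rbac.yaml
      containers:
        - name: collector
          image: otel/opentelemetry-collector-contrib:latest   # example image; use your chosen distribution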
Components
For our configuration, we use the following components.
Receivers
Under receivers, we specify the standard otlp receiver as an active receiver component for our Collector instance.
This is mainly for demonstration purposes. You can specify any other valid receiver here (for example, zipkin).
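If you prefer to ingest a different protocol, you can swap the receiver accordingly. The following sketch is not part of the demo configuration (the port shown is the conventional Zipkin default); it declares a zipkin receiver and references it in the traces pipeline:

receivers:
  zipkin:
    endpoint: 0.0.0.0:9411   # conventional Zipkin ingestion port

service:
  pipelines:
    traces:
      receivers: [zipkin]                      # zipkin instead of otlp
      processors: [k8sattributes, transform]
      exporters: [otlphttp]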
Processors
Under processors, we specify the k8sattributes processor with the following parameters:
- extract specifies which metadata should be extracted from the Kubernetes API.
- pod_association specifies how incoming telemetry is matched to a pod, based on resource attributes or the incoming connection.
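To illustrate how these rules work together (hypothetical attribute values, not part of the demo configuration): a span that arrives with only k8s.pod.ip set is matched by the IP-based association rule, and the processor then attaches the metadata listed under extract as resource attributes.

# Hypothetical resource attributes on an incoming span, before and after enrichment:
before:
  k8s.pod.ip: 10.42.0.17
after:
  k8s.pod.ip: 10.42.0.17
  k8s.pod.name: checkout-5d8f7c9b4-x2k9p
  k8s.pod.uid: 7b9e2c1a-4f3d-4e5b-9a8c-0d1e2f3a4b5c
  k8s.deployment.name: checkout        # resolved via the pod's ReplicaSet owner
  k8s.namespace.name: shop
  k8s.node.name: worker-node-1
  k8s.cluster.uid: 2c3d4e5f-6a7b-8c9d-0e1f-2a3b4c5d6e7f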
We also configure the transform processor to map the extracted Kubernetes metadata to Dynatrace-specific resource attributes for traces: dt.kubernetes.workload.kind, dt.kubernetes.workload.name, and dt.kubernetes.cluster.id.
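For a pod owned by a deployment, for example (reusing the illustrative values above), these statements would add:

# Attributes added by the transform processor for a deployment-owned pod (illustrative):
dt.kubernetes.workload.kind: deployment
dt.kubernetes.workload.name: checkout
dt.kubernetes.cluster.id: 2c3d4e5f-6a7b-8c9d-0e1f-2a3b4c5d6e7f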
Exporters
Under exporters, we specify the default otlphttp exporter and configure it with our Dynatrace API URL and the required authentication token.
For this purpose, we set the following two environment variables and reference them in the configuration values for endpoint and Authorization.
- DT_ENDPOINT contains the base URL of the Dynatrace API endpoint (for example, https://{your-environment-id}.live.dynatrace.com/api/v2/otlp).
- DT_API_TOKEN contains the API token.
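On Kubernetes, these variables are typically injected into the Collector container from a Secret. A minimal sketch, assuming a Secret named dynatrace-otlp (the name, namespace, and placeholder values are illustrative, not prescribed by Dynatrace):

apiVersion: v1
kind: Secret
metadata:
  name: dynatrace-otlp
  namespace: default
type: Opaque
stringData:
  DT_ENDPOINT: "https://{your-environment-id}.live.dynatrace.com/api/v2/otlp"
  DT_API_TOKEN: "<your-api-token>"
---
# Excerpt of the Collector container spec (under spec.template.spec.containers in the Deployment);
# envFrom exposes both Secret keys as environment variables to the Collector process.
containers:
  - name: collector
    image: otel/opentelemetry-collector-contrib:latest   # example image; use your chosen distribution
    envFrom:
      - secretRef:
          name: dynatrace-otlp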
Service pipelines
Under service, we assemble our receiver, processor, and exporter objects into pipelines for traces, metrics, and logs. These pipelines allow us to send OpenTelemetry signals via the Collector instance and have them automatically enriched with additional Kubernetes-specific details.
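Note that the transform processor defines log_statements but is only referenced in the traces pipeline of the demo configuration. If you also want the dt.kubernetes.* attributes on your logs, one option (a sketch, not part of the validated demo configuration) is to add it to the logs pipeline:

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [k8sattributes, transform]   # transform added so its log_statements take effect
      exporters: [otlphttp]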
Limits and limitations
Data is ingested using the OpenTelemetry protocol (OTLP) via the Dynatrace OTLP APIs and is subject to the API's limits and restrictions. For more information, see: