Enrich from Kubernetes
The following configuration example shows how to configure a Collector instance to enrich OTLP requests with additional Kubernetes metadata.
Demo configuration
```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  resource:
    attributes:
      - key: dt.kubernetes.cluster.id
        from_attribute: k8s.cluster.uid
        action: insert
  k8sattributes:
    auth_type: "serviceAccount"
    passthrough: false
    extract:
      metadata:
        - k8s.pod.name
        - k8s.pod.uid
        - k8s.deployment.name
        - k8s.namespace.name
        - k8s.node.name
        - k8s.cluster.uid
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.name
          - from: resource_attribute
            name: k8s.namespace.name

exporters:
  otlphttp:
    endpoint: $DT_ENDPOINT/api/v2/otlp
    headers:
      Authorization: "Api-Token $DT_API_TOKEN"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, resource]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [k8sattributes, resource]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [k8sattributes, resource]
      exporters: [otlphttp]
```
Prerequisites
- A deployed ActiveGate for Kubernetes API monitoring
- Contrib distribution or a custom Builder version with the Kubernetes Attributes and Resource processors (see the builder manifest sketch after this list)
- Collector deployed in agent mode
- The API URL of your Dynatrace environment
- An API token with the relevant access scope
- Kubernetes configured for the required role-based access control
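If you go the custom Builder route, the build manifest needs to include the Kubernetes Attributes and Resource processors in addition to the OTLP receiver and OTLP/HTTP exporter used below. The following is a minimal sketch of such a manifest; the distribution name and output path are illustrative, and the `v0.x.0` placeholders must be replaced with the Collector version you build against.

```yaml
# Builder manifest sketch; names are illustrative and v0.x.0 is a placeholder
# for the Collector version you target.
dist:
  name: otelcol-k8s-enrich
  output_path: ./otelcol-k8s-enrich

receivers:
  - gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.x.0

processors:
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/k8sattributesprocessor v0.x.0
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/resourceprocessor v0.x.0

exporters:
  - gomod: go.opentelemetry.io/collector/exporter/otlphttpexporter v0.x.0
```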
Kubernetes configuration
Configure your Kubernetes instance with the following `rbac.yaml` file to allow the Collector to use the Kubernetes API with the service-account authentication type.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: otelcontribcol
  name: otelcontribcol
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
rules:
  - apiGroups:
      - ""
    resources:
      - events
      - namespaces
      - namespaces/status
      - nodes
      - nodes/spec
      - nodes/stats
      - nodes/proxy
      - pods
      - pods/status
      - replicationcontrollers
      - replicationcontrollers/status
      - resourcequotas
      - services
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "events.k8s.io"
    resources:
      - events
    verbs:
      - watch
  - apiGroups:
      - apps
    resources:
      - daemonsets
      - deployments
      - replicasets
      - statefulsets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - daemonsets
      - deployments
      - replicasets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - batch
    resources:
      - jobs
      - cronjobs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otelcontribcol
subjects:
  - kind: ServiceAccount
    name: otelcontribcol
    namespace: default
```
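Once the RBAC objects are applied (for example, with `kubectl apply -f rbac.yaml`), the Collector pod must run under the `otelcontribcol` service account so that the `k8sattributes` processor is allowed to query the Kubernetes API. The following fragment is a sketch of the relevant part of a DaemonSet-based agent deployment, not a complete manifest; the image tag and config path are illustrative.

```yaml
# Sketch: run the Collector under the service account defined in rbac.yaml.
# Image tag and config path are illustrative.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
spec:
  selector:
    matchLabels:
      app: otelcontribcol
  template:
    metadata:
      labels:
        app: otelcontribcol
    spec:
      serviceAccountName: otelcontribcol  # grants the pod the ClusterRole permissions above
      containers:
        - name: otelcontribcol
          image: otel/opentelemetry-collector-contrib:latest
          args: ["--config=/etc/otelcol/config.yaml"]
```

Note that the ClusterRoleBinding above references the service account in the default namespace; if you deploy the Collector in a different namespace, adjust the subjects entry accordingly.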
Components
For our configuration, we use the following components.
Receiver
Under `receivers`, we specify the standard `otlp` receiver as an active receiver component for our Collector instance.

This is mainly for demonstration purposes. You can specify any other valid receiver here (for example, `zipkin`).
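For illustration, the following sketch shows how the trace pipeline could use the Contrib `zipkin` receiver instead; the listen address is only an example.

```yaml
# Hypothetical variation: accept Zipkin spans instead of OTLP traces.
receivers:
  zipkin:
    endpoint: 0.0.0.0:9411  # example listen address

service:
  pipelines:
    traces:
      receivers: [zipkin]  # reference the new receiver in the trace pipeline
      processors: [k8sattributes, resource]
      exporters: [otlphttp]
```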
Processor
Under `processors`, we specify the `k8sattributes` processor with the following parameters:

- `auth_type`: specifies the use of a service account to obtain the necessary data.
- `passthrough`: set to `false`, in order not to forward IP-address-related information; should be `true` when running as a gateway.
- `extract`: specifies which information should be extracted.
- `pod_association`: specifies how the pod information is linked to attributes (see the alternative association sketch after this list).
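The `pod_association` rules above rely on the incoming telemetry already carrying the `k8s.pod.name` and `k8s.namespace.name` resource attributes. If your applications do not set these attributes, the processor can instead identify the pod by the peer address of the OTLP connection, provided `passthrough` is `false` and the data reaches this Collector directly from the pods. A minimal sketch of that alternative:

```yaml
# Alternative association sketch: match pods by the connection's source IP
# instead of resource attributes sent by the application.
k8sattributes:
  auth_type: "serviceAccount"
  passthrough: false
  extract:
    metadata:
      - k8s.pod.name
      - k8s.namespace.name
  pod_association:
    - sources:
        - from: connection  # use the peer IP of the incoming request
```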
We also configure the `resource` processor to have the Kubernetes cluster identifier automatically added as a resource attribute.
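Because the pipelines run `k8sattributes` before `resource`, the `k8s.cluster.uid` attribute already exists when the `resource` processor copies it into `dt.kubernetes.cluster.id`. The following listing is not Collector configuration but an illustration, with made-up values, of what the enriched resource attributes could look like after both processors have run.

```yaml
# Illustrative only (made-up values), not Collector configuration.
k8s.namespace.name: production
k8s.pod.name: checkout-5d8f7c9b4-x2l7q
k8s.node.name: worker-node-1
k8s.cluster.uid: 9bd6bcf6-6869-4720-9504-d4fa5ec3161c
dt.kubernetes.cluster.id: 9bd6bcf6-6869-4720-9504-d4fa5ec3161c  # copied from k8s.cluster.uid
```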
Exporter
Under `exporters`, we specify the default `otlphttp` exporter and configure it with our Dynatrace API URL and the required authentication token.

For this purpose, we set the following two environment variables and reference them in the configuration values for `endpoint` and `Authorization`.
- `DT_ENDPOINT` contains the base URL of your ActiveGate (for example, `https://{your-environment-id}.live.dynatrace.com`)
- `DT_API_TOKEN` contains the API token
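Because the Collector runs as a Kubernetes workload in this scenario, one way to provide these two variables is through the pod specification, for example from a Secret. The following sketch assumes a Secret named `dynatrace-otlp-credentials` with the keys `endpoint` and `api-token`; both names are illustrative.

```yaml
# Sketch: inject DT_ENDPOINT and DT_API_TOKEN into the Collector container
# from a Kubernetes Secret (Secret name and keys are illustrative).
containers:
  - name: otelcontribcol
    image: otel/opentelemetry-collector-contrib:latest
    env:
      - name: DT_ENDPOINT
        valueFrom:
          secretKeyRef:
            name: dynatrace-otlp-credentials
            key: endpoint
      - name: DT_API_TOKEN
        valueFrom:
          secretKeyRef:
            name: dynatrace-otlp-credentials
            key: api-token
```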
Service pipeline
Under `service`, we assemble our receiver, processor, and exporter objects into pipelines for traces, metrics, and logs. These pipelines allow us to send OpenTelemetry signals via the Collector instance and have them automatically enriched with additional Kubernetes-specific details.
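To verify the enrichment before the data reaches Dynatrace, you can, for example, temporarily add the `debug` exporter (available in recent Collector versions) alongside `otlphttp` and inspect the resource attributes in the Collector logs. A minimal sketch for the trace pipeline:

```yaml
# Sketch: temporarily log exported telemetry to check the added Kubernetes attributes.
exporters:
  debug:
    verbosity: detailed  # prints resource attributes for each exported batch

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, resource]
      exporters: [otlphttp, debug]
```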