Dynatrace provides integrated Log management and analytics for your Kubernetes environments. We recommend collecting logs in Kubernetes using our fully managed Dynatrace Log module, either integrated in the OneAgent deployed on the node (OneAgent Log module) or without OneAgent as a standalone deployment (Kubernetes Log module). Dynatrace Operator configures and manages the Dynatrace Log module for both approaches. Alternatively, you can stream logs to Dynatrace using log collectors such as Fluent Bit, Dynatrace OpenTelemetry Collector, Logstash, or Fluentd.
Dynatrace provides a flexible approach to Kubernetes observability that lets you pick and choose the level of observability you need for your Kubernetes clusters. The Dynatrace Operator manages all the components needed to get the data into Dynatrace for you. This also applies to collecting logs from Kubernetes containers. Depending on the selected observability option, the Dynatrace Operator configures and manages the Dynatrace Log module to work either alongside a OneAgent on the node or on its own. The Kubernetes Log module is used in combination with Kubernetes platform monitoring or Application observability, whereas the OneAgent Log module is used as part of Full-Stack observability.
The two approaches are compared along the following log monitoring values:

- Auto discovery of container logs
- Control ingest via annotations and labels
- Log enrichment with Kubernetes metadata
- Log enrichment with process context
- Report logs to different Dynatrace environments
- Dynatrace Operator for managing the rollout and lifecycle

They differ mainly in how they are deployed and which observability options they support:

| | Kubernetes Log module | OneAgent Log module |
|---|---|---|
| Deployment | Deployed as a DaemonSet | Integrates with the OneAgent on the node |
| Supported observability options | Kubernetes platform monitoring; Application observability (for pods with Application observability enabled) | Full-Stack observability |

Automated updates of the Kubernetes Log module are planned for future releases.
The Dynatrace Log module reads logs from containerd and CRI-O containers. Other container runtimes aren't supported. It captures only logs that are written to the container's stdout/stderr streams.
See supported Kubernetes/OpenShift platform versions and distributions to learn more.
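If you're not sure which runtime your nodes use, you can check what each node reports; this is a quick spot check assuming you have kubectl access to the cluster:

```sh
# The CONTAINER-RUNTIME column shows, for example, containerd://1.7.x or cri-o://1.29.x
kubectl get nodes -o wide

# Or print only node names and runtime versions
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```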
Before installing Dynatrace on your Kubernetes cluster, ensure that you meet the following requirements:

- The kubectl CLI is connected to the Kubernetes cluster that you want to monitor.
- You're able to run kubectl or oc commands against that cluster.
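A quick way to confirm that kubectl is pointed at the right cluster before you proceed (assuming your kubeconfig already contains the target context):

```sh
# Show which context kubectl is currently using
kubectl config current-context

# Confirm the cluster API server is reachable with your credentials
kubectl cluster-info
```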
Installing Dynatrace with Full-Stack observability automatically deploys a OneAgent on each node in your Kubernetes cluster. The OneAgent running on the node already includes the OneAgent Log module. To enable log monitoring, you only need to add the spec.logMonitoring: {} section to your DynaKube custom resource. Below is an example configuration:
```yaml
apiVersion: dynatrace.com/v1beta5
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
  # annotations:
  #   feature.dynatrace.com/oneagent-privileged: "true" # Required on OpenShift
# Link to API reference for further information: https://docs.dynatrace.com/docs/ingest-from/setup-on-k8s/reference/dynakube-parameters
spec:
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  metadataEnrichment:
    enabled: true
  oneAgent:
    cloudNativeFullStack:
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
          operator: Exists
  activeGate:
    capabilities:
      - routing
      - kubernetes-monitoring
      - debugging
    resources:
      requests:
        cpu: 500m
        memory: 1.5Gi
      limits:
        cpu: 1000m
        memory: 1.5Gi
  logMonitoring: {}
```
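Once the logMonitoring section is in place, apply the change and double-check that it was accepted; this sketch assumes the DynaKube from the example above (named dynakube in the dynatrace namespace) is saved as dynakube.yaml:

```sh
# Apply the updated DynaKube custom resource
kubectl apply -f dynakube.yaml

# Inspect the stored resource and check that spec.logMonitoring is present
kubectl get dynakube dynakube -n dynatrace -o yaml
```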
We recommend reviewing the Collect all containers logs feature flag in your settings to ensure the best coverage of your logs within Kubernetes. For advanced configuration options, see Stream Kubernetes logs with OneAgent Log Module.
The following guide assumes that you've already successfully installed the Dynatrace Operator on your Kubernetes cluster. If you haven't done so, follow the instructions in Install Dynatrace on Kubernetes.
To add the Kubernetes Log module to your existing Dynatrace installation, follow these steps:
Edit your existing DynaKube custom resource YAML file.
You can review the available parameters or how-to guides, and adapt the DynaKube custom resource according to your requirements.
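If you no longer have the original YAML file at hand, you can export the DynaKube that is currently applied in the cluster and edit that copy; a sketch assuming the default dynakube name and dynatrace namespace:

```sh
# Export the current DynaKube custom resource to a local file for editing
kubectl get dynakube dynakube -n dynatrace -o yaml > dynakube.yaml
```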
To enable the Kubernetes Log module, add two sections to your DynaKube custom resource: spec.logMonitoring enables the Log module, and spec.templates.logMonitoring configures it. Below is an example configuration:
```yaml
apiVersion: dynatrace.com/v1beta5
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
  # annotations:
  #   feature.dynatrace.com/oneagent-privileged: "true" # Required on OpenShift
spec:
  # Link to API reference for further information: https://docs.dynatrace.com/docs/ingest-from/setup-on-k8s/reference/dynakube-parameters
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  metadataEnrichment:
    enabled: true
  logMonitoring: {}
  activeGate:
    capabilities:
      - kubernetes-monitoring
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: 1000m
        memory: 1.5Gi
  templates:
    logMonitoring:
      imageRef:
        repository: public.ecr.aws/dynatrace/dynatrace-logmodule
        tag: <tag>
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
          operator: Exists
      # Optionally set resource requests/limits for the
      # Kubernetes Log module (applies to init and main container)
      # resources:
      #   requests:
      #     cpu:
      #     memory:
      #   limits:
      #     cpu:
      #     memory:
```
To retrieve the <tag> version for the logMonitoring template, pick the latest tag published for the public.ecr.aws/dynatrace/dynatrace-logmodule repository referenced above, for example 1.311.70.20250416-094918.

To update the Kubernetes Log module, you'll need to manually update the tag in the logMonitoring template and apply the changes to your DynaKube custom resource. Automatic updates of the Kubernetes Log module are planned for future releases.
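For example, pinning the example tag from above would look as follows in the template (a sketch; replace the tag with the latest one available in the repository):

```yaml
templates:
  logMonitoring:
    imageRef:
      repository: public.ecr.aws/dynatrace/dynatrace-logmodule
      tag: 1.311.70.20250416-094918 # example tag from this guide; use the latest available tag
```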
Re-apply the DynaKube custom resource
Run the command below to apply the DynaKube custom resource, making sure to replace <your-DynaKube-CR> with your actual DynaKube custom resource file name. A validation webhook will provide helpful error messages if there's a problem.
```sh
kubectl apply -f <your-DynaKube-CR>.yaml
```
Verify deployment (optional)
Verify that your DynaKube is running and all pods in your Dynatrace namespace are running and ready.
```
> kubectl get dynakube -n dynatrace
NAME       APIURL                                           STATUS    AGE
dynakube   https://<ENVIRONMENTID>.live.dynatrace.com/api   Running   45s
```
In a default DynaKube configuration, you should see the following pods:
```
> kubectl get pods -n dynatrace
NAME                                  READY   STATUS    RESTARTS   AGE
dynakube-activegate-0                 1/1     Running   0          55s
dynakube-logmonitoring-grrnd          1/1     Running   0          55s
dynakube-logmonitoring-ptjgk          1/1     Running   0          55s
dynakube-logmonitoring-rtc2p          1/1     Running   0          55s
dynatrace-oneagent-csi-driver-2twgv   4/4     Running   0          5m
dynatrace-oneagent-csi-driver-jbwdv   4/4     Running   0          5m
dynatrace-oneagent-csi-driver-t68tt   4/4     Running   0          5m
dynatrace-operator-74dbb44b57-g58mn   1/1     Running   0          5m
dynatrace-webhook-59b69958d6-82wlr    1/1     Running   0          5m
dynatrace-webhook-59b69958d6-d9vqd    1/1     Running   0          5m
```
As the Log module is deployed as a DaemonSet, you should see a log monitoring pod on each node.
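To cross-check that the DaemonSet schedules one pod per node, you can compare its desired/ready counts with the number of nodes (assuming the default dynatrace namespace):

```sh
# DESIRED and READY should match the number of schedulable nodes
kubectl get daemonset -n dynatrace

# Count the nodes in the cluster for comparison
kubectl get nodes --no-headers | wc -l
```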
Kubernetes Log monitoring requires Dynatrace Platform Subscription (DPS) and is licensed by the ingested gibibyte (GiB) volume.
You can configure log ingestion rules in Dynatrace to control which logs should be collected from your Kubernetes environment. The rules leverage Kubernetes metadata and other common log entry attributes, such as the Kubernetes namespace name, to determine which logs are to be ingested. The standard log processing features from OneAgent, including sensitive data masking, timestamp configuration, log boundary definition, and automatic enrichment of log records, are also available for Kubernetes logs.
See Stream Kubernetes logs with Dynatrace Log Module for a detailed description, use cases, and REST API examples.