This section provides an in-depth look at how Dynatrace components are deployed and how they interact with Kubernetes clusters and entities.
In classic full stack monitoring, there's a startup dependency between the OneAgent container and the application containers to be instrumented (for example, containers that have deep process monitoring enabled). The OneAgent container must be started and the `oneagenthelper` process must be running before the application container is launched so that the application can be properly instrumented.
Initially, the following resources are deployed via helm/manifest:
Dynatrace Operator manages the automated rollout, configuration, and lifecycle management of Dynatrace components.
Dynatrace webhook validates DynaKube definitions for correctness and converts DynaKubes with old API versions.
Dynatrace Operator manages DynaKubes with classic full stack configuration and deploys the following resources:
Dynatrace OneAgent, deployed as a DaemonSet, collects host metrics from Kubernetes nodes. It also detects new containers and injects OneAgent code modules into application pods.
Dynatrace ActiveGate is used for routing observability data to the Dynatrace cluster.
Classic full-stack injection requires write access from the OneAgent pod to the Kubernetes node file system to detect and inject into newly deployed containers.
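For orientation, classic full stack is selected through the DynaKube custom resource. The sketch below is illustrative only; the apiVersion, resource names, namespace, and API URL are placeholders that depend on your environment and Dynatrace Operator version.

```yaml
# Illustrative only: a DynaKube requesting classic full stack monitoring.
# apiVersion, names, and the API URL are placeholders for your environment.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://<ENVIRONMENT_ID>.live.dynatrace.com/api
  oneAgent:
    # OneAgent DaemonSet that collects host metrics and injects code modules from the node.
    classicFullStack: {}
  activeGate:
    capabilities:
      - routing   # routes observability data to the Dynatrace cluster
```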
For cloud-native full stack monitoring, the following resources are initially deployed via helm/manifest:
Dynatrace Operator manages the automated rollout, configuration, and lifecycle management of Dynatrace components.
Dynatrace webhook modifies pod definitions to include Dynatrace code modules for application observability, validates DynaKube definitions for correctness, and converts DynaKubes with old API versions.
Dynatrace CSI driver, deployed as a DaemonSet, provides writable volume storage for OneAgent and delivers OneAgent binaries (code modules) to application pods.
Dynatrace Operator manages DynaKubes with cloud-native full-stack configuration and deploys the following resources:
Dynatrace OneAgent, deployed as a DaemonSet, collects host metrics from Kubernetes nodes.
Dynatrace ActiveGate is used for routing observability data to the Dynatrace cluster.
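As with classic full stack, cloud-native full stack is selected in the DynaKube. A minimal sketch follows; field names reflect recent DynaKube API versions and the placeholders must be adapted to your environment.

```yaml
# Illustrative only: a DynaKube requesting cloud-native full stack monitoring.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://<ENVIRONMENT_ID>.live.dynatrace.com/api
  oneAgent:
    # OneAgent DaemonSet for host metrics; code modules reach pods via the webhook and CSI driver.
    cloudNativeFullStack: {}
  activeGate:
    capabilities:
      - routing
```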
Kubernetes Platform Monitoring sets the foundation for understanding and troubleshooting your Kubernetes clusters. There is no OneAgent and no application monitoring included. However, Kubernetes Platform Monitoring is usually combined with other monitoring/injection approaches.
Initially, the following resources are deployed via helm/manifest:
Dynatrace Operator manages the automated rollout, configuration, and lifecycle management of Dynatrace components.
Dynatrace webhook validates DynaKube definitions for correctness and converts DynaKubes with old API versions.
Dynatrace Operator manages DynaKubes with Kubernetes Platform Monitoring configuration and deploys the following resources:
Dynatrace ActiveGate is used for querying the Kubernetes API and routing observability data to the Dynatrace cluster.
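A DynaKube for Kubernetes Platform Monitoring typically contains only an ActiveGate section and no OneAgent section. The sketch below is illustrative; names and apiVersion are placeholders.

```yaml
# Illustrative only: a DynaKube for Kubernetes Platform Monitoring (no OneAgent, no injection).
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://<ENVIRONMENT_ID>.live.dynatrace.com/api
  activeGate:
    capabilities:
      - kubernetes-monitoring   # query the Kubernetes API for cluster topology and metrics
      - routing                 # route observability data to the Dynatrace cluster
```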
You can use the application-only injection strategy for application pods. You don't install OneAgent pods and can't collect host metrics from Kubernetes nodes. You can collect node and container metrics by combining it with Kubernetes Platform Monitoring.
When deployed in application-only mode, the Dynatrace code modules monitor the memory, disk, CPU, and networking of processes within the container only. Host metrics aren't monitored. Without Kubernetes Platform Monitoring, topology is limited to pods and containers.
Initially, the following resources are deployed via helm/manifest:
Dynatrace Operator manages the automated rollout, configuration, and lifecycle management of Dynatrace components.
Dynatrace webhook modifies pod definitions to include Dynatrace code modules for application observability, validates DynaKube definitions for correctness, and converts DynaKubes with old API versions.
Dynatrace CSI driver, deployed as a DaemonSet, provides writable volume storage for OneAgent binaries to pods. Although it's optional, we highly recommend using the CSI driver to minimize network and storage usage. For details, see CSI driver.
Dynatrace Operator manages DynaKubes with application monitoring configuration and deploys the following resources:
Dynatrace ActiveGate is used for routing observability data to the Dynatrace cluster.
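Application monitoring is also selected in the DynaKube. A minimal sketch with placeholder names and apiVersion:

```yaml
# Illustrative only: a DynaKube requesting application-only monitoring.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://<ENVIRONMENT_ID>.live.dynatrace.com/api
  oneAgent:
    # Code modules are injected into pods by the webhook; no OneAgent DaemonSet is deployed,
    # so host metrics aren't collected.
    applicationMonitoring: {}
  activeGate:
    capabilities:
      - routing
```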
Host monitoring collects host metrics and process data from Kubernetes nodes; application pods aren't instrumented.
Diagnostic files (support archives) for application pods aren't yet supported for read-only file systems.
Initially, the following resources are deployed via helm/manifest:
Dynatrace Operator manages the automated rollout, configuration, and lifecycle management of Dynatrace components.
Dynatrace webhook validates DynaKube definitions for correctness and converts DynaKubes with old API versions.
Dynatrace CSI driver, deployed as a DaemonSet, provides writable volume storage for OneAgent. For details, see CSI driver.
Dynatrace Operator manages DynaKubes with host monitoring configuration and deploys the following resources:
Dynatrace OneAgent, deployed as a DaemonSet, collects host metrics from Kubernetes nodes.
Dynatrace ActiveGate is used for routing observability data to the Dynatrace cluster.
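Host monitoring follows the same pattern. A minimal, illustrative DynaKube sketch (placeholders as above):

```yaml
# Illustrative only: a DynaKube requesting host monitoring.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://<ENVIRONMENT_ID>.live.dynatrace.com/api
  oneAgent:
    # OneAgent DaemonSet for node metrics and process data; application pods aren't injected.
    hostMonitoring: {}
  activeGate:
    capabilities:
      - routing
```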
The Dynatrace CSI driver is a key component used to provide OneAgent code modules to application pods while minimizing storage usage and load on the Dynatrace environment. In addition, it provides writable volume storage for OneAgent, code-module configurations, and logs using ephemeral local volumes.
Minimizes downloads
The Dynatrace CSI driver downloads the code modules once per node and stores them on the node's filesystem.
Minimizes storage usage
The Dynatrace CSI driver enables the code modules to be stored on the node's filesystem, and the driver creates an OverlayFS mount for each injected pod.
The Dynatrace CSI driver significantly reduces network usage by downloading code modules once per node, as opposed to once per pod. It also optimizes storage by storing code modules once per node and providing the code modules to pods using OverlayFS mounts.
The CSI driver provisioner container does not have any predefined resource limits. However, if you wish to set resource limits, you can do so when deploying Dynatrace Operator with Helm.
To set resource limits, modify `values.yaml`. See an example configuration below.
```yaml
provisioner:
  resources:
    requests:
      cpu: 300m
      memory: 100Mi
    limits:
      cpu: 300m
      memory: 100Mi
```
The Dynatrace CSI driver requires elevated permissions to create and manage mounts on the host system. Specifically, the `mountPropagation: Bidirectional` setting is needed on the volume mount where the CSI driver stores the code modules. This mount propagation mode is only available to privileged containers.
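To illustrate what this requirement looks like in plain Kubernetes terms, the generic DaemonSet fragment below shows a privileged container declaring a bidirectionally propagated host-path mount. All names and paths are hypothetical; this is not the manifest generated by Dynatrace Operator.

```yaml
# Generic illustration of privileged + Bidirectional mount propagation; not the Dynatrace CSI driver manifest.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-csi-node
spec:
  selector:
    matchLabels:
      app: example-csi-node
  template:
    metadata:
      labels:
        app: example-csi-node
    spec:
      containers:
        - name: driver
          image: registry.example.com/example-csi-driver:1.0
          securityContext:
            privileged: true              # Bidirectional mount propagation requires a privileged container
          volumeMounts:
            - name: data-dir
              mountPath: /data
              mountPropagation: Bidirectional   # mounts created under /data propagate back to the host
      volumes:
        - name: data-dir
          hostPath:
            path: /var/lib/example-csi
            type: DirectoryOrCreate
```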