How it works

This section provides an in-depth look at how Dynatrace components are deployed and how they interact with Kubernetes clusters and entities.

Classic full-stack injection

Capabilities

  • It offers seamless host (Kubernetes node) integration. Instrumented pods maintain their relationship with hosts and host metrics. Host agents complement code modules with OOM detection, disk and storage monitoring, network monitoring, and more.
  • It's comprehensive. This all-in-one approach includes Kubernetes cluster monitoring, distributed tracing, fault domain isolation, and deep code-level insights using a single deployment configuration across your clusters.

Limitations

There’s a startup dependency between the container in which OneAgent is deployed and application containers to be instrumented (for example, containers that have deep process monitoring enabled). The OneAgent container must be started and the oneagenthelper process must be running before the application container is launched so that the application can be properly instrumented.

Deployed resources

Initially, the following resources are deployed via helm/manifest:

  • Dynatrace Operator manages the automated rollout, configuration, and lifecycle management of Dynatrace components.

  • Dynatrace webhook validates DynaKube definitions for correctness and converts DynaKubes with old API versions.

Dynatrace Operator manages DynaKubes with classic full-stack configuration and deploys the following resources:

  • Dynatrace OneAgent, deployed as a DaemonSet, collects host metrics from Kubernetes nodes. It also detects new containers and injects OneAgent code modules into application pods.

  • Dynatrace ActiveGate is used for routing observability data to the Dynatrace cluster.

(Diagram: classic full-stack deployment)

Classic full-stack injection requires write access from the OneAgent pod to the Kubernetes node file system to detect and inject into newly deployed containers.
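
For orientation only, a minimal DynaKube sketch for classic full-stack configuration could look like the following. The field names follow the dynatrace.com/v1beta1 DynaKube API, and the resource name and apiUrl are placeholders; newer Operator versions may expect a different API version or additional settings (such as the access token secret).

apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube                # placeholder name
  namespace: dynatrace
spec:
  # Placeholder: API URL of your Dynatrace environment
  apiUrl: https://<ENVIRONMENT_ID>.live.dynatrace.com/api
  oneAgent:
    # Rolls out OneAgent as a DaemonSet for host metrics and code-module injection
    classicFullStack: {}
  activeGate:
    capabilities:
      # Routes observability data to the Dynatrace cluster
      - routing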

Cloud native full-stack injection

Capabilities

Current limitations

  • Container monitoring rules aren't supported (the DynaKube label selector parameter provides similar functionality).
  • Go static monitoring is partially supported.
  • OneAgent support archives (for example, code module logs) can be gathered from the monitored process/pod through the Run OneAgent Diagnostics menu option on the process-specific page. If no OneAgent support archive is available, this means either:
    • No code module has been injected into the application pod.
    • There is an issue with OneAgent creating the support archive.

Deployed resources

Initially, the following resources are deployed via helm/manifest:

  • Dynatrace Operator manages the automated rollout, configuration, and lifecycle management of Dynatrace components.

  • Dynatrace webhook modifies pod definitions to include Dynatrace code modules for application observability, validates DynaKube definitions for correctness, and converts DynaKubes with old API versions.

  • Dynatrace CSI driver, deployed as a DaemonSet, provides writable volume storage for OneAgent and delivers OneAgent binaries to pods.

Dynatrace Operator manages DynaKubes with cloud-native full-stack configuration and deploys the following resources:

  • Dynatrace OneAgent, deployed as a DaemonSet, collects host metrics from Kubernetes nodes.

  • Dynatrace ActiveGate is used for routing observability data to the Dynatrace cluster.

(Diagram: cloud-native full-stack deployment)
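
For comparison with the classic full-stack example above, a minimal DynaKube sketch for cloud-native full-stack configuration might look like this. The same hedges apply: v1beta1 field names and placeholder values that you should adapt to your environment and Operator version.

apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://<ENVIRONMENT_ID>.live.dynatrace.com/api
  oneAgent:
    # Rolls out OneAgent as a DaemonSet; code modules are provided to pods via the CSI driver
    cloudNativeFullStack: {}
  activeGate:
    capabilities:
      - routing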

Kubernetes Platform Monitoring

Kubernetes Platform Monitoring sets the foundation for understanding and troubleshooting your Kubernetes clusters. There is no OneAgent and no application monitoring included. However, Kubernetes Platform Monitoring is usually combined with other monitoring/injection approaches.

Capabilities

  • Delivers insights into the health and utilization of your Kubernetes clusters as well as relations between Kubernetes objects (topology)
  • Uses the Kubernetes API and cAdvisor to get node and container level metrics and Kubernetes events
  • Provides out-of-the-box alerting and anomaly detection for symptomatic workloads, pods, nodes, and clusters

Deployed resources

Initially, the following resources are deployed via helm/manifest:

  • Dynatrace Operator manages the automated rollout, configuration, and lifecycle management of Dynatrace components.

  • Dynatrace webhook validates DynaKube definitions for correctness and converts DynaKubes with old API versions.

Dynatrace Operator manages DynaKubes with Kubernetes Platform Monitoring configuration and deploys the following resources:

  • Dynatrace ActiveGate is used for monitoring Kubernetes objects by collecting data (metrics, events, status) from the Kubernetes API and Kubelet (cAdvisor).

(Diagram: Kubernetes Platform Monitoring deployment)
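
As a sketch, a DynaKube that enables only Kubernetes Platform Monitoring could look like the following. The v1beta1 field names and the apiUrl are assumptions/placeholders to verify against your Operator version.

apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://<ENVIRONMENT_ID>.live.dynatrace.com/api
  activeGate:
    capabilities:
      # Collects metrics, events, and status from the Kubernetes API and Kubelet (cAdvisor)
      - kubernetes-monitoring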

Application-only monitoring: Automatic injection

You can use the application-only injection strategy for application pods. Because no OneAgent pods are installed, host metrics from Kubernetes nodes aren't collected. You can still collect node and container metrics by combining this approach with Kubernetes Platform Monitoring.

Capabilities

  • It's engineered for Kubernetes. Dynatrace uses the Kubernetes admission controller (the Dynatrace webhook) to inject a Dynatrace code module into application containers.
  • It's flexible. You get granular control over the instrumented pods using namespaces and annotations (see the sketch after this list). You can easily route pod metrics to different Dynatrace environments within the same Kubernetes cluster.
  • Enables data enrichment for Kubernetes environments.
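
To illustrate that per-pod control, the sketch below opts a single pod out of automatic injection with a pod annotation. The annotation name reflects the publicly documented Dynatrace Operator behavior but should be verified against your Operator version; the pod name and image are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: example-app            # placeholder
  annotations:
    # Opt this pod out of automatic OneAgent injection
    oneagent.dynatrace.com/inject: "false"
spec:
  containers:
    - name: app
      image: example.com/my-app:1.0   # placeholder image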

Current limitations

  • Diagnostic files (support archives) for application pods aren't yet supported.
  • Go static monitoring is partially supported.

When deployed in application-only mode, the Dynatrace code modules monitor the memory, disk, CPU, and networking of processes within the container only. Host metrics aren't monitored. Without Kubernetes Platform Monitoring, topology is limited to pods and containers.

Deployed resources

Initially, the following resources are deployed via helm/manifest:

  • Dynatrace Operator manages the automated rollout, configuration, and lifecycle management of Dynatrace components.

  • Dynatrace webhook modifies pod definitions to include Dynatrace code modules for application observability, validates DynaKube definitions for correctness, and converts DynaKubes with old API versions.

  • Dynatrace CSI driver, deployed as a DaemonSet, provides writable volume storage for OneAgent binaries to pods. Although it's optional, we highly recommend using the CSI driver to minimize network and storage usage. For details, see CSI driver.

Dynatrace Operator manages DynaKubes with application monitoring configuration and deploys the following resources:

  • Dynatrace ActiveGate is used for routing observability data to the Dynatrace cluster.

(Diagram: application-only automatic injection deployment)
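
A minimal DynaKube sketch for application-only monitoring with automatic injection could look like the following. The v1beta1 field names, the placeholder apiUrl, and the placement of namespaceSelector are assumptions to verify against your Operator version.

apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://<ENVIRONMENT_ID>.live.dynatrace.com/api
  # Assumption: restricts injection to namespaces labeled monitor=dynatrace
  namespaceSelector:
    matchLabels:
      monitor: dynatrace
  oneAgent:
    applicationMonitoring:
      # Recommended: let the CSI driver provide code modules to pods
      useCSIDriver: true
  activeGate:
    capabilities:
      - routing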

Application-only monitoring: Pod runtime injection

You can use the application-only injection strategy for application pods. Because no OneAgent pods are installed, host metrics from Kubernetes nodes aren't collected. You can still collect node and container metrics by combining this approach with Kubernetes Platform Monitoring.

Capabilities

  • It's Kubernetes native. Dynatrace code modules are injected into pods using Kubernetes init containers.
  • It's flexible. Different container images can contain separate configurations for different Dynatrace environments.

Limitations

  • Because there is no Dynatrace Operator involved, there is no automatic injection, configuration, or enrichment. To instrument your applications with Dynatrace, you need to manually adapt your Kubernetes workloads (see the sketch below).

(Diagram: pod runtime injection)
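
As a rough sketch of what such a manual adaptation can look like, the pod below uses an init container to copy the OneAgent code module into a shared volume and preloads it into the application process. The image names, file paths, and the LD_PRELOAD location are illustrative assumptions; in practice the code-module image is built from your Dynatrace environment's PaaS download, and the exact paths depend on the OneAgent version.

apiVersion: v1
kind: Pod
metadata:
  name: example-app                       # placeholder
spec:
  initContainers:
    - name: install-oneagent
      # Hypothetical image containing the OneAgent code module
      image: example.com/oneagent-codemodules:latest
      command: ["/bin/sh", "-c"]
      # Copy the code module into the shared volume before the app starts
      args: ["cp -r /opt/dynatrace/oneagent/* /oneagent/"]
      volumeMounts:
        - name: oneagent
          mountPath: /oneagent
  containers:
    - name: app
      image: example.com/my-app:1.0       # placeholder
      env:
        # Preload the OneAgent code module into the application process
        - name: LD_PRELOAD
          value: /oneagent/agent/lib64/liboneagentproc.so
      volumeMounts:
        - name: oneagent
          mountPath: /oneagent
  volumes:
    - name: oneagent
      emptyDir: {}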

Application-only monitoring: Container build-time injection

You can use the application-only injection strategy for application pods. Because no OneAgent pods are installed, host metrics from Kubernetes nodes aren't collected. You can still collect node and container metrics by combining this approach with Kubernetes Platform Monitoring.

Capabilities

  • It uses static container injection. Dynatrace code modules are embedded into container images as they are built.
  • It's flexible. Different container images can contain separate configurations for different Dynatrace environments. You can use these images on any container platform or PaaS in addition to Kubernetes.

Limitations

  • Because there is no Dynatrace Operator involved, there is no automatic injection, configuration, or enrichment. To instrument your applications with Dynatrace, you need to manually adapt your build processes.

(Diagram: container build-time injection)

Host monitoring

Capabilities

Collects host metrics and process data.

Limitations

Diagnostic files (support archives) for application pods aren't yet supported for read-only file systems.

Deployed resources

Initially, the following resources are deployed via helm/manifest:

  • Dynatrace Operator manages the automated rollout, configuration, and lifecycle management of Dynatrace components.

  • Dynatrace webhook validates DynaKube definitions for correctness and converts DynaKubes with old API versions.

  • Dynatrace CSI driver, deployed as a DaemonSet, provides writable volume storage for OneAgent. For details, see CSI driver.

Dynatrace Operator manages DynaKubes with host monitoring configuration and deploys the following resources:

  • Dynatrace OneAgent, deployed as a DaemonSet, collects host metrics from Kubernetes nodes.

  • Dynatrace ActiveGate is used for routing observability data to the Dynatrace cluster.

(Diagram: host monitoring deployment)
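
A minimal DynaKube sketch for host monitoring might look like the following, with the usual hedges: v1beta1 field names and placeholder values that may differ in your Operator version.

apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://<ENVIRONMENT_ID>.live.dynatrace.com/api
  oneAgent:
    # Rolls out OneAgent as a DaemonSet for host metrics only (no code-module injection)
    hostMonitoring: {}
  activeGate:
    capabilities:
      - routing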

CSI driver

The Dynatrace CSI driver is a key component used to provide OneAgent code modules to application pods while minimizing storage usage and load on the Dynatrace environment. In addition, it provides writable volume storage for OneAgent, code-module configurations, and logs using ephemeral local volumes.

Capabilities

  • Minimizes downloads

    The Dynatrace CSI driver downloads the code modules once per node and stores them on the node's filesystem.

    Example:

    • With the CSI driver, injecting 100 pods spread across 3 nodes would result in just 3 code modules downloads.
    • Without the CSI driver, each pod would need to download its own code modules, so injecting 100 pods would result in the download of 100 code modules.


  • Minimizes storage usage

    The Dynatrace CSI driver enables the code modules to be stored on the node's filesystem, and the driver creates an OverlayFS mount for each injected pod.

    Example:

    • With the CSI driver, injecting 100 pods spread across 3 nodes would result in the storage of only 3 code modules.
    • Without the CSI driver, each pod stores a code module, so injecting 100 pods would result in the storage of 100 code modules.


Summary

The Dynatrace CSI driver significantly reduces network usage by downloading code modules once per node, as opposed to once per pod. It also optimizes storage by storing code modules once per node and providing them to pods through OverlayFS mounts.
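
For illustration, the volume that the webhook mounts into an injected pod is an ephemeral CSI volume served by the Dynatrace CSI driver, conceptually similar to the excerpt below. The driver name and the volume attribute shown are assumptions based on the public Operator defaults and may differ in your installation.

# Excerpt of an injected pod spec (conceptual)
volumes:
  - name: oneagent-bin
    csi:
      driver: csi.oneagent.dynatrace.com   # assumed driver name
      volumeAttributes:
        dynakube: dynakube                 # assumed: name of the DynaKube providing the code modules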


Resource limits

The CSI driver provisioner container does not have any predefined resource limits. However, if you want to set resource limits, you can do so when deploying Dynatrace Operator with Helm.

To set resource limits, modify values.yaml. See an example configuration below.

provisioner:
  resources:
    requests:
      cpu: 300m
      memory: 100Mi
    limits:
      cpu: 300m
      memory: 100Mi

Privileges

The Dynatrace CSI driver requires elevated permissions to create and manage mounts on the host system. Specifically, `mountPropagation: Bidirectional` is needed on the volume where the CSI driver stores the code modules. This mount propagation mode is only available to privileged containers.
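
In Kubernetes terms, the CSI driver's data volume mount looks roughly like the excerpt below (illustrative, not the exact Operator manifest); the API server rejects bidirectional mount propagation for non-privileged containers.

# Excerpt of a CSI driver container spec (illustrative)
securityContext:
  privileged: true                        # required for Bidirectional mount propagation
volumeMounts:
  - name: data-dir
    mountPath: /data
    # Propagates the per-pod OverlayFS mounts created under this path back to the host
    mountPropagation: Bidirectional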