Dynatrace Operator manages the automated rollout, configuration, and lifecycle of Dynatrace components. It uses custom resources of kind DynaKube and EdgeConnect.
Dynatrace Operator's functionality is tied to custom resources. These resources define which features are enabled. When a custom resource is created, Dynatrace Operator triggers an initial reconciliation to ensure that the desired state is achieved. The reconciliation process then regularly detects changes in the cluster and adjusts the state accordingly. The frequency of reconciliation decreases after the initial rollout, as fewer changes are typically made.
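For orientation, a minimal DynaKube could look like the sketch below. This is an illustrative example only: the apiVersion, the available spec fields, and the placeholder API URL depend on your Operator version and environment, so verify them against the DynaKube parameters reference before use.

apiVersion: dynatrace.com/v1beta1   # assumed API version; newer Operator releases use newer versions
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace              # assumed namespace where Dynatrace Operator is deployed
spec:
  apiUrl: https://<environment-id>.live.dynatrace.com/api   # placeholder environment API URL
  oneAgent:
    cloudNativeFullStack: {}        # enables code-module injection for pods plus host monitoring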
On startup, Dynatrace Operator registers the webhook with the Kubernetes API. During each reconcile loop, Dynatrace Operator checks for changes in custom resources and adjusts the managed Dynatrace components to match the desired state.
This functionality is constrained to the namespace where Dynatrace Operator is deployed.
Namespaces monitored by Dynatrace Operator are labeled to match the webhook configuration's namespaceSelector so that the webhook can mutate pods created in those namespaces.
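The set of monitored namespaces can be narrowed with a namespace selector in the DynaKube. The sketch below is illustrative only: the label is an arbitrary example, and the exact placement of the selector (top level of the spec or under the injection-specific section) depends on the DynaKube API version.

spec:
  namespaceSelector:       # only namespaces matching this selector are labeled and monitored
    matchLabels:
      monitor: dynakube    # illustrative label; use a label that exists on your namespaces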
Default configuration:
A single Dynatrace Operator replica is typically sufficient due to leader election. Additional replicas will only activate when the current leader terminates and a new one is elected.
A Dynatrace Operator container has predefined CPU and memory requests and limits. To customize these values during deployment with Helm, modify the values.yaml file.

operator:
  requests:
    cpu: 50m
    memory: 64Mi
  limits:
    cpu: 100m
    memory: 128Mi
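As a usage sketch, an override file (named overrides.yaml here purely for illustration) only needs to contain the keys you want to change and can be passed to helm upgrade with -f overrides.yaml. The values shown are examples, not recommendations.

operator:
  limits:
    cpu: 200m      # example override
    memory: 256Mi  # example override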
A full list of resources accessed by Dynatrace Operator is available in the Security documentation. Network traffic and figures are documented in the Network traffic documentation.
The Dynatrace webhook modifies Pod definitions to inject Dynatrace code modules for Application observability. It also validates DynaKube definitions and converts DynaKubes created with older API versions.
Webhook configurations are managed by Dynatrace Operator and are updated periodically. These updates ensure that the Kubernetes API can continue to communicate with the webhook.
Pod mutation—The webhook mutates pods by modifying their definitions to include necessary metadata for Application observability.
Namespace mutation—The webhook mutates namespaces to enable the monitoring of pods within those namespaces. Namespaces are mutated on CREATE events. Webhook configurations only support a static namespace selector, which is why the required label is added to newly created namespaces.
Configuration validation—The webhook validates DynaKube and EdgeConnect custom resources when they are created or updated.
Check DynaKube parameters and EdgeConnect parameters for more information on each field!
Conversion between versions—The webhook converts DynaKube custom resources created with older API versions to the current API version.
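For the pod mutation described above, injection can usually be controlled per workload through annotations on the pod template. The annotation key below is the commonly documented one for OneAgent injection, but treat it as an assumption and verify it against your Operator version.

metadata:
  annotations:
    oneagent.dynatrace.com/inject: "false"   # assumed annotation key; excludes this pod from code-module injection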
Default configuration:
The Dynatrace Webhook container has default requests and limits defined. If you want to set different resource requests or limits, you can do so when deploying Dynatrace Operator with Helm.
To set resource limits, modify the values.yaml file. See the default configuration below.

webhook:
  requests:
    cpu: 300m
    memory: 128Mi
  limits:
    cpu: 300m
    memory: 128Mi
A full list of resources that are accessed by the webhook can be found in Dynatrace Operator security. Ingress and egress are documented in Network traffic.
The Dynatrace CSI driver provides OneAgent code modules for the application pods while minimizing storage usage and load on the Dynatrace environment. In addition, it provides writable volume storage for OneAgent, code-module configurations, and logs, utilizing ephemeral local volumes.
For applicationMonitoring configurations, it provides the necessary OneAgent binary for application monitoring to the pods on each Node.
For hostMonitoring configurations, it provides a writable folder for the OneAgent configurations when a read-only host file system is used.
For cloudNativeFullStack, it provides both of the above.
Minimizes downloads by downloading the code modules once per Node and storing them on the Node's filesystem.
Minimizes storage usage by storing the code modules on the Node's filesystem and creating an OverlayFS mount for each injected Pod.
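To illustrate how code modules reach application pods, the webhook adds a CSI ephemeral volume backed by this driver to injected pods. The sketch below shows the rough shape of such a volume; the volume name is an assumption, the driver name and any volume attributes may differ by version, and the volume is added automatically rather than written by hand.

volumes:
  - name: oneagent-bin                     # assumed volume name
    csi:
      driver: csi.oneagent.dynatrace.com   # CSI driver name as registered in the cluster (verify for your version)
      readOnly: true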
Default configuration:
The CSI Driver provisioner does not have any predefined resource limits. However, if you want to set resource limits or modify resource limits for other containers of the CSI Driver, you can do so when deploying Dynatrace Operator with Helm.
To set resource limits for the provisioner container, modify the values.yaml file. See the example configuration below.

provisioner:
  resources:
    requests:
      cpu: 300m
      memory: 100Mi
    limits:
      cpu: 300m
      memory: 100Mi
server:
  resources:
    requests:
      cpu: 50m
      memory: 100Mi
    limits:
      cpu: 50m
      memory: 100Mi
The Dynatrace CSI driver requires elevated permissions to create and manage mounts on the host system. Specifically, mountPropagation: Bidirectional is needed on the volume where the CSI driver stores the code modules, and this mount propagation mode is only available to privileged containers.
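A minimal sketch of what this looks like in the CSI driver's DaemonSet pod spec is shown below. The container name, volume name, and mount path are assumptions chosen for illustration; privileged and mountPropagation are standard Kubernetes fields.

containers:
  - name: provisioner                    # assumed container name
    securityContext:
      privileged: true                   # Bidirectional mount propagation requires a privileged container
    volumeMounts:
      - name: data-dir                   # assumed volume name for the code-module storage
        mountPath: /data                 # assumed path
        mountPropagation: Bidirectional  # propagates mounts created inside the container back to the host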
A full list of resources that are accessed by the CSI Driver can be found in the Security documentation.
Ingress and egress are documented in the Network traffic documentation.