Release date: January 27th, 2026
Dynatrace Operator version 1.8 introduces a new default and recommended DynaKube CRD version v1beta6. We encourage you to update your existing DynaKube resources to this latest version to take advantage of new features and enhancements.
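Updating an existing DynaKube resource to the new CRD version usually only requires bumping its apiVersion. A minimal hedged sketch (the apiUrl value is a placeholder; your spec fields carry over unchanged):

```yaml
apiVersion: dynatrace.com/v1beta6   # updated from an older version such as v1beta3
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api   # placeholder environment URL
```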
Dynatrace Operator can now automatically configure applications instrumented with OpenTelemetry to export traces, metrics, and logs to Dynatrace via the OTLP exporter auto-configuration feature. This simplifies configuration management and helps ensure consistent telemetry ingest across environments.
For configuration details and examples, see the OTLP auto-configuration guide.
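With auto-configuration enabled, instrumented workloads receive the standard OTLP exporter environment variables defined by the OpenTelemetry SDK specification. An illustrative sketch of the kind of variables involved (the endpoint value is a placeholder, not the exact value the Operator injects):

```yaml
# Illustrative only: environment variables the OpenTelemetry SDK reads.
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: https://ENVIRONMENTID.live.dynatrace.com/api/v2/otlp   # placeholder endpoint
  - name: OTEL_EXPORTER_OTLP_PROTOCOL
    value: http/protobuf
```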
- A new annotation dynatrace.com/split-mounts avoids conflicts with application images that already contain a /var/lib/dynatrace directory. This annotation is mainly intended to allow codemodules injection in ActiveGate pods. It is enabled by default on ActiveGates managed by Dynatrace Operator.
- Memory limits provided via Helm (using IEC suffixes, e.g. 123Mi) are now applied as GOMEMLIMIT.
- .spec.caCertsRef is now used in requests to the Dynatrace API when rolling out EdgeConnect.
- A new Helm value csidriver.priorityClassValue is available. For guidance, see Use priorityClass for critical Dynatrace components.
- Dynatrace Operator now emits a Kubernetes warning event when the DynaKube or EdgeConnect CRD version is not the latest one supported by this Operator release. This makes it easier to identify outdated DynaKube CRD versions.
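The dynatrace.com/split-mounts annotation is set in pod metadata; a hedged sketch (the value format shown here is an assumption, not confirmed by the release notes):

```yaml
metadata:
  annotations:
    dynatrace.com/split-mounts: "true"   # assumed value format for enabling split mounts on this pod
```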
- Added the events.patch permission for the dynatrace namespace.
- The dynatrace-kubernetes-monitoring ServiceAccount has been removed and replaced by the dynatrace-activegate ServiceAccount.
- The dynatrace-kubernetes-monitoring ClusterRole uses Kubernetes ClusterRole aggregation to assign the required permissions to the dynatrace-activegate ServiceAccount during Operator installation. For details, see the ClusterRole aggregation documentation.
- Setting rbac.kspm.create: true now requires rbac.activeGate.create: true and rbac.kubernetesMonitoring.create: true. Be sure to adjust your Helm values if applicable before upgrading.
- Specifying an image in .spec.templates.otelCollector.imageRef is now mandatory when telemetry ingest is enabled.

Handling removed API versions during Dynatrace Operator upgrade
Kubernetes records CRD versions in .status.storedVersions, but doesn’t remove entries when versions are deleted, so old versions accumulate and can block upgrades.
In Dynatrace Operator 1.8.0, API versions v1beta1 and v1beta2 are removed from the DynaKube CRD. If you have been using Dynatrace Operator versions earlier than 1.4.0, those versions are stored in the CRD's .status.storedVersions field and require a cleanup; otherwise, the upgrade to the new API version v1beta6 will fail. With Dynatrace Operator version 1.7.3, we introduced a two-step solution to ensure smooth upgrades.
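For illustration, a CRD that has lived through several Operator releases might carry a status like the following sketch (the exact set of stale entries depends on which Operator versions were installed):

```yaml
status:
  storedVersions:
    - v1beta1   # stale entry left over from an old Operator release
    - v1beta2   # stale entry; removed from the CRD schema in 1.8.0
    - v1beta3   # current storage version before the upgrade
```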
Helm hook for CRD cleanup
Before the actual upgrade, a Helm hook starts a Kubernetes Job that erases obsolete API versions from the DynaKube CRD's status.storedVersions field. The job only keeps the latest API version that is available in the Dynatrace Operator version installed before the upgrade. This step ensures a smooth upgrade to the new CRD during the Helm upgrade process. See Dynatrace Operator security for information regarding required permissions.
Dynatrace Operator startup migration
After the upgrade was successful and the new Dynatrace Operator version 1.8.0 is operational, it migrates all existing DynaKubes to the latest supported API version v1beta6. After the DynaKube migration, the status.storedVersions field in the DynaKube CRD is updated to hold only the latest API version v1beta6 to ensure consistency.
If you have used Dynatrace Operator version <= 1.2, upgrading to Dynatrace Operator version 1.7.3 is mandatory as an intermediate step before upgrading to later releases to ensure a smooth and reliable transition. As a general rule, skipping versions is not recommended.
Helm based installation
When using Helm to install or upgrade from Dynatrace Operator version >= 1.3.0, no further action is required on your part. The required adjustments are automatically handled by a Helm pre-upgrade hook during the upgrade process.
Alternative installation methods
If you are relying on one of the alternative deployment methods listed below, upgrading to Dynatrace Operator version 1.7.3 is mandatory as an intermediate step before upgrading to later releases to ensure a smooth and reliable transition.
Manual approach
Instead of upgrading to version 1.7.3, you can manually perform the required adjustments to the CRD:
Run the following command to list stored versions in the CRD of your cluster.
kubectl -n dynatrace get crd dynakubes.dynatrace.com -o jsonpath='{.status.storedVersions}'
Continue the procedure if v1beta1 or v1beta2 are listed in the stored versions of the CRD in your cluster.
Identify the currently active version:
storage_version=$(kubectl get customresourcedefinitions dynakubes.dynatrace.com -o jsonpath='{.spec.versions[?(@.storage==true)].name}')
Convert all DynaKubes to the active version:
kubectl get dynakube -n dynatrace -o yaml | kubectl apply -f -
Remove all previous versions while keeping the active version:
kubectl patch customresourcedefinitions dynakubes.dynatrace.com --subresource='status' --type='merge' -p "{\"status\":{\"storedVersions\":[\"${storage_version}\"]}}"
Ensuring that the CRD's .status.storedVersions field is properly cleaned up is crucial to avoid issues with future upgrades.
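The manual cleanup above can be double-checked with a small sketch like the following. The variable values here are assumed examples; on a real cluster you would populate them from kubectl as shown in the comments:

```shell
# Sketch of a final consistency check; the values below are assumed examples.
# On a real cluster, populate them instead with:
#   stored=$(kubectl get crd dynakubes.dynatrace.com -o jsonpath='{.status.storedVersions}')
#   storage_version=$(kubectl get crd dynakubes.dynatrace.com -o jsonpath='{.spec.versions[?(@.storage==true)].name}')
stored='["v1beta6"]'
storage_version='v1beta6'
if [ "$stored" = "[\"$storage_version\"]" ]; then
  echo "storedVersions is clean"
else
  echo "storedVersions still contains old entries"
fi
```

With the example values, the check prints "storedVersions is clean"; any remaining old entries in the array cause the else branch to fire.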
ArgoCD may display resources that are still using an old API version as "out-of-sync".
- It is no longer possible to set codeModulesImage if the Dynatrace Operator CSI driver is disabled and neither applicationMonitoring nor cloudNativeFullStack is used. When attempting to do so, a validation error is raised during DynaKube deployment or update.
- The .spec.templates.kspmNodeConfigurationCollector.nodeAffinity field is now correctly applied to the KSPM Node Configuration Collector DaemonSet.
- The privileged SCC is now used on OpenShift, eliminating the need for additional SCC configuration.
- The k8s.cluster.name attribute used for telemetry ingest now considers custom cluster names instead of only the DynaKube name.
- subPath is no longer used for hostPath volumes, to improve compatibility across Kubernetes distributions (including RKE): the subPath value is now appended at the end of the hostPath.
- Handling of the builtin:app-transition.kubernetes tenant settings object was adjusted to support newer tenants that no longer have this object.
- spec.extensions is enabled, spec.kspm is enabled, or spec.telemetryIngest is enabled.
- Fixed an issue affecting metadataEnrichment and classicFullstack: metadataEnrichment now works with classicFullstack. For more information, see Dynatrace Operator support and known issues.
- Support for Kubernetes 1.27 ended in July 2025. As a result, Dynatrace Operator 1.8.0+ will no longer support this version.
The dynatrace-kubernetes-monitoring ServiceAccount will no longer exist. Instead, an aggregate ClusterRole is now bound to the dynatrace-activegate ServiceAccount, which will be used for Kubernetes monitoring permissions from this version onward. For more information, see the ClusterRole aggregation documentation.
If you have participated in the Kubernetes Enhanced Object Visibility Preview and also unlocked monitoring of the sensitive Kubernetes objects ConfigMaps and Secrets, we recommend the following cleanup step, after upgrading to Operator 1.8.0:
kubectl delete ClusterRoleBinding/dynatrace-kubernetes-monitoring-sensitive
Details: Dynatrace Operator 1.8.0 uses aggregationRules to merge permissions from different ClusterRoles. This makes the ClusterRoleBinding dynatrace-kubernetes-monitoring-sensitive obsolete; it can safely be deleted after upgrading to Operator 1.8.0.
The Helm repository located in dynatrace/helm-charts is deprecated and will stop receiving updates in a future release. If you are still using it, update the URL to dynatrace/dynatrace-operator or switch to the OCI registry-based approach. Update the Helm repository URL with the following commands:
helm repo remove dynatrace
helm repo add dynatrace https://raw.githubusercontent.com/Dynatrace/dynatrace-operator/main/config/helm/repos/stable
- The binaries /usr/local/bin/csi-node-driver-registrar and /usr/local/bin/livenessprobe, along with the Helm chart flags csidriver.registrar.builtIn and csidriver.livenessprobe.builtIn, have been removed (deprecated in v1.7.0). The CSI driver pods now always use the built-in implementations of livenessprobe and the CSI node driver registrar. Customers who previously set these flags to false may observe a change in behavior. The purpose of this change is to minimize vulnerabilities in the Dynatrace Operator by ensuring prompt updates of these components.
- The autoUpdate field has been removed. Automatic updates now follow your tenant's configured target version. To disable automatic updates, set either the version or image field in the DynaKube CR.
- Specifying an image in .spec.templates.otelCollector.imageRef is now mandatory when telemetry ingest is enabled.
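Since the autoUpdate field is gone, disabling automatic updates is done by pinning a version in the DynaKube CR. A hedged sketch for a classic full-stack deployment (the version string is hypothetical; use a version available to your tenant):

```yaml
spec:
  oneAgent:
    classicFullStack:
      version: 1.311.0.20260101-000000   # hypothetical version string; pinning it disables automatic updates
```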
Deprecated DynaKube API versions v1beta1 and v1beta2 have been removed from the DynaKube CRD schema.
DynaKube API version v1beta3 is no longer served and will be removed in a future Dynatrace Operator release. See: Migration guide for DynaKube API versions
Upgrading Dynatrace Operator may restart the ActiveGate, the OneAgent DaemonSet (host agent), and the Log Monitoring DaemonSet.
If you are monitoring Kubernetes through the public Kubernetes API from within an in-cluster ActiveGate, you will need to recreate the bearer token because the name of the used ServiceAccount changed from dynatrace-kubernetes-monitoring to dynatrace-activegate. Follow the instructions at Connect to the public Kubernetes API.
Due to the aforementioned changes to the ActiveGate RBAC objects, setting rbac.kspm.create: true now requires rbac.activeGate.create: true and rbac.kubernetesMonitoring.create: true. Be sure to adjust your Helm values if applicable before upgrading.
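In Helm values terms, the requirement above corresponds to a combination like the following (keys taken from the release notes; surrounding values are minimal):

```yaml
rbac:
  activeGate:
    create: true            # now required when kspm.create is true
  kubernetesMonitoring:
    create: true            # now required when kspm.create is true
  kspm:
    create: true
```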