Dynatrace version 1.264+
Containerized, auto-scalable private Synthetic locations on Kubernetes and its commercial distribution OpenShift are an alternative to deploying Synthetic-enabled ActiveGates on separate hosts or virtual machines and then assigning them to private locations for the execution of synthetic monitors.
Unlike individual Synthetic-enabled ActiveGates that are deployed and assigned to private locations (and then tracked via utilization metrics), containerized locations are deployed as a whole, with a minimum and maximum number of ActiveGates as the required input parameters.
Kubernetes and OpenShift aren't just additional supported ActiveGate platforms alongside Windows and Linux; containerized private Synthetic locations offer more:
You can run scheduled as well as on-demand executions of all types of synthetic monitors on containerized locations.
You can manage Kubernetes/OpenShift locations via the Dynatrace web UI and the existing Synthetic - Locations, nodes, and configuration API v2. Additional Early Adopter endpoints in this API facilitate the deployment of Kubernetes locations; the new endpoints help you generate the commands that need to be executed on the Kubernetes cluster.
Containerized private Synthetic locations are deployed as a whole.
Each location has multiple Synthetic-enabled ActiveGates configured as pods. You specify a minimum and maximum number of ActiveGates when setting up a location.
The StatefulSet is considered the location.
You can have one or more locations per namespace. See Requirements and Best practices and caveats below.
You can have one or more auto-scalable locations per Kubernetes cluster.
Locations are scaled automatically: the number of ActiveGates per location is adjusted by the following additional components of the containerized location architecture.
The Synthetic metric adapter requests and receives utilization metrics for the containerized ActiveGates from the Dynatrace Cluster.
There is one Synthetic metric adapter per Kubernetes cluster.
The metric adapter is configured to communicate with a single Dynatrace environment.
Installing a Synthetic metric adapter requires super-user roles in Kubernetes—see Install a containerized location below.
The horizontal pod auto-scaler scales a location by adjusting the number of ActiveGates based on the utilization data it receives from the Synthetic metric adapter.
There is one horizontal pod auto-scaler per location.
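The horizontal pod auto-scalers are created for you by the Dynatrace-provided templates. Purely as an illustrative sketch (the resource names, metric name, and target value below are hypothetical, not the actual generated configuration), an auto-scaler driven by an external metric has this general shape:

# Illustrative sketch only - the real auto-scaler is created by the provided templates.
# Names and the metric below are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: synthetic-location-hpa          # hypothetical name
  namespace: dynatrace                  # location namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: activegate-name               # the StatefulSet that represents the location
  minReplicas: 1                        # minimum number of ActiveGates
  maxReplicas: 3                        # maximum number of ActiveGates
  metrics:
    - type: External
      external:
        metric:
          name: synthetic.location.utilization   # placeholder metric name served by the adapter
        target:
          type: Value
          value: "80"                   # scale out when utilization exceeds the threshold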
Containerized private Synthetic locations are supported with Dynatrace version 1.264+ on Kubernetes 1.22–1.25 with persistent volume and kubectl support.
Internet connectivity is required to access the public repositories where Docker images for the Synthetic-enabled ActiveGate and Synthetic metric adapter are available. These image locations are referenced in the respective template files—see Install a containerized location and Update a containerized location below.
The ActiveGate hardware requirements below are listed by size.
CPU and RAM requests refer to the resources reserved by pods upon creation.
CPU and RAM limits refer to the maximum resource consumption per pod.
If the location is monitored by OneAgent or another deep monitoring solution, memory requirements will increase.
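To make the distinction between requests and limits concrete, here is a minimal, hypothetical container resources block; the actual values are set in the generated location template according to the ActiveGate size you select:

# Hypothetical values for illustration only - the generated template sets the
# actual requests and limits for the selected ActiveGate size.
resources:
  requests:
    cpu: "1"        # reserved for the pod at creation
    memory: 2Gi
  limits:
    cpu: "2"        # maximum the pod may consume
    memory: 4Gi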
ActiveGates
Locations
We recommend installing each location in its own namespace.
If deploying more than one location per namespace, use different names for the respective ActiveGate resources—see Install a containerized location below.
Locations that share a single Kubernetes namespace must be connected to the same Dynatrace environment as the Synthetic metric adapter in order to be auto-scalable. For example, assume that Location A and the metric adapter are configured for Environment X. However, Location A shares a namespace with Location B, which is configured for Environment Y. In such a case, Location A is auto-scalable; Location B is not auto-scalable.
If you want to install a location in the same namespace as other Dynatrace resources such as Dynatrace Operator, be aware of the more demanding hardware and system requirements for containerized Synthetic-enabled ActiveGates.
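For example, assuming hypothetical namespace names, you could create a dedicated namespace for each location before deploying it:

# Hypothetical namespace names - use the location namespace you enter in the web UI.
kubectl create namespace synthetic-location-a
kubectl create namespace synthetic-location-b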
Synthetic metric adapter
For auto-scaling purposes, the Synthetic metric adapter needs access to and extends the Kubernetes API by registering a new API service, v1beta1.external.metrics.k8s.io.
This API service is defined in the Synthetic metric adapter template—see Install a containerized location below.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.external.metrics.k8s.io
spec:
  service:
    name: dynatrace-metrics-apiserver
    namespace: {{adapterNamespace}}
  group: external.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
Note that any other metric adapter, such as a Prometheus adapter, in the Kubernetes cluster might also use this service.
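After you deploy the metric adapter (see Install a containerized location below), you can confirm that the API service is registered with a generic kubectl query:

# Generic check - lists the external metrics API service registered by the adapter.
kubectl get apiservice v1beta1.external.metrics.k8s.io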
The Synthetic metric adapter also modifies an existing resource in its template—the horizontal-pod-autoscaler ServiceAccount in the kube-system namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hpa-controller-dynatrace-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dynatrace-metrics-server-resources
subjects:
  - kind: ServiceAccount
    name: horizontal-pod-autoscaler
    namespace: kube-system
Set up and manage a containerized location in the Dynatrace web UI at Settings > Web and Mobile monitoring > Private Synthetic locations.
Initial setup for a Kubernetes/OpenShift location
Deploy the location
Deploy the Synthetic metric adapter
Select Add Kubernetes location or Add OpenShift location on the Private Synthetic locations page.
Provide a Location name of your choice.
Select a Geographic location, for example, San Francisco, California, United States. (Note that you cannot Save changes until you've specified a name and location.)
In the ActiveGates section:
Specify the minimum and maximum number of ActiveGates, and select an ActiveGate size (XS, S, or M). See also Requirements and Best practices and caveats.
Kubernetes only If your Kubernetes implementation is based on a later release than 1.21–1.25, turn on Use Kubernetes version 1.26+. See also Requirements and Best practices and caveats.
If you change this setting after downloading the location template, you need to repeat the deployment procedure.
optional Turn on problem generation if all ActiveGates go offline.
Save changes before proceeding with deploying the location and metric adapter.
Your named location is displayed on the Private Synthetic locations page with the Kubernetes or OpenShift logo. Note that no ActiveGates are assigned to the location at this point.
Select your location in Private Synthetic locations to download the location template and generate the commands that need to be executed on the Kubernetes cluster.
In the Deployment section, create a PaaS token (Create token) or paste an existing token. The PaaS token is required to generate ActiveGate connection tokens for communication with your Dynatrace environment.
Existing tokens are listed on the Access tokens page. Note that a PaaS token is only displayed once upon creation, after which it's stored encrypted and can't be revealed. We recommend storing your PaaS token in a password manager so that you can reuse it for creating additional private locations within your Kubernetes cluster.
Provide an ActiveGate name or use the default. This name is used as the prefix for ActiveGates deployed as part of the location. The first ActiveGate is named <prefix>-0, the second ActiveGate <prefix>-1, and so on. This name is also used as the StatefulSet name.
Provide a Location namespace name or use the default. (Leave the Metric adapter namespace as is. This field is only necessary for generating the template for the Synthetic metric adapter.)
The Download synthetic.yaml button is enabled after you provide a PaaS token, ActiveGate name, and location namespace name.
The values of fields in the Deployment section are not persistent. If you navigate away from the page, you need to re-enter the values.
Select Download synthetic.yaml. This is the location template file. You can rename the file to match your location for easy identification.
Copy the downloaded location template over to your Kubernetes cluster.
Copy and execute the generated commands on your Kubernetes cluster. Your PaaS token is automatically appended to the commands displayed.
Execute the commands from the same location as the template file.
If you've renamed the template file, use the new filename in the commands.
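The generated commands are specific to your environment and embed your PaaS token. Purely as a hypothetical sketch (the namespace, secret name, and token handling here are assumptions, not the actual generated commands), they follow this general pattern:

# Hypothetical sketch only - copy the actual commands from the web UI.
kubectl create namespace dynatrace
kubectl -n dynatrace create secret generic <secret-name> --from-literal=token=<your-PaaS-token>
kubectl -n dynatrace apply -f ./synthetic.yaml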
optional Run the following command to list all pods in a given namespace (dynatrace in the sample below) and verify their deployment.
kubectl get pod -n dynatrace
You can also view pods on the Deployment Status > ActiveGates page by filtering with the key-value pairs Running in container: True and With modules: Synthetic.
This procedure generates a separate template for the Synthetic metric adapter. You then execute generated commands on your Kubernetes cluster to deploy the metric adapter.
In the Deployment section, create a Metrics token (Create token) or paste an existing token. The metric token is an access token for fetching utilization data from Dynatrace. Existing tokens are listed on the Access tokens page.
Provide a Metric adapter namespace name or use the default. (Leave the Location namespace and ActiveGate name as is. These fields are only necessary for generating the template for the location.)
The Download synthetic-adapter.yaml button is enabled after you provide a metric token and metric adapter namespace name.
The values of fields in the Deployment section are not persistent. If you navigate away from the page, you need to re-enter the values.
Select Download synthetic-adapter.yaml. This is the template file for the Synthetic metric adapter.
Copy the downloaded metric adapter template over to your Kubernetes cluster.
Copy and execute the generated commands on your Kubernetes cluster. Your metric token is automatically appended to the commands displayed.
Execute the commands from the same location as the template file.
If you've renamed the template file, use the new filename in the commands.
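To verify the deployment, you can list the pods in the metric adapter namespace (the namespace name below is a placeholder):

kubectl get pods -n <metric-adapter-namespace>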
Any updates to a location require that you download the location template file again and apply the changes via kubectl.
To update ActiveGate versions
Download the location template file.
Execute the following command to apply the changes on your Kubernetes cluster. Be sure to use your location template filename in place of synthetic.yaml. Execute this command from the same location as the template file.
kubectl apply -f ./synthetic.yaml
Any update redeploys ActiveGates in the reverse order of their deployment. For example, if your location contains the ActiveGates activegate-name-0 and activegate-name-1, activegate-name-1 is stopped and redeployed first.
The redeployed ActiveGate pod uses the same persistent volume as before, which preserves log continuity.
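If you want to confirm that persistent volumes are reused after a redeployment, you can list the persistent volume claims in the location namespace (dynatrace is used as a sample namespace here):

kubectl get pvc -n dynatrace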
The commands generated when deploying a location and the Synthetic metric adapter also include code snippets for deleting them on Kubernetes. You may copy and store these commands for future reference.
At any point, you can regenerate the commands for the respective namespaces.
Location
Select your location in Private Synthetic locations.
Re-enter the PaaS token, ActiveGate name, and Location namespace name.
Copy and use the location Cleanup commands.
Note that this procedure only deletes Kubernetes resources; it doesn't delete the location you initially set up in Dynatrace.
Synthetic metric adapter
Select your location in Private Synthetic locations.
Re-enter the Metrics token and Metric adapter namespace name.
Copy and use the metric adapter Cleanup command.
If the Synthetic metric adapter is deleted or stops working, horizontal pod auto-scalers can no longer receive utilization data from Dynatrace, and your containerized locations become non-scalable.
Using a multi-availability zone (multi-AZ) cluster with deployments that use PVCs can result in pods being stuck in a pending state upon recreation. This happens because storage volumes such as EBS are not replicated between zones.
A PVC can only be shared between nodes located in the same availability zone. When you use a multi-AZ cluster and a pod tries to access a PVC from a different availability zone, the pod becomes stuck in a pending state and displays an error message.
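To see why a recreated pod is stuck, you can describe it and review the scheduling events (the pod name and namespace below are placeholders):

kubectl -n dynatrace describe pod <activegate-pod-name>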
Currently, there are two possible solutions for multi-AZ Kubernetes deployments:
You can configure node affinities to use only specific zones for a deployment.
To set node affinity
Run the following command to list your nodes and their labels.
kubectl get nodes --show-labels
Check the zone label failure-domain.beta.kubernetes.io/zone, for example, failure-domain.beta.kubernetes.io/zone=us-east-1a.
Add your own zone label to each node that should run the deployment with kubectl label nodes <node name> <label>=<value>, for example:
kubectl label nodes ip-10-179-202-73.ec2.internal zone=us-east-1a
Add the zone label to the nodeSelector section of the Synthetic deployment template. For example:
spec:
  nodeSelector:
    zone: us-east-1a
Pods scheduled to nodes with the same zone label run in the same availability zone, so they can share the PVC without causing an error.
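To check which nodes (and therefore which zones) the ActiveGate pods were scheduled to, you can use a generic kubectl listing (dynatrace is a sample namespace):

kubectl get pods -n dynatrace -o wide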
Each cloud service provides its own shared storage options. To explain how to use shared storage systems, we use AWS EFS as the example. For information about storage systems offered by other cloud providers, see the respective provider's documentation.
We assume that you already have an EFS file system that you can use. If you don't, see Getting started with Amazon EFS to learn how to set up EFS.
Be aware that EFS may be more expensive than EBS. Check pricing.
To use a storage class with EFS
Define a storage class for EFS, for example:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-test
provisioner: efs.csi.aws.com
parameters:
  fileSystemId: fs-0c155dcd8425aa39d
  provisioningMode: efs-ap
  directoryPerms: "700"
  basePath: "/"
In the location template, reference the storage class in volumeClaimTemplates and use the ReadWriteMany access mode:
volumeClaimTemplates:
  - metadata:
      name: persistent-storage
    spec:
      storageClassName: efs-test   # must match the name of the StorageClass defined above
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 3Gi
Now, if the pod is redeployed on a node in a different zone, the PVC should be automatically bound to the new deployment zone.
Network availability monitors are supported on containerized Synthetic-enabled ActiveGate deployments, but additional permissions are required for ICMP tests.
To enable the ICMP request type for NAM execution
ICMP requests are executed by the ping executable, which requires the CAP_NET_RAW capability set for the container executing the requests (synthetic-vuc).
The allowPrivilegeEscalation property of securityContext for this container has to be set to true, because the process that launches the ping executable doesn't have the required privileges set by default.
The entire securityContext for the synthetic-vuc container with enabled network availability monitors should look as follows.
securityContext:
  readOnlyRootFilesystem: true
  privileged: false
  allowPrivilegeEscalation: true
  runAsNonRoot: true
  capabilities:
    drop: ["all"]
    add: ["NET_RAW"]
OpenShift uses Security Context Constraints (SCCs) to limit the capabilities available to pods.
By default, deployed pods will use the restricted-v2 SCC, which does not allow any additional capabilities.
The recommended solution is to prepare a custom Security Context Constraint.
Create a dedicated Service Account optional
oc -n $NAMESPACE create sa sa-dt-synthetic
oc -n $NAMESPACE adm policy add-role-to-user edit system:serviceaccount:$NAMESPACE:sa-dt-synthetic
Create a custom Security Context Constraint
Create a file named scc-dt-synthetic.yaml with the following content.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: scc-dt-synthetic
allowPrivilegedContainer: false
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: MustRunAs
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - projected
  - secret
users: []
groups: []
priority: null
readOnlyRootFilesystem: true
requiredDropCapabilities:
  - ALL
defaultAddCapabilities: null
allowedCapabilities:
  - NET_RAW
allowPrivilegeEscalation: true
priority can be set to any number between 1 and 9. If there are two or more SCCs that fulfill the requirements, the one with higher priority is selected.
Apply the custom SCC.
oc create -f scc-dt-synthetic.yaml
Add the new SCC to the Service Account used for synthetic deployment
oc -n $NAMESPACE adm policy add-scc-to-user scc-dt-synthetic system:serviceaccount:$NAMESPACE:default
If the dedicated sa-dt-synthetic SA was created, substitute it in place of default.
oc -n $NAMESPACE adm policy add-scc-to-user scc-dt-synthetic system:serviceaccount:$NAMESPACE:sa-dt-synthetic
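To verify which SCC a running synthetic pod was admitted with, you can read its openshift.io/scc annotation (the pod name below is a placeholder):

# Prints the SCC applied to the pod; the pod name is a placeholder.
oc -n $NAMESPACE get pod <activegate-pod-name> -o jsonpath='{.metadata.annotations.openshift\.io/scc}'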
If the OpenShift cluster is deployed as an Azure Red Hat OpenShift (ARO) resource, by default, the Network Security Group won't allow ICMP traffic outside the cluster.
The default ARO Network Security Group is not modifiable, but a custom NSG can be created and imported during the ARO cluster creation. To learn more, see Bring your own Network Security Group (NSG) to an Azure Red Hat OpenShift (ARO) cluster.
Running the cluster with default settings will only allow for using ICMP NAM monitors for resources inside the OpenShift cluster. Any requests going outside the cluster will fail.
Add the following code at the top of your location template file to insert a ConfigMap resource containing your proxy server information.
In the code sample below, the namespace (namespace: dynatrace) must be the location namespace, and the proxy server, port, user, and password are sample values that you replace with your own.
kind: ConfigMap
apiVersion: v1
data:
  custom.properties: |-
    [http.client]
    proxy-server = 10.102.43.210
    proxy-port = 3128
    proxy-user = proxyuser
    proxy-password = proxypass
metadata:
  name: ag-custom-configmap
  namespace: dynatrace
---
Add the following code at template.volumes.
- name: ag-custom-volume
  configMap:
    name: ag-custom-configmap
    items:
      - key: custom.properties
        path: custom.properties
Add the following code to the ActiveGate container configuration under volumeMounts.
- name: ag-custom-volume
  mountPath: /var/lib/dynatrace/gateway/config_template/custom.properties
  subPath: custom.properties
Add the following code to the Synthetic metric adapter template under env.
- name: HTTPS_PROXY
  value: "http://proxyuser:proxypass@10.102.43.210:3128"
- name: NO_PROXY
  value: "172.20.0.0/16" # do not proxy internal calls to the Kubernetes cluster
For more details about these environment variables, see the Go httpproxy package documentation.
How to obtain the Service CIDR (used for NO_PROXY) depends on your Kubernetes distribution. For AWS EKS, for example, you can use the following command:
aws eks describe-cluster --name my-cluster --query 'cluster.kubernetesNetworkConfig'
Auto-scalable locations become non-scalable for any of the following reasons.
The location reaches its maximum number of pods in the StatefulSet, and location utilization is over the threshold of 80%. No new ActiveGate pods are created until the maximum number of ActiveGates is increased.
The Synthetic metric adapter stops working, and the location horizontal pod auto-scalers don't receive the metrics required for auto-scaling.
You can run the following command to verify the state of a pod auto-scaler. In the example below, dynatrace is the location namespace.
kubectl describe hpa -n dynatrace
If ScalingActive is set to False in the output, the auto-scaler isn't receiving metric data.
You can automate the deployment and management of containerized locations via the existing Synthetic - Locations, nodes, and configuration API v2. Early Adopter endpoints were added to this API to facilitate the deployment of Kubernetes locations. The new endpoints help generate the commands you need to execute on the Kubernetes cluster.
/synthetic/locations/{LocationId}/yaml fetches the location template file based on the location ID of the location you initially set up for containerized deployment.
synthetic/locations/commands/apply fetches the list of commands to deploy a location on Kubernetes/OpenShift.
synthetic/locations/{LocationId}/commands/delete fetches the commands to delete a containerized location.
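As an illustration of how such an endpoint might be called (the environment URL, API base path, and token scope shown here are assumptions; check the API reference for the exact request), fetching the location template could look like this:

# Sketch only - environment URL, API token, and location ID are placeholders.
curl -s -H "Authorization: Api-Token <your-API-token>" \
  "https://<your-environment>.live.dynatrace.com/api/v2/synthetic/locations/<LocationId>/yaml" \
  -o synthetic.yaml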