Cloud application and workload detection
Dynatrace automatically detects cloud applications and workloads in your Cloud Foundry and Kubernetes environments. Cloud applications and workloads group similar processes into process groups and services, which also enables version analysis.
Cloud application and workload detection provides:
- Insights into your Kubernetes namespaces, workloads, and pods
- Container resource metrics for Kubernetes and Cloud Foundry containers
- Version detection for services that run in Kubernetes workloads
Starting with Dynatrace version 1.258 and OneAgent version 1.257, this feature is enabled by default.
You can configure Cloud application and workload detection independently for Kubernetes, Cloud Foundry, and plain Docker environments:
- In the Dynatrace menu, go to Settings.
- Select Processes and containers > Cloud application and workload detection.
- Turn on/off Enable cloud application and workload detection […] for Cloud Foundry, Docker, or Kubernetes/OpenShift as needed.
- Select Save changes.
See below for how to use Kubernetes workload properties to group processes of similar workloads. In addition, generic process detection rules still apply, while container- or platform-specific properties are ignored.
Workload detection rules for Kubernetes
By default, Dynatrace separates process groups and services for every Kubernetes workload.
You can define rules to support your release strategies (such as blue/green or canary) by using workload properties like Namespace name, Base pod name, or Container name, as well as the environment variables DT_RELEASE_STAGE and DT_RELEASE_PRODUCT, to group processes of similar workloads. You can also specify the version and build version of the deployed workload by setting the DT_RELEASE_VERSION and DT_RELEASE_BUILD_VERSION environment variables. This gives you extended visibility into the impact of releases on your services.
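For example, these environment variables can be set directly in a container spec. The following is a minimal sketch; the pod name, image, and all values are illustrative placeholders, not values from this document:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payment-pod                        # illustrative name
spec:
  containers:
    - name: payment
      image: example.com/payment:1.4.2     # placeholder image
      env:
        # Release metadata picked up for version and release analysis
        - name: DT_RELEASE_VERSION
          value: "1.4.2"
        - name: DT_RELEASE_BUILD_VERSION
          value: "1.4.2+build.42"
        - name: DT_RELEASE_STAGE
          value: "production"
        - name: DT_RELEASE_PRODUCT
          value: "payment"
```

In practice, prefer deriving these values from Kubernetes labels via the Downward API, as shown in the full deployment example later in this section.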
The rules are scoped for Kubernetes namespaces so that you can easily migrate your existing environment namespace by namespace. The first applicable rule in the list is applied. If no rule matches, the combination of Namespace name, Base pod name, and Container name is used as a fallback.
To create a rule:
- In the Dynatrace menu, go to Settings.
- Select Processes and containers > Cloud application and workload detection.
- Select Add rule.
- Use a combination of the five input variables below to calculate how process groups are detected:
  - Namespace name
  - Base pod name (for example, "paymentservice-" for "paymentservice-5ff6dbff57-gszgq")
  - Container name (as defined in the pod spec)
  - Stage (DT_RELEASE_STAGE)
  - Product (DT_RELEASE_PRODUCT); if Product is enabled and has no value, it defaults to Base pod name.
- Set Match operator and Namespace name to define the namespaces to which you want this rule to apply.
- Select Save changes.
Changing the default rules may lead to the creation of new IDs for process groups and services, and the loss of custom configurations on existing process groups.
Once your process groups or services gain new IDs, be aware that:
- Any custom configurations won't be transitioned to the new process group or service. This includes names, monitoring settings, and error and anomaly detection settings.
- Any existing custom alerts, custom charts on dashboards, or filters that are mapped to the overwritten IDs of individual process groups or services will no longer work or display monitoring data.
- Any integrations with the Dynatrace API where specific process groups or services are queried will need to be updated.
Impact on the default process group naming
OneAgent versions 1.241+
Dynatrace aims to provide intuitive names for process groups that make sense for DevOps teams. Creating rules for Kubernetes workloads also affects the composition of meaningful default names for process groups. The default pattern for {ProcessGroup:DetectedName} is "<tech_prefix> <product> <STAGE> <base_pod_name>", where:
- <tech_prefix> refers to names from detected technologies, such as the name of a technology, executable, path, or startup class.
- <product>, <STAGE>, and <base_pod_name> are optional variables, used only when they are included in the rule that has been applied for the respective Kubernetes namespace.

Example: "index.js emailservice PROD", where:
- index.js is <tech_prefix>
- emailservice is <product>
- PROD is <STAGE>
- <base_pod_name> isn't used in this case, so it's not included in {ProcessGroup:DetectedName}.
If the default process group naming is too generic or doesn't reflect your naming standards, you can always customize it by creating process group naming rules.
Example - best practices with Kubernetes labels
We recommend that you propagate Kubernetes labels to environment variables in the deployment configuration (see Version detection with Kubernetes labels) using the Downward API:
- app.kubernetes.io/version -> DT_RELEASE_VERSION
- app.kubernetes.io/name -> DT_RELEASE_PRODUCT
- app.kubernetes.io/stage -> DT_RELEASE_STAGE
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: emaildeploy
spec:
  selector:
    matchLabels:
      app: emailservice
  template:
    metadata:
      annotations:
        metrics.dynatrace.com/path: /stats/prometheus
        metrics.dynatrace.com/port: "15090"
        metrics.dynatrace.com/scrape: "true"
      labels:
        app: emailservice
        app.kubernetes.io/version: 0.3.6
        app.kubernetes.io/stage: production
        app.kubernetes.io/name: emailservice
        app.kubernetes.io/part-of: online-boutique
    spec:
      serviceAccountName: default
      terminationGracePeriodSeconds: 5
      containers:
        - name: server
          image: gcr.io/google-samples/microservices-demo/emailservice:v0.3.6
          ports:
            - containerPort: 8080
          env:
            - name: PORT
              value: "8080"
            - name: DT_RELEASE_VERSION
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app.kubernetes.io/version']
            - name: DT_RELEASE_PRODUCT
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app.kubernetes.io/name']
            - name: DT_RELEASE_STAGE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app.kubernetes.io/stage']
            - name: DISABLE_TRACING
              value: "1"
            - name: DISABLE_PROFILER
              value: "1"
          readinessProbe:
            periodSeconds: 5
            exec:
              command: ["/bin/grpc_health_probe", "-addr=:8080"]
          livenessProbe:
            periodSeconds: 5
            exec:
              command: ["/bin/grpc_health_probe", "-addr=:8080"]
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 200m
              memory: 128Mi
```
As a next step, this configuration can be leveraged with a single rule: Namespace exists. As a result, Dynatrace merges similar processes and services that share the same Product, Stage, Container name, and Namespace values. Because all these values are identical, this rule also merges workloads across different Kubernetes clusters into the same process group.