You can use Cost Allocation to allocate your Dynatrace DPS usage on Kubernetes deployments to the cost centers and products that you define. Allocating costs in this way provides a transparent and detailed account of the Dynatrace expenditures that originate from each cost center, which in turn helps your organization optimize its budgets.
When you base your Cost Allocation on Kubernetes namespaces, which is the recommended method, you can allocate the following data.
Data from Kubernetes deployments with container-based application-only Full-Stack Monitoring (see Calculate your consumption of Full-Stack Monitoring).
Telemetry data (such as logs, metrics, spans, and events) from all Kubernetes deployments.
No matter your use case or tag strategy, you can easily implement Cost Allocation for Kubernetes.
On this page, you'll learn how to set up Cost Allocation in Kubernetes-based deployments.
We recommend setting up Cost Allocation along organizational lines and deployment scopes. Suitable concepts include Kubernetes clusters and Kubernetes namespaces. These attributes are typically available for all the telemetry data that you ingest.
Generally, to add Cost Allocation information to your Kubernetes data, you'll enrich your signals with the dt.cost.costcenter and dt.cost.product Grail attributes.
You can enrich all telemetry data that originates from Kubernetes sources, whether logs, spans, metrics, or events.
Dynatrace automatically propagates these dt.cost.* attributes from spans to the service entity and any resulting service metrics.
For more information, see Which data will be enriched.
If you only need to allocate costs at the deployment level of granularity (clusters or namespaces), we recommend setting up Cost Allocation in OpenPipeline. This lets you base Cost Allocation on your existing organizational lines or deployment scopes.
First, identify the primary Grail fields that you want to base your allocation on.
For Kubernetes, these could be k8s.cluster.name or k8s.namespace.name.
Next, use OpenPipeline to map these to dt.cost.costcenter or dt.cost.product.
OpenPipeline provides a dedicated Cost allocation stage where you can do this. It's available at
Settings > Process and contextualize > OpenPipeline > Logs > Pipelines > [Select a pipeline] > Cost allocation.
For more information, see Processing.
For more information about primary Grail fields, see Global field reference.
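As an illustrative sketch of such a rule (the matcher uses DQL's `matchesValue` function; the namespace pattern and cost-center values below are assumptions, not values from this page), a rule in the Cost allocation stage pairs a matching condition with the attribute values to assign:

```
// Matcher: select records from the namespaces owned by a cost center
// (the "payments-*" pattern is a hypothetical example)
matchesValue(k8s.namespace.name, "payments-*")

// Cost allocation fields set by the rule (example values)
dt.cost.costcenter: finance_it
dt.cost.product:    payments_app
```

With a rule like this in place, every log record flowing through the pipeline from a matching namespace carries the two `dt.cost.*` attributes, so you can later group consumption by cost center.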
Setting Cost Allocation based on the Kubernetes cluster or namespace might not be sufficient, for example, because you have already defined your own cost centers and products outside of Dynatrace, as Kubernetes labels or annotations. In that case, Dynatrace allows you to set up more fine-grained allocation: you can use your own Kubernetes namespace labels or annotations as the source for your Cost Allocation metadata in Dynatrace.
The metadata you create can represent your own security architecture, and can even be hierarchical if you encode it into a string such as department-A/department-AB/team-C.
To allocate costs in line with your existing strategy for Kubernetes tags, there are different ways that you can map your labels or annotations to dt.cost.costcenter and dt.cost.product.
You can either use metadata enrichment, DynaKube custom resources, or dedicated pod annotations, depending on your needs and deployment setup.
Recommended
Using existing namespace labels or annotations is helpful when your cost centers and products are already encoded in namespace metadata. This method lets you use those existing labels and annotations directly as sources for your Cost Allocation metadata.
To set up metadata enrichment, select an existing label and set the Target to dt.cost.costcenter or dt.cost.product as appropriate.
The value of that label is then added as dt.cost.costcenter or dt.cost.product to your telemetry.
For Kubernetes-based deployments, make sure that you have activated metadata enrichment for Dynatrace Operator. To learn more about metadata enrichment, see Metadata enrichment of all telemetry originating from Kubernetes.
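For example, if your namespaces already carry a cost-center label, that label can serve as the enrichment source. The label key and values below are hypothetical examples, not names defined by Dynatrace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    # Hypothetical company label already in use. In the enrichment settings,
    # select this label as the source and set the Target to dt.cost.costcenter.
    example.com/costcenter: it_services
```

Because the label lives on the namespace, every workload deployed into it inherits the same cost-center attribution without per-pod configuration.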
Will be available in Dynatrace version 1.330+.
If you have Kubernetes clusters with cluster-specific rulesets, you can configure metadata enrichment at the Kubernetes cluster level instead.
To do this, go to
Settings > Go to entity > [Select your cluster] > Collect and capture > Cloud and virtualization > Kubernetes Telemetry Enrichment.
DynaKube custom resources are helpful when you have Kubernetes Platform Monitoring with Full-Stack Observability and want to allocate the cost of memory consumption for whole nodes (billed as GiB-hours).
To set up Cost Allocation with DynaKube custom resources, pass the configuration via the args parameter in the DynaKube parameters for Dynatrace Operator, as shown in the example below.
```yaml
spec:
  apiUrl: https://<environment-id>.live.dynatrace.com/api
  oneAgent:
    cloudNativeFullStack:
      args:
        - --set-host-tag=dt.cost.costcenter=it_services
        - --set-host-tag=dt.cost.product=fin_app
```
Dedicated pod annotations are useful when you need to assign cost centers or products to individual workloads. However, you should use this method only in scenarios where you can't use namespace labels or annotations as a source. Unlike the settings-based approach, manually added pod annotations don't provide complete enrichment: they don't enrich Kubernetes metrics, Kubernetes events, Kubernetes Smartscape entities, or Prometheus metrics.
The code block below shows how to use dedicated pod annotations to map Kubernetes tags to Cost Allocation attributes.
```yaml
metadata:
  annotations:
    metadata.dynatrace.com/dt.cost.costcenter: it_services
    metadata.dynatrace.com/dt.cost.product: fin_app
```
If you have a more sophisticated strategy for cost center tags, it might not be enough to map Kubernetes labels/tags directly to dt.cost.costcenter or dt.cost.product.
In this case, we recommend that you use metadata enrichment to enrich all telemetry data with the Kubernetes labels or annotations that you need to define a cost center or product. Then, use OpenPipeline to derive cost centers or products based on those labels or annotations. Adding new attributes is described in OpenPipeline processing examples.
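As a hedged sketch of such a derivation (the enriched label field names and the department/team values are assumptions about your own labels, not fields guaranteed by Dynatrace), an OpenPipeline processor could combine several enriched labels into one hierarchical cost center using DQL's `concat` function:

```
// Derive a hierarchical cost center from enriched namespace labels
// (hypothetical label fields: department and team)
fieldsAdd dt.cost.costcenter = concat(k8s.namespace.label.department, "/", k8s.namespace.label.team)
```

A record enriched with department "department-A" and team "team-C" would then carry a cost center such as department-A/team-C, matching the hierarchical encoding described above.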