Dynatrace SaaS only
As of May 22, 2024, the Early Adopter release phase for custom Davis AI events on Grail ends, and the licensing of all event types within the product category Events powered by Grail is harmonized under the platform pricing model described below. This means you'll be billed for ingesting, retaining, and querying event data via DQL, as is already the case for business events. In addition, more event types will be added to Events powered by Grail over time, including Davis AI problems and events, Kubernetes events, and security events.
While there are no additional costs or licensing involved in the default monitoring and reporting of built-in event types via OneAgent or cloud integrations, you have the option to configure custom events and/or event-ingestion channels. Such event-related customizations generate additional consumption because they require significantly more processing and analytical power than built-in event ingestion via OneAgent or cloud integrations.
The unit of measure for consumed events is the data volume of ingested events in gibibytes (GiB).
The overall consumption model for Events powered by Grail is based on three dimensions of data usage (Ingest & Process, Retain, and Query).
| | Ingest & Process | Retain | Query |
|---|---|---|---|
| Definition | Ingested data is the amount of raw data in bytes sent to Dynatrace before enrichment and transformation. | Retained data is the amount of data saved to storage after data parsing, enrichment, transformation, and filtering but before compression. | Queried data is the data read during the execution of a DQL query. |
| Unit of measure | per gibibyte (GiB) | per gibibyte-day (GiB-day) | per gibibyte scanned (GiB scanned) |
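All three dimensions are metered in gibibytes, while the raw usage records you can query later in this article report bytes (the billed_bytes field). As a minimal sketch, assuming nothing beyond 1 GiB = 2^30 bytes, the conversion looks like this:

```python
# Convert raw byte counts (as reported in billed_bytes) to gibibytes.
# 1 GiB = 2**30 bytes = 1,073,741,824 bytes.
def bytes_to_gib(billed_bytes: int) -> float:
    return billed_bytes / (1024 ** 3)

print(bytes_to_gib(5_368_709_120))  # -> 5.0
```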
Custom created/ingested or subscribed events that might be configured for an environment include business events, custom Davis AI and Kubernetes events, security events, and other custom generic events. Usage of different kinds of events affects your consumption differently, as outlined below. Usage recorded for these event types (in particular, event data stored in custom buckets using the Storage Management app) results in consumption for Ingest & Process, Retain, and Query.
Here's what's included with the Ingest & Process data-usage dimension:
| Concept | Explanation |
|---|---|
| Data delivery | Delivery of events via OneAgent, RUM JavaScript, or Generic Event Ingestion API (via ActiveGate) |
| Topology enrichment | Enrichment of events with data source and topology metadata |
| Data transformation | Transformation of event data during processing, for example to drop unwanted event attributes |
| Data-retention control | Manage data retention periods of incoming events based on bucket assignment rules |
| Conversion to timeseries | Create metrics from event attributes. Note that creating custom metrics generates additional consumption beyond the consumption for ingestion and processing. |
Apply the following calculation to determine your consumption for the Ingest & Process data-usage dimension:
(number of gibibytes ingested) × (gibibyte price as per your rate card) = consumption in your local currency
Be aware that data enrichment and processing typically increase your data volume by 0.5–1.0 kB per event. Depending on the data source and the attributes and metadata added during processing, the total data volume after processing can increase by a factor of 1.5 or more. Processing can also be used to drop unwanted event attributes to reduce the retained data volume.
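As a minimal sketch of this calculation (the ingest volume and the per-GiB price below are hypothetical placeholders, not actual rate-card values):

```python
# Hypothetical inputs for illustration only; substitute your measured
# ingest volume and the GiB price from your own rate card.
ingest_gib_per_day = 5.0        # raw event data ingested per day
ingest_price_per_gib = 0.001    # placeholder price in your local currency

yearly_ingest_gib = ingest_gib_per_day * 365
yearly_consumption = yearly_ingest_gib * ingest_price_per_gib
print(f"{yearly_ingest_gib:,.0f} GiB/year -> {yearly_consumption:.2f} in local currency")
```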
Here's what's included with the Retain data-usage dimension:
| Concept | Explanation |
|---|---|
| Data availability | Retained data is accessible for analysis and querying until the end of the retention period. |
| Retention periods | Choose a retention period per storage bucket. |
Apply the following calculation to determine your consumption for the Retain data-usage dimension:
(number of GiB of processed data added to storage per day) × (retention period in days) × (GiB-day price as per your rate card) × (number of days that data is stored) = consumption in your local currency
Retention period in days is based on the retention period of the storage bucket under analysis (for example, 35 days if you're analyzing the default_logs bucket).
Number of days that data is stored reflects the analysis period (for example, 30 days if you're analyzing the monthly cost, or 365 days for a full year).
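A sketch of this calculation under the same assumptions as the examples below (a constant daily volume and a bucket-level retention period); the GiB-day price is a placeholder:

```python
# Hypothetical inputs; the GiB-day price is a placeholder, not a real rate.
stored_gib_per_day = 9.0    # processed data added to storage each day
retention_days = 35         # retention period of the bucket under analysis
gib_day_price = 0.0001      # placeholder GiB-day price in your local currency
analysis_days = 30          # 30 for a monthly view, 365 for a full year

# After the ramp-up period, steady-state storage equals
# daily volume multiplied by the retention period.
steady_state_gib = stored_gib_per_day * retention_days      # 315 GiB
consumption = steady_state_gib * gib_day_price * analysis_days
print(f"{steady_state_gib:.0f} GiB retained -> {consumption:.2f} over {analysis_days} days")
```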
Query data usage occurs whenever data is read during the execution of a DQL query.
Here's what's included with the Query data-usage dimension:
| Concept | Explanation |
|---|---|
| On-read parsing | Parse raw event data at query time (schema on read) rather than during ingestion. |
| Aggregation | Perform aggregation, summarization, or statistical analysis of event data across specific timeframes or time patterns (for example, occurrences in 30-second or 10-minute intervals), or apply mathematical or logical functions. |
| Reporting | Create reports or summaries with customized fields (columns) by adding, modifying, or dropping existing event attributes. |
| Context | Use DQL to analyze event data in context with relevant data on the Dynatrace platform, for example, user sessions or distributed traces. |
Apply the following calculation to determine your consumption for the Query data-usage dimension:
(number of GiB of uncompressed data read during query execution) × (GiB scanned price as per your rate card) = consumption in your local currency
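And a corresponding sketch for the Query dimension (again with a placeholder price):

```python
# Hypothetical inputs; the price per GiB scanned is a placeholder.
gib_read_per_day = 60.0          # uncompressed data read by DQL queries per day
query_price_per_gib = 0.0005     # placeholder price in your local currency

yearly_gib_read = gib_read_per_day * 365
yearly_consumption = yearly_gib_read * query_price_per_gib
print(f"{yearly_gib_read:,.0f} GiB scanned/year -> {yearly_consumption:.2f}")
```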
The following example calculations show how each data-usage dimension contributes to overall usage and consumption.
Let's assume that you ingest 5 GiB of event data per day into Dynatrace. The yearly consumption for Ingest & Process is calculated as follows:
| | Amount | Calculation |
|---|---|---|
| Ingest volume per day | 5 GiB | |
| Ingest volume per year | 1,825 GiB | 5 (GiB per day) × 365 (days) |
| Consumption per year in your local currency | | 1,825 (GiB per year) × ingest price as per your rate card |
After processing, enriched data of 9 GiB (5 GiB × 1.8 for enrichment) is added to storage daily and retained for 35 days. The daily and yearly consumption (after a ramp-up period of 35 days) for Retain is calculated as follows:
| | Amount | Calculation |
|---|---|---|
| Retained volume for 1 day | 9 GiB | 5 (GiB per day) × 1.8 (enrichment) |
| Retained volume for 35 days | 315 GiB | 9 (GiB per day) × 35 (days) |
| Consumption per day in your local currency | | 315 (GiB) × GiB-day price as per your rate card |
| Consumption per year in your local currency | | 315 (GiB) × GiB-day price as per your rate card × 365 (days) |
If you add the same amount of processed data to storage daily and the retention period is set to 365 days, the yearly consumption (after a ramp-up of 365 days in this case) for Retain is calculated as follows:
| | Amount | Calculation |
|---|---|---|
| Retained volume for 1 day | 9 GiB | 5 (GiB per day) × 1.8 (enrichment) |
| Retained volume for 365 days | 3,285 GiB | 9 (GiB per day) × 365 (days) |
| Consumption per year in your local currency | | 3,285 (GiB) × GiB-day price as per your rate card × 365 (days) |
Let's assume that, to resolve incidents and analyze performance issues, your team executes DQL queries that read a total of 60 GiB of data per day. The yearly consumption for Query is calculated as follows:
| | Amount | Calculation |
|---|---|---|
| Data volume read per day | 60 GiB | |
| Data volume read per year | 21,900 GiB | 60 (GiB per day) × 365 (days) |
| Consumption per year in your local currency | | 21,900 (GiB per year) × query price as per your rate card |
The total annual consumption for this example scenario (with 35 days of data retention) is the sum of the yearly consumption for Ingest & Process, Retain, and Query.
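To tie the example together, the sketch below sums all three dimensions for the 35-day-retention scenario above; the three unit prices are placeholders you'd replace with your rate-card values.

```python
# Volumes taken from the example scenario above; all prices are placeholders.
INGEST_PRICE_PER_GIB = 0.001      # placeholder
GIB_DAY_PRICE = 0.0001            # placeholder
QUERY_PRICE_PER_GIB = 0.0005      # placeholder

ingest_gib_per_year = 5.0 * 365           # 1,825 GiB ingested
steady_state_gib = 5.0 * 1.8 * 35         # 315 GiB retained after ramp-up
queried_gib_per_year = 60.0 * 365         # 21,900 GiB scanned

total = (ingest_gib_per_year * INGEST_PRICE_PER_GIB
         + steady_state_gib * GIB_DAY_PRICE * 365
         + queried_gib_per_year * QUERY_PRICE_PER_GIB)
print(f"Total annual consumption: {total:.2f} in your local currency")
```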
Your organization's consumption of each Dynatrace capability accrues costs towards your annual commitment as defined in your rate card. Your Dynatrace Platform Subscription provides daily updates about accrued usage and related costs. You can access these details anytime via Account Management (Subscription > Overview > Cost and usage details > Events – Ingest & Process or Retain or Query > Actions > View details) or the Dynatrace Platform Subscription API.
On the Capability cost and usage analysis page, select a specific environment to analyze that environment's cost and usage for a specific capability. At the environment level, Dynatrace provides pre-made Notebooks for each capability that you can use for detailed analysis (Actions > View details).
The following DQL query provides an overview of total Events – Ingest & Process usage in gibibytes:
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT" and event.type == "Events - Ingest & Process"
| dedup event.id
| summarize {`Total GiB` = sum(billed_bytes / 1024 / 1024 / 1024.0)}
The example below shows the total usage visualized as a single-value chart.
The following DQL query provides an overview of Ingest & Process usage by bucket:
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT" and event.type == "Events - Ingest & Process"
| join [fetch dt.system.buckets], kind: leftOuter, on: { left[usage.event_bucket] == right[name] }
| dedup event.id
| summarize {billed_GiB = sum(billed_bytes / 1024 / 1024 / 1024.0)}, by: {timestamp, event.id, right.display_name, event.type}
| makeTimeseries `Total GiB` = sum(billed_GiB), by: {right.display_name, event.type}, time: timestamp
The example below shows the daily usage visualized as a line chart.
The following DQL query provides the hourly Ingest & Process usage:
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT" and event.type == "Events - Ingest & Process"
| dedup event.id
| summarize {`bucket GiB` = sum(billed_bytes / 1024 / 1024 / 1024.0)}, by: {billing_period = bin(timestamp, 1h), usage.event_bucket}
| summarize {`Total GiB` = sum(`bucket GiB`), `Total billed buckets` = collectDistinct(record(`Bucket` = usage.event_bucket, `Bucket GiB` = `bucket GiB`))}, by: {`Billing period` = billing_period}
The example below shows the hourly usage by bucket visualized in a nested table view.
The following DQL query provides the hourly Retain usage by bucket:
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT" and event.type == "Events - Retain"
| summarize {usage.event_bucket = takeLast(usage.event_bucket), billed_bytes = takeLast(billed_bytes)}, by: {billing_period = bin(timestamp, 1h), event.id}
| fieldsAdd bytes_and_bucket = record(bucket = usage.event_bucket, billed_bytes = billed_bytes)
| summarize {`total billed_bytes` = sum(billed_bytes), `billed_bytes by bucket` = collectDistinct(bytes_and_bucket)}, by: {billing_period}
| fields billing_period, `total billed_bytes`, `billed_bytes by bucket`
The example below shows the hourly usage by bucket visualized in a nested table view.
The following DQL query provides an overview of total Events – Query usage in gibibytes scanned:
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT"
| filter event.type == "Events - Query" or event.type == "Events - Query - SaaS"
| dedup event.id
| summarize {data_read_GiB = sum(billed_bytes / 1024 / 1024 / 1024.0)}, by: {startDay = bin(timestamp, 1d)}
The example below shows the daily usage visualized as a line chart.
The following DQL query provides an overview of the Events – Query usage by application:
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT"
| filter event.type == "Events - Query" or event.type == "Events - Query - SaaS"
| fieldsAdd query_id = if(event.version == "1.0", event.id, else: query_id)
| dedup event.id
| summarize {data_read_GiB = sum(billed_bytes / 1024 / 1024 / 1024.0), Query_count = countDistinctExact(query_id)}, by: {App_context = client.application_context, application_detail = client.source, User = user.email}
| fieldsAdd split_by_user = record(data_read_GiB, App_context, application_detail, User, Query_count)
| summarize {split_by_user = arraySort(collectArray(split_by_user), direction: "descending"), data_read_GiB = sum(data_read_GiB), Query_count = sum(Query_count)}, by: {App_context, application_detail}
| fieldsAdd split_by_user = record(App_context = split_by_user[][App_context], application_detail = split_by_user[][application_detail], User = split_by_user[][User], data_read_GiB = split_by_user[][data_read_GiB], data_read_pct = (split_by_user[][data_read_GiB] / data_read_GiB * 100), Query_count = split_by_user[][Query_count])
| fieldsAdd split_by_user = if(arraySize(split_by_user) == 1, arrayFirst(split_by_user)[User], else: split_by_user)
| fieldsAdd application_details = record(data_read_GiB, App_context, application_detail, split_by_user, Query_count)
| summarize {application_details = arraySort(collectArray(application_details), direction: "descending"), data_read_GiB = sum(data_read_GiB), Query_count = toLong(sum(Query_count))}, by: {App_context}
| fieldsAdd application_details = record(App_context = application_details[][App_context], application_detail = application_details[][application_detail], split_by_user = application_details[][split_by_user], data_read_GiB = application_details[][data_read_GiB], data_read_pct = application_details[][data_read_GiB] / data_read_GiB * 100, Query_count = application_details[][Query_count])
| fieldsAdd key = 1
| fieldsAdd total = lookup([
    fetch dt.system.events
    | filter event.kind == "BILLING_USAGE_EVENT" and (event.type == "Events - Query" or event.type == "Events - Query - SaaS")
    | dedup event.id
    | summarize total = sum(billed_bytes / 1024 / 1024 / 1024.0)
    | fieldsAdd key = 1
  ], sourceField: key, lookupField: key)[total]
| fields App_context, application_details, data_read_GiB, data_read_pct = data_read_GiB / total * 100, Query_count
| sort data_read_GiB desc
The example below shows the usage by application visualized in a nested table view.