As of May 22, 2024, the Early Adopter release phase for custom Davis AI events on Grail ends, and the licensing of all event types within the product category Events powered by Grail will be harmonized under the platform pricing model described below. This means you'll be billed for ingesting, retaining, and querying event data via DQL, as is already the case for business events. In addition, even more event types will be added to Events powered by Grail, including Davis AI problems & events, Kubernetes events, security events, and others to come.
While there are no additional costs or licensing involved in the default monitoring and reporting of built-in event types via OneAgent or cloud integrations, you have the option to configure custom events and/or event-ingestion channels. Such event-related customizations generate additional consumption because they require significantly more processing and analytical power than built-in event ingestion via OneAgent or cloud integrations.
The usage of the Distributed Tracing and Services apps is included with Dynatrace. No consumption is generated by these apps.
The unit of measure for consumed events is the data volume of ingested events in gibibytes (GiB).
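Because the billing system events shown later in this section report usage in bytes (the billed_bytes field), it can help to convert raw byte counts into GiB. A minimal Python sketch of that conversion (1 GiB = 2^30 bytes):

def bytes_to_gib(num_bytes: float) -> float:
    # 1 GiB = 1,073,741,824 bytes
    return num_bytes / 2**30

print(bytes_to_gib(5_368_709_120))  # 5.0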
The overall consumption model for Events powered by Grail is based on three dimensions of data usage. These correspond to the respective capabilities on your rate card.
Ingest & Process
Definition: Ingested data is the amount of raw data in bytes sent to Dynatrace before enrichment and transformation.
Unit of measure: per gibibyte (GiB)

Retain
Definition: Retained data is the amount of data saved to storage after data parsing, enrichment, transformation, and filtering but before compression.
Unit of measure: per gibibyte-day (GiB-day)

Query
Definition: Queried data is the data read during the execution of a DQL query.
Unit of measure: per gibibyte scanned (GiB scanned)
Custom created/ingested or subscribed events that might be configured for an environment include business events, Davis AI problems and events, Kubernetes events, security events, synthetic events, and custom generic events. Usage of different kinds of events affects your consumption differently, as outlined below.
Usage recorded for business events, custom Davis AI and Kubernetes events, security events, and other custom generic events (in particular, event data stored in custom buckets using the Storage Management app) results in consumption for Ingest & Process, Retain, and Query. The following specifics apply:
Davis AI problems and events: Ingest & Process, Retain (included for up to 15 months), and Query are included. If you extend the retention period beyond 15 months globally, Retain charges apply as per your rate card.
Kubernetes warning events: A pod-hour includes 60 Kubernetes warning events per pod, with a default retention period of 15 months. Kubernetes warning events are pooled across all pods, and consumption is calculated in 15-minute intervals. Query usage generated from within the Kubernetes app is included. For queries originating from Dashboards, Notebooks, and Workflows, Query charges apply as per your rate card. For details, refer to Kubernetes Platform Monitoring billing.
Security events: Generally, Ingest & Process, Retain, and Query are charged according to the price on your rate card for Events powered by Grail capabilities. In cases where security events are stored in the default_securityevents_builtin bucket, Ingest & Process and Retain are included for up to 3 years, and Query is included for usage generated in the Vulnerabilities app. For queries originating from Dashboards, Notebooks, and Workflows, Query charges apply as per your rate card. Note: The bucket above has been updated to align with the new Grail security events table. For the complete list of updates and actions needed to accomplish the migration, follow the steps in the Grail security table migration guide.
Synthetic events: Retain usage and queries originating from the Synthetic app are included for the default retention period of 35 days. For queries originating from Dashboards, Notebooks, and Workflows, no charges apply.
Custom generic events: Generic or customer-defined events that are ingested via API and are not listed above, or that are ingested and retained in custom buckets, are charged according to the price on your rate card for Events powered by Grail capabilities.
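For illustration only, the following Python sketch sends a customer-defined event through the Events API v2 ingest endpoint (POST /api/v2/events/ingest); the environment URL, token, and event attributes are placeholders, and whether such events are billed under Events powered by Grail depends on your event and bucket configuration. Each additional custom attribute increases the ingested, and potentially retained, data volume.

import requests

# Placeholders: substitute your environment URL and an API token with the events.ingest scope.
DT_ENV = "https://{your-environment-id}.live.dynatrace.com"
API_TOKEN = "dt0c01.sample-token"

payload = {
    "eventType": "CUSTOM_INFO",                 # built-in event type of the Events API v2
    "title": "Nightly batch import finished",
    "properties": {                              # hypothetical custom attributes
        "job.name": "orders-import",
        "records.processed": "18230",
    },
}

response = requests.post(
    f"{DT_ENV}/api/v2/events/ingest",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    json=payload,
    timeout=10,
)
response.raise_for_status()
print(response.json())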
Here's what's included with the Ingest & Process data-usage dimension:
Data delivery: Delivery of events via OneAgent, RUM JavaScript, or the Generic Event Ingestion API (via ActiveGate)
Topology enrichment: Enrichment of events with data source and topology metadata
Data transformation
Data-retention control: Manage data retention periods of incoming events based on bucket assignment rules
Conversion to timeseries: Create metrics from event attributes (note that creating custom metrics generates additional consumption beyond the consumption for ingestion and processing)
Apply the following calculation to determine your consumption for the Ingest & Process data-usage dimension:
(number of gibibytes ingested) × (gibibyte price as per your rate card) = consumption in your local currency
Be aware that data enrichment and processing typically increase your data volume by 0.5-1.0 kB per event. Depending on the source of the data and the attributes and metadata added during processing, the total data volume after processing can increase by a factor of 1.5 or more. Processing can also be used to drop unwanted event attributes and reduce the retained data volume.
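As a minimal sketch of the Ingest & Process calculation, assuming a placeholder daily ingest volume and rate-card price:

# Hypothetical inputs; replace with your own values.
daily_ingest_gib = 5.0           # raw GiB ingested per day, before enrichment
ingest_price_per_gib = 0.001     # placeholder rate-card price per GiB, in your local currency

yearly_ingest_gib = daily_ingest_gib * 365
ingest_cost = yearly_ingest_gib * ingest_price_per_gib
print(f"{yearly_ingest_gib:,.0f} GiB ingested per year -> {ingest_cost:,.2f} in local currency")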
Here's what's included with the Retain data-usage dimension:
Data availability: Retained data is accessible for analysis and querying until the end of the retention period. Event retention is defined at the bucket level, ensuring tailored retention periods for specific events.
Retention periods: Choose a retention period for each bucket.
Apply the following calculation to determine your consumption for the Retain data-usage dimension:
(number of GiB of processed data ingested per day) × (retention period in days) × (GiB-day price as per your rate card) × (number of days that data is stored) = consumption in your local currency
retention period in days is based on the retention period of the storage bucket under analysis (for example, 35 days if you're analyzing the default_logs bucket).
number of days that data is stored reflects the period during which the data is stored (for example, 30 days if you're analyzing the monthly cost, or 365 days for a full year).
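As a minimal sketch of the Retain calculation, assuming placeholder values for the enriched daily volume, the bucket retention period, and the rate-card price:

# Hypothetical inputs; replace with your own values.
daily_processed_gib = 9.0          # GiB added to storage per day, after enrichment
retention_days = 35                # retention period of the bucket under analysis
days_stored = 365                  # period you are costing, e.g. 365 for a full year
retain_price_per_gib_day = 0.0002  # placeholder rate-card price per GiB-day

steady_state_gib = daily_processed_gib * retention_days  # data held in storage at any point in time
retain_cost = steady_state_gib * retain_price_per_gib_day * days_stored
print(f"{steady_state_gib:,.0f} GiB retained -> {retain_cost:,.2f} in local currency over {days_stored} days")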
Query data usage occurs whenever event data is read during the execution of a DQL query, for example, from apps, Dashboards, Notebooks, or Workflows.
Here's what's included with the Query data-usage dimension:
On-read parsing: Parse event data at read time during query execution.
Aggregation: Perform aggregation, summarization, or statistical analysis of event data across specific timeframes or time patterns (for example, data occurrences in 30-second or 10-minute intervals), using mathematical or logical functions.
Reporting: Create reports or summaries with customized fields (columns) by adding, modifying, or dropping existing event attributes.
Context: Use DQL to analyze event data in context with relevant data on the Dynatrace platform, for example, user sessions or distributed traces.
Apply the following calculation to determine your consumption for the Query data-usage dimension:
(number of GiB of uncompressed data read during query execution) × (GiB scanned price as per your rate card) = consumption in your local currency
Grail applies various optimizations to improve response time and reduce cost. In some cases, these optimizations will identify portions of data that are not relevant to the query result—the price for scanning that data is discounted by 98%.
The impact of Grail's scan optimizations varies based on data and query attributes. It may evolve as Dynatrace improves Grail's query intelligence.
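As a minimal sketch of the Query calculation, assuming a placeholder daily scan volume and rate-card price (the scan-optimization discount described above is ignored for simplicity):

# Hypothetical inputs; replace with your own values.
daily_scanned_gib = 60.0        # uncompressed GiB read during query execution per day
query_price_per_gib = 0.0005    # placeholder rate-card price per GiB scanned

yearly_scanned_gib = daily_scanned_gib * 365
query_cost = yearly_scanned_gib * query_price_per_gib
print(f"{yearly_scanned_gib:,.0f} GiB scanned per year -> {query_cost:,.2f} in local currency")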
The following example calculations show how each data-usage dimension contributes to overall usage and consumption.
Let's assume that you ingest 5 GiB of event data per day into Dynatrace. The yearly consumption for Ingest & Process is calculated as follows:
Daily ingested data: 5 GiB
Yearly ingested data: 5 GiB × 365 days = 1,825 GiB
Yearly Ingest & Process consumption: 1,825 GiB × (Ingest & Process price as per your rate card) = Cost
If you retain the ingested 5 GiB of event data, a total of 9 GiB of enriched data is added to storage (5 GiB times an enrichment factor of 1.8).
If you ingest an additional 5 GiB of data per day (i.e., 9 GiB after enrichment), for a total of 35 days, consumption for Retain (after the 35th day) is calculated as follows:
Enriched data added to storage per day: 5 GiB × 1.8 = 9 GiB
Data retained after 35 days: 9 GiB × 35 days = 315 GiB
Daily Retain consumption: 315 GiB × (Retain price as per your rate card) = Cost
Yearly Retain consumption: 315 GiB × 365 days × (Retain price as per your rate card) = Cost
Let's assume that to resolve incidents and analyze performance issues your team executes DQL queries with a total of 60 GiB of data read per day. The yearly consumption for Query is calculated as follows:
Daily data read by queries: 60 GiB
Yearly data read by queries: 60 GiB × 365 days = 21,900 GiB
Yearly Query consumption: 21,900 GiB × (Query price as per your rate card) = Cost
The total annual consumption for this example scenario is the sum of the yearly consumption for Ingest & Process, Retain, and Query:
(1,825 GiB × Ingest & Process price as per your rate card)
+ (315 GiB × 365 days × Retain price as per your rate card)
+ (21,900 GiB × Query price as per your rate card)
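Putting the three dimensions together, a minimal Python sketch of this example scenario (all prices are placeholder assumptions):

# Placeholder rate-card prices in your local currency.
ingest_price_per_gib = 0.001       # per GiB ingested
retain_price_per_gib_day = 0.0002  # per GiB-day retained
query_price_per_gib = 0.0005       # per GiB scanned

ingest_cost = 5 * 365 * ingest_price_per_gib                   # 1,825 GiB ingested per year
retain_cost = (5 * 1.8) * 35 * 365 * retain_price_per_gib_day  # 315 GiB in storage, costed for a full year
query_cost = 60 * 365 * query_price_per_gib                    # 21,900 GiB scanned per year

total = ingest_cost + retain_cost + query_cost
print(f"Total annual consumption: {total:,.2f} in local currency")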
Your organization's consumption of each Dynatrace capability accrues costs towards your annual commitment as defined in your rate card. Your Dynatrace Platform Subscription provides daily updates about accrued usage and related costs. You can access these details anytime via Account Management (Subscription > Overview > Cost and usage details > Events – Ingest & Process or Retain or Query > Actions > View details) or the Dynatrace Platform Subscription API.
On the Capability cost and usage analysis page, select a specific environment to analyze that environment's cost and usage for a specific capability. At the environment level, Dynatrace provides pre-made notebooks for each capability that you can use for detailed analysis (Actions > View details).
The following DQL query provides an overview of total Events – Ingest & Process usage in gibibytes:
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT" and event.type == "Events - Ingest & Process"
| dedup event.id
| summarize {`Total GiB` = sum(billed_bytes)}
The example below shows the total usage visualized as a single-value chart.
The following DQL query provides an overview of Ingest & Process usage by bucket.
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT" and (event.type == "Events - Ingest & Process")
| join [fetch dt.system.buckets], kind:leftOuter, on: { left[usage.event_bucket] == right[name] }
| dedup event.id
| summarize {billed_bytes = sum(billed_bytes)}, by:{timestamp, event.id, right.display_name, event.type}
| makeTimeseries `Total GiB`=sum(billed_bytes), by:{right.display_name, event.type}, time: timestamp
The example below shows the daily usage visualized as a line chart.
The following DQL query provides the hourly Ingest & Process usage by bucket:
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT" and event.type == "Events - Ingest & Process"
| dedup event.id
| summarize {`bucket billed_bytes`=sum(billed_bytes)}, by:{billing_period = bin(timestamp, 1h), usage.event_bucket}
| summarize {`Total GiB`=sum(`bucket billed_bytes`), `Total billed buckets`=collectDistinct(record(`Bucket` = usage.event_bucket, `Bucket GiB` = `bucket billed_bytes`))}, by:{`Billing period`=billing_period}
The example below shows the hourly usage by bucket visualized in a nested table view.
The following DQL query provides the hourly Retain usage by bucket:
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT" and event.type == "Events - Retain"
| summarize {usage.event_bucket = takeLast(usage.event_bucket), billed_bytes = takeLast(billed_bytes)}, by:{billing_period = bin(timestamp, 1h), event.id}
| fieldsAdd bytes_and_bucket = record(bucket = usage.event_bucket, billed_bytes = billed_bytes)
| summarize {`total billed_bytes` = sum(billed_bytes), `billed_bytes by bucket` = collectDistinct(bytes_and_bucket)}, by:{billing_period}
| fields billing_period, `total billed_bytes`, `billed_bytes by bucket`
The example below shows the hourly usage by bucket visualized in a nested table view.
The following DQL query provides an overview of total Query usage in gibibytes scanned:
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT"
| filter event.type == "Events - Query" or event.type == "Events - Query - SaaS"
| dedup event.id
| summarize {data_read_bytes = sum(billed_bytes)}, by: {startHour = bin(timestamp, 1d)}
The example below shows the daily usage visualized in a line chart.
The following DQL query provides an overview of the Query usage by application:
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT"
| filter event.type == "Events - Query" or event.type == "Events - Query - SaaS"
| fieldsAdd query_id = if(event.version == "1.0", event.id, else: query_id)
| dedup event.id
| summarize {data_read_bytes = sum(billed_bytes), Query_count = countDistinctExact(query_id)}, by: {App_context = client.application_context, application_detail = client.source, User = user.email}
| fieldsAdd split_by_user = record(data_read_bytes, App_context, application_detail, User, Query_count)
| summarize {split_by_user = arraySort(collectArray(split_by_user), direction: "descending"), data_read_bytes = sum(data_read_bytes), Query_count = sum(Query_count)}, by:{App_context, application_detail}
| fieldsAdd split_by_user = record(App_context = split_by_user[][App_context], application_detail = split_by_user[][application_detail], User = split_by_user[][User], data_read_bytes = split_by_user[][data_read_bytes], data_read_pct = (split_by_user[][data_read_bytes] / data_read_bytes * 100), Query_count = split_by_user[][Query_count])
| fieldsAdd split_by_user = if(arraySize(split_by_user) == 1, arrayFirst(split_by_user)[User], else: split_by_user)
| fieldsAdd application_details = record(data_read_bytes, App_context, application_detail, split_by_user, Query_count)
| summarize {application_details = arraySort(collectArray(application_details), direction: "descending"), data_read_bytes = sum(data_read_bytes), Query_count = toLong(sum(Query_count))}, by:{App_context}
| fieldsAdd application_details = record(App_context = application_details[][App_context], application_detail = application_details[][application_detail], split_by_user = application_details[][split_by_user], data_read_bytes = application_details[][data_read_bytes], data_read_pct = application_details[][data_read_bytes] / data_read_bytes * 100, Query_count = application_details[][Query_count])
| fieldsAdd key = 1
| fieldsAdd total = lookup([
    fetch dt.system.events
    | filter event.kind == "BILLING_USAGE_EVENT" and (event.type == "Events - Query" or event.type == "Events - Query - SaaS")
    | dedup event.id
    | summarize total = sum(billed_bytes)
    | fieldsAdd key = 1
  ], sourceField: key, lookupField: key)[total]
| fields App_context, application_details, data_read_bytes, data_read_pct = data_read_bytes / total * 100, Query_count
| sort data_read_bytes desc
The example below shows the usage by application visualized in a nested table view.