Events powered by Grail

Dynatrace SaaS only

As of May 22, 2024, the Early Adopter release phase for custom Davis AI events on Grail ends, and the licensing of all event types within the Events powered by Grail product category will be harmonized under the platform pricing model described below. This means you'll be billed for ingesting, retaining, and querying event data via DQL, as is already the case for business events. In addition, more event types, such as Davis AI problems & events, Kubernetes events, and security events, will be added to Events powered by Grail over time.

While there are no additional costs or licensing involved in the default monitoring and reporting of built-in event types via OneAgent or cloud integrations, you have the option to configure custom events and/or event-ingestion channels. Such event-related customizations generate additional consumption because they require significantly more processing and analytical power than built-in event ingestion.

The unit of measure for consumed events is the data volume of ingested events in gibibytes (GiB).

The overall consumption model for Events powered by Grail is based on three dimensions of data usage (Ingest & Process, Retain, and Query).

| | Ingest & Process | Retain | Query |
|---|---|---|---|
| Definition | Ingested data is the amount of raw data in bytes sent to Dynatrace before enrichment and transformation. | Retained data is the amount of data saved to storage after data parsing, enrichment, transformation, and filtering, but before compression. | Queried data is the data read during the execution of a DQL query. |
| Unit of measure | per gibibyte (GiB) | per gibibyte-day (GiB-day) | per gibibyte scanned (GiB scanned) |

Custom events that can be created, ingested, or subscribed to for an environment include:

  • Any custom event sent to Dynatrace using the Events API v2
  • Any custom event (such as a Kubernetes event) created from log messages by a log processing rule
  • Any custom event created in a processing step in OpenPipeline

Usage of different kinds of events affects your consumption differently, as outlined in the table below.

| Event kind | Ingest & Process | Retain | Query |
|---|---|---|---|
| Business events | Billable | Billable | Billable |
| Custom Davis AI and Kubernetes events | Billable | Billable | Billable |
| Davis AI problems and events | Included (non-billable) | Limited included (up to 15 months) 1 | Included (non-billable) |
| Kubernetes warning events | Limited included 2 | Limited included 2 | Non-billable for usage generated in the Kubernetes app |
| Security events 3 | Billable | Billable | Billable |
| Synthetic events 4 | Included | Limited included | Non-billable for usage generated in the Synthetic app |
| Custom generic events | Billable | Billable | Billable |

Usage recorded for business events, custom Davis AI and Kubernetes events, security events, and other custom generic events (in particular, event data stored in custom buckets using the Storage Management app) results in consumption for Ingest & Process, Retain, and Query. The following specifics apply:

  • Davis AI problems and events 1: Ingest & Process, Retain (limited included up to 15 months), and Query are included. If you extend the retention period beyond 15 months globally, Events powered by Grail – Retain charges apply as per your rate card.
  • Kubernetes warning events 2: A pod-hour includes 60 Kubernetes warning events per pod with a default retention period of 15 months. Kubernetes warning events are pooled across all pods, and consumption is calculated in 15-minute intervals. Query usage generated from within the Kubernetes app is included. For queries originating from Dashboards, Notebooks, and Workflows, Events powered by Grail – Query charges apply as per your rate card. For details, see Kubernetes Platform Monitoring billing.
  • Security events 3: Generally, Ingest & Process, Retain, and Query are charged according to the price on your rate card for Events powered by Grail capabilities. Where security events are stored in the dedicated default_security_events bucket, Ingest & Process and Retain are limited included for 3 years, and Query is included for usage generated in the Vulnerabilities app. For queries originating from Dashboards, Notebooks, and Workflows, Events powered by Grail – Query charges apply as per your rate card.
  • Synthetic events 4: Retain and queries originating from the Synthetic app are limited included for the default retention period of 35 days. For queries originating from Dashboards, Notebooks, and Workflows, Events powered by Grail – Query charges apply as per your rate card.
  • Custom generic events: Generic or customer-defined events that are ingested via API and not listed above, or that are ingested and retained in custom buckets, are charged according to the price on your rate card for Events powered by Grail capabilities.
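
Which of these rows apply depends on the event kinds your environment actually ingests. One quick way to take stock is a DQL query along the following lines (a minimal sketch; fetch events covers event types stored in Grail event tables, while business events are queried via fetch bizevents instead):

fetch events, from:now()-24h
| summarize ingested_events = count(), by:{event.kind}
| sort ingested_events desc

Keep in mind that running such a query itself reads event data and therefore accrues Query usage.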

Ingest & Process

Here's what's included with the Ingest & Process data-usage dimension:

| Concept | Explanation |
|---|---|
| Data delivery | Delivery of events via OneAgent, RUM JavaScript, or the Generic Event Ingestion API (via ActiveGate) |
| Topology enrichment | Enrichment of events with data source and topology metadata |
| Data transformation | Add, edit, or drop any business event attribute; perform mathematical transformations on numerical values (for example, create new attributes based on calculations of existing fields); extract business, infrastructure, application, or other data from raw business events (a single character, string, number, array of values, or other) and turn it into new fields for additional querying and filtering; mask sensitive data by replacing specific business attributes with a masked string |
| Data-retention control | Manage the data retention periods of incoming events based on bucket assignment rules |
| Conversion to timeseries | Create metrics from event attributes (note that creating custom metrics generates additional consumption beyond the consumption for ingestion and processing) |

Apply the following calculation to determine your consumption for the Ingest & Process data-usage dimension:
(number of gibibytes ingested) × (gibibyte price as per your rate card) = consumption in your local currency

Be aware that data enrichment and processing can increase your data volume, typically by 0.5-1.0 kB per event. Depending on the source of the data and the attributes and metadata added during processing, the total data volume after processing can increase by a factor of 1.5 or more. Processing can also be used to drop unwanted event attributes to reduce the retained data volume, as shown in the sketch below.
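
For example, an OpenPipeline processing step can remove an attribute before the event is written to storage. Here's a minimal sketch of such a DQL processor, assuming a hypothetical attribute named debug_payload:

// OpenPipeline DQL processor (sketch): drop a hypothetical attribute
// so that it does not add to the retained data volume
fieldsRemove debug_payload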

Retain

Here's what's included with the Retain data-usage dimension:

| Concept | Explanation |
|---|---|
| Data availability | Retained data is accessible for analysis and querying until the end of the retention period. |
| Retention periods | Choose a retention period: 10 days, 2 weeks (15 days), 1 month (35 days, default), 3 months (95 days), 1 year (372 days), 15 months (462 days), 3 years (1,102 days), 5 years (1,832 days), 7 years (2,562 days), or 10 years (3,657 days) |

Apply the following calculation to determine your consumption for the Retain data-usage dimension:
(number of GiB of processed data ingested per day) × (retention period in days) × (GiB-day price as per your rate card) × (number of days that data is stored) = consumption in your local currency

  • Retention period in days is based on the retention period of the storage bucket under analysis (for example, 35 days if you're analyzing the default_logs bucket).

  • Number of days that data is stored reflects the period during which the data is stored (for example, 30 days if you're analyzing the monthly cost, or 365 days for a full year).

Query

Query data usage occurs when:

  • Submitting custom DQL queries in the Logs & Events viewer in advanced mode
  • Using Business Analytics apps (Business Flow, Salesforce Insights, and Carbon Impact)
  • Executing DQL queries in Notebooks, Dashboards, Workflows, custom apps, and via the API
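
For example, a query like the following, executed in a Notebook, reads event data and accrues Query usage for every gibibyte scanned (a minimal sketch; the filter values are illustrative):

fetch events, from:now()-2h
| filter event.kind == "DAVIS_EVENT"
| summarize events = count(), by:{event.type}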

Here's what's included with the Query data-usage dimension:

| Concept | Explanation |
|---|---|
| On-read parsing | Use DQL to query historical events in storage and extract business, infrastructure, or other data across any timeframe, and use the extracted data for follow-up analysis. No upfront indexes or schemas are required for on-read parsing. |
| Aggregation | Perform aggregation, summarization, or statistical analysis of event data across specific timeframes or time patterns (for example, data occurrences in 30-second or 10-minute intervals) using mathematical or logical functions. |
| Reporting | Create reports or summaries with customized fields (columns) by adding, modifying, or dropping existing event attributes. |
| Context | Use DQL to analyze event data in context with relevant data on the Dynatrace platform, for example, user sessions or distributed traces. |
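
As an illustration of on-read parsing, the following sketch extracts a numeric value from an event field at query time, with no upfront index or schema required; the field name and the pattern are hypothetical:

fetch events, from:now()-7d
| parse event.description, "LD 'duration=' INT:duration_ms"
| filterOut isNull(duration_ms)
| summarize avg_duration_ms = avg(duration_ms)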

Apply the following calculation to determine your consumption for the Query data-usage dimension:
(number of GiB of uncompressed data read during query execution) × (GiB scanned price as per your rate card) = consumption in your local currency

Consumption examples

The following example calculations show how each data-usage dimension contributes to overall usage and consumption.

Step 1 – Ingest & Process

Let's assume that you ingest 5 GiB of event data per day into Dynatrace. The yearly consumption for Ingest & Process is calculated as follows:

| Quantity | Value |
|---|---|
| Ingest volume per day | 5 GiB |
| Ingest volume per year | 1,825 GiB (5 GiB per day × 365 days) |
| Consumption per year in your local currency | 1,825 GiB per year × ingest price as per your rate card |

Step 2 – Retain

After processing, 9 GiB of enriched data (5 GiB × 1.8 for enrichment) is added to storage daily and retained for 35 days. The daily and yearly consumption for Retain (after a ramp-up period of 35 days) is calculated as follows:

| Quantity | Value |
|---|---|
| Retained volume for 1 day | 9 GiB (5 GiB of data per day × 1.8 for enrichment) |
| Retained volume for 35 days | 315 GiB (9 GiB per day × 35 days) |
| Consumption per day in your local currency | 315 GiB × retain price per GiB-day as per your rate card |
| Consumption per year in your local currency | 315 GiB × retain price per GiB-day as per your rate card × 365 days |

If you add the same amount of processed data to storage daily and the retention period is set to 365 days, the yearly consumption (after a ramp-up of 365 days in this case) for Retain is calculated as follows:

| Quantity | Value |
|---|---|
| Retained volume for 1 day | 9 GiB (5 GiB of data per day × 1.8 for enrichment) |
| Retained volume for 365 days | 3,285 GiB (9 GiB per day × 365 days) |
| Consumption per year in your local currency | 3,285 GiB × retain price per GiB-day as per your rate card × 365 days |

Step 3 – Query

Let's assume that, to resolve incidents and analyze performance issues, your team executes DQL queries that read a total of 60 GiB of data per day. The yearly consumption for Query is calculated as follows:

| Quantity | Value |
|---|---|
| Data volume read per day | 60 GiB |
| Data volume read per year | 21,900 GiB (60 GiB per day × 365 days) |
| Consumption per year in your local currency | 21,900 GiB per year × query price as per your rate card |

Step 4 – Total consumption

The total annual consumption for this example scenario, with 35 days of data retention, is the sum of the yearly consumption for Ingest & Process, Retain, and Query.

Consumption details

Your organization's consumption of each Dynatrace capability accrues costs towards your annual commitment as defined in your rate card. Your Dynatrace Platform Subscription provides daily updates about accrued usage and related costs. You can access these details anytime via Account Management (Subscription > Overview > Cost and usage details > Events – Ingest & Process or Retain or Query > Actions > View details) or the Dynatrace Platform Subscription API.

On the Capability cost and usage analysis page, select a specific environment to analyze that environment's cost and usage for a specific capability. At the environment level, Dynatrace provides pre-made Notebooks for each capability that you can use for detailed analysis (Actions > View details).

Ingest & Process

The following DQL query provides an overview of total Events – Ingest & Process usage in gibibytes:

fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT" and event.type == "Events - Ingest & Process"
| dedup event.id
| summarize {`Total GiB` = sum(billed_bytes)}

The example below shows the total usage visualized as a single-value chart.


The following DQL query provides an overview of Ingest & Process usage by bucket.

fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT" and event.type == "Events - Ingest & Process"
| join [fetch dt.system.buckets], kind:leftOuter, on: { left[usage.event_bucket] == right[name] }
| dedup event.id
| summarize {billed_bytes = sum(billed_bytes)}, by:{timestamp, event.id, right.display_name, event.type}
| makeTimeseries `Total GiB`=sum(billed_bytes), by:{right.display_name, event.type}, time: timestamp

The example below shows the daily usage visualized as a line chart.


The following DQL query provides the hourly Ingest & Process usage:

fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT" and event.type == "Events - Ingest & Process"
| dedup event.id
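// first, sum billed bytes per bucket within each 1-hour billing period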
| summarize {`bucket billed_bytes`=sum(billed_bytes)}, by:{billing_period = bin(timestamp, 1h), usage.event_bucket}
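// then, roll up per-bucket sums into an hourly total plus a nested per-bucket breakdown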
| summarize {`Total GiB`=sum(`bucket billed_bytes`), `Total billed buckets`=collectDistinct(record(`Bucket` = usage.event_bucket, `Bucket GiB` = `bucket billed_bytes`))}, by:{`Billing period`=billing_period}

The example below shows the hourly usage by bucket visualized in a nested table view.


Retain

The following DQL query provides the hourly Retain usage by bucket:

fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT" and event.type == "Events - Retain"
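// keep the most recent reading per usage event within each hourly period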
| summarize {usage.event_bucket = takeLast(usage.event_bucket), billed_bytes = takeLast(billed_bytes)}, by:{billing_period = bin(timestamp, 1h), event.id}
| fieldsAdd bytes_and_bucket = record(bucket = usage.event_bucket, billed_bytes = billed_bytes)
| summarize {`total billed_bytes` = sum(billed_bytes), `billed_bytes by bucket` = collectDistinct(bytes_and_bucket)}, by:{billing_period}
| fields billing_period, `total billed_bytes`, `billed_bytes by bucket`

The example below shows the hourly usage by bucket visualized in a nested table view.


Query

The following DQL query provides an overview of total Events – Query usage in gibibytes scanned:

fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT"
| filter event.type == "Events - Query" or event.type == "Events - Query - SaaS"
| dedup event.id
| summarize {
    data_read_GiB = sum(billed_bytes / 1024 / 1024 / 1024.0)
  }, by: {
    startHour = bin(timestamp, 1d)
  }

The example below shows the daily usage visualized as a line chart.


The following DQL query provides an overview of the Events – Query usage by application:

fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT"
| filter event.type == "Events - Query" or event.type == "Events - Query - SaaS"
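// version 1.0 usage events carry no separate query_id, so fall back to the event ID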
| fieldsAdd query_id = if(event.version == "1.0", event.id, else: query_id)
| dedup event.id
| summarize {
    data_read_GiB = sum(billed_bytes / 1024 / 1024 / 1024.0),
    Query_count = countDistinctExact(query_id)
  }, by: {
    App_context = client.application_context, application_detail = client.source, User = user.email
  }
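// pack each user's usage into a record so it can be nested under its application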
| fieldsAdd split_by_user = record(data_read_GiB, App_context, application_detail, User, Query_count)
| summarize {
    split_by_user = arraySort(collectArray(split_by_user), direction: "descending"),
    data_read_GiB = sum(data_read_GiB),
    Query_count = sum(Query_count)
  }, by:{
    App_context, application_detail
  }
| fieldsAdd split_by_user = record(App_context = split_by_user[][App_context], application_detail = split_by_user[][application_detail], User = split_by_user[][User], data_read_GiB = split_by_user[][data_read_GiB], data_read_pct = (split_by_user[][data_read_GiB] / data_read_GiB * 100), Query_count = split_by_user[][Query_count])
| fieldsAdd split_by_user = if(arraySize(split_by_user) == 1, arrayFirst(split_by_user)[User], else: split_by_user)
| fieldsAdd application_details = record(data_read_GiB, App_context, application_detail, split_by_user, Query_count)
| summarize {
    application_details = arraySort(collectArray(application_details), direction: "descending"),
    data_read_GiB = sum(data_read_GiB),
    Query_count = toLong(sum(Query_count))
  }, by:{
    App_context
  }
| fieldsAdd application_details = record(App_context = application_details[][App_context], application_detail = application_details[][application_detail], split_by_user = application_details[][split_by_user], data_read_GiB = application_details[][data_read_GiB], data_read_pct = application_details[][data_read_GiB] / data_read_GiB * 100, Query_count = application_details[][Query_count])
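// join a constant key against the overall total so each context's share can be computed as a percentage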
| fieldsAdd key = 1
| fieldsAdd total = lookup([
    fetch dt.system.events
    | filter event.kind == "BILLING_USAGE_EVENT" and (event.type == "Events - Query" or event.type == "Events - Query - SaaS")
    | dedup event.id
    | summarize total = sum(billed_bytes / 1024 / 1024 / 1024.0)
    | fieldsAdd key = 1
  ], sourceField: key, lookupField:key)[total]
| fields App_context, application_details, data_read_GiB, data_read_pct = data_read_GiB / total * 100, Query_count
| sort data_read_GiB desc

The example below shows the usage by application visualized in a nested table view.
