Traces powered by Grail

The Traces powered by Grail capability gives customers access to distributed trace data that is ingested, retained, and queried in Grail.

The use of Distributed Tracing and Services is included with Dynatrace. No query consumption is generated by these apps.

Traces powered by Grail consumption is based on three dimensions of data usage:

  • Traces - Ingest & Process
  • Traces - Retain
  • Traces - Query

The unit of measure for consumption is gibibytes (GiB). The overview below describes each dimension and its unit of measure. Specific information about each dimension is provided in the following sections.

Traces - Ingest & Process
  • Definition: The amount of data sent to Dynatrace. Billed volume is calculated before transformation in service detection and OpenPipeline. The measured size is independent of the data source or protocol.
  • Unit of measure: per gibibyte ingested ("GiB")

Traces - Retain
  • Definition: The amount of data saved to storage after data parsing, enrichment, transformation, and filtering. Data is measured in the last OpenPipeline step before data is stored in buckets. Billed volumes are calculated before compression. The measured size is independent of the data source or protocol.
  • Unit of measure: per gibibyte saved in storage, per day ("GiB-day")

Traces - Query
  • Definition: The volume of data scanned during a DQL query's execution, whether executed directly or via an app. (Note: Distributed Tracing and Services don't generate query consumption.)
  • Unit of measure: per gibibyte scanned during query execution ("GiB scanned")

Traces - Ingest & Process

Traces - Ingest & Process usage occurs in the following cases:

Data ingest: Distributed trace data ingested from the following sources is charged as Traces - Ingest & Process:

- Via the OpenTelemetry OTLP Trace Ingest API from non-Full-Stack sources (see the export sketch below).
- Via serverless functions.
- Extended trace ingest for Full-Stack Monitoring (if the customer explicitly requests extended trace ingest).

Enrichment of spans with additional metadata at the source, such as Kubernetes metadata, increases the size of ingested data that is charged as Traces - Ingest & Process.

Data processing via OpenPipeline:

- Data processing via OpenPipeline is included in Traces - Ingest & Process. However, it increases the size of span data that is charged as Traces - Retain.
- Topology enrichment based on Dynatrace entities (dt.entity.* entity types) does not increase the billable span size or Traces - Ingest & Process consumption.
- Custom metrics created from span data are charged as Metrics - Ingest & Process. These are Grail metric keys and are therefore available only in the latest Dynatrace.
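
For illustration, a minimal sketch of OTLP span export that would generate Traces - Ingest & Process consumption is shown below. It uses the OpenTelemetry Python SDK; the environment URL and token values are placeholders, and the exact ingest endpoint and token scope should be taken from your environment's configuration rather than from this sketch.

```python
# Minimal sketch: exporting spans to Dynatrace via OTLP/HTTP.
# The endpoint path and token header below are placeholders/assumptions;
# substitute the values documented for your environment.
# Requires: opentelemetry-sdk, opentelemetry-exporter-otlp-proto-http.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://<environment-id>.live.dynatrace.com/api/v2/otlp/v1/traces",  # assumed URL shape
    headers={"Authorization": "Api-Token <ingest-token>"},  # token with trace-ingest permission
)

provider = TracerProvider(resource=Resource.create({"service.name": "billing-example"}))
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example")
with tracer.start_as_current_span("demo-operation"):
    pass  # the uncompressed size of every exported span counts toward Traces - Ingest & Process
```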

Apply the following calculation to determine your consumption for the Traces - Ingest & Process data-usage dimension:

[(GiB ingested via the OpenTelemetry OTLP API) + (GiB ingested via serverless functions) + (GiB of extended ingest)] * (GiB price as per your Rate Card) = consumption
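
As a worked example of this calculation, with hypothetical volumes and an illustrative rate (your actual price comes from your Rate Card):

```python
# Worked example of the Traces - Ingest & Process calculation.
# All volumes and the rate are hypothetical; use your actual Rate Card price.
gib_otlp = 40.0        # GiB ingested via the OpenTelemetry OTLP API
gib_serverless = 5.0   # GiB ingested via serverless functions
gib_extended = 15.0    # GiB of extended ingest (Full-Stack)
rate_per_gib = 0.10    # illustrative Rate Card price per GiB

consumption = (gib_otlp + gib_serverless + gib_extended) * rate_per_gib
print(f"Traces - Ingest & Process consumption: {consumption:.2f}")  # 6.00
```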

You can also view consumption in pre-made Notebooks.

Traces - Ingest & Process replaces the platform extensions Custom Traces Classic and Serverless Functions Classic; the classic extensions cannot be used simultaneously with Traces - Ingest & Process.

Dynatrace reserves the right to work with customers to adjust or disable parsing rules, processors, or pipelines that cause service degradation.

Traces - Retain

Traces - Retain usage occurs in the following cases:

Data availability: Retained data is accessible for analysis and querying until the end of the retention period (with the limitations described in the note below).

Retention period: Choose the desired retention period. For trace data, the available retention period ranges from 10 days to 10 years. Trace retention is defined at the bucket level, allowing tailored retention periods for specific traces. The Traces - Retain calculation is independent of the trace ingestion source, whether Full-Stack, Mainframe, or Traces - Ingest & Process. The first 10 days of retention are always included.

Topology enrichment: Spans are enriched and processed in OpenPipeline. Enriched data (including topology enrichment and data processing as described in Traces - Ingest & Process) is the basis for Traces - Retain usage for data that is stored longer than the included 10 days.

Data processing: Services, endpoints, and failures are detected based on span data.

Data storage control: Spans are filtered or excluded based on content, topology, or metadata. They are routed to a dedicated bucket.

Data availability in certain apps is limited:

  • Distributed Traces Classic provides access only to the first 10 days of retained data. This app is superseded by Distributed Tracing.
  • Services Classic provides access only to the first 10 days of retained data. This app will be superseded by Services.
  • Multidimensional Analysis provides access only to the first 35 days of retained data. This app will be superseded by Notebooks.

Apply the following calculation to determine your daily consumption for the Traces - Retain data-usage dimension:

(GiB of uncompressed span data stored for more than 10 days) * (GiB-day rate price as per your Rate Card) = consumption
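
A worked example with hypothetical values: 50 GiB of uncompressed span data kept in a bucket with 35-day retention accrues GiB-day consumption only for the days beyond the included first 10 days.

```python
# Worked example of the Traces - Retain calculation (hypothetical values).
gib_stored = 50.0          # uncompressed span data held in the bucket
retention_days = 35        # configured bucket retention
included_days = 10         # the first 10 days of retention are always included
rate_per_gib_day = 0.02    # illustrative Rate Card price per GiB-day

billable_days = max(retention_days - included_days, 0)
consumption_per_day = gib_stored * rate_per_gib_day           # charged for each day beyond day 10
consumption_full_cycle = consumption_per_day * billable_days  # over the remaining 25 days of retention
print(consumption_per_day, consumption_full_cycle)            # 1.0 25.0
```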

You can also view consumption in pre-made Notebooks.

Traces - Query

Traces - Query usage occurs in the following cases:

DQL query execution: A DQL query scans and fetches data that is stored in Grail. Spans can be joined and analyzed in context with other signals on the Dynatrace platform, such as logs, events, or metrics.

App usage: DQL queries can be executed by:

- Apps such as Notebooks, Dashboards, Workflows, and Davis Anomaly Detection. (Note: Distributed Tracing and Services don't consume any Traces - Query usage.)
- Dashboard tiles based on span data, which trigger DQL queries on refresh.
- Custom apps.
- The Dynatrace API (see the sketch below).

When other data types are also read in a query, this can result in consumption of the corresponding capability, such as Log - Query.
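
As an illustration of API-triggered query consumption, the sketch below posts a DQL query against span data. The endpoint path, authentication header, payload shape, and span field names are assumptions made for this example; consult the Dynatrace API documentation for the actual query-execution endpoint and required scopes.

```python
# Sketch: executing a DQL query against span data via the Dynatrace API.
# The endpoint path, auth header, and payload shape are assumptions for
# illustration only. Every such query scans span data in Grail and therefore
# generates Traces - Query consumption.
import requests

BASE_URL = "https://<environment-id>.apps.dynatrace.com"  # placeholder
TOKEN = "<bearer-token-with-query-scope>"                 # placeholder

dql = """
fetch spans, from: now() - 2h
| summarize span_count = count(), by: {dt.entity.service}
"""  # span field names here are illustrative

response = requests.post(
    f"{BASE_URL}/platform/storage/query/v1/query:execute",  # assumed path
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"query": dql},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```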

Traces - Query usage is based on the volume of uncompressed span data that is scanned during the execution of a DQL query. Apply the following calculation to determine the maximum potential consumption of a single query execution:

(GiB of uncompressed data read during query execution) * (GiB-scanned price as per your Rate Card) = consumption

You can also view consumption in pre-made Notebooks.

Grail applies various optimizations to improve response time and reduce cost. In some cases, these optimizations identify portions of data that are not relevant to the query result; the price for scanning that data is discounted by 98%.

The impact of Grail's scan optimizations varies based on data and query attributes. It may evolve as Dynatrace improves Grail's query intelligence.
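
A worked example with hypothetical values, combining the calculation above with the discount on data identified as not relevant:

```python
# Worked example of Traces - Query consumption (hypothetical values).
gib_scanned_total = 20.0      # uncompressed span data read during the query
gib_skipped = 15.0            # portion identified by Grail as not relevant to the result
rate_per_gib_scanned = 0.005  # illustrative Rate Card price per GiB scanned
discount = 0.98               # discount applied to the not-relevant portion

gib_relevant = gib_scanned_total - gib_skipped
consumption = (gib_relevant + gib_skipped * (1 - discount)) * rate_per_gib_scanned
max_consumption = gib_scanned_total * rate_per_gib_scanned  # upper bound from the formula above
print(round(consumption, 5), round(max_consumption, 5))     # 0.0265 0.1
```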

Consumption details

On the environment level, Dynatrace provides pre-made Notebooks for each DPS capability that you can use for detailed analysis. These are available in Account Management (Subscription > Overview > Cost and usage details > Select the capability > Actions > Open details with Notebooks > Select or create Notebook).

Traces - Ingest & Process consumption details

Billed usage

The following chart provides details on Traces - Ingest & Process consumption, aggregated into 15-minute increments. When you select a given increment, the info pop-up shows:

  • The source of the consumption:
    • billed_fullstack: Extended ingest from Full-Stack monitored sources.
    • billed_otlp: Via the OpenTelemetry OTLP API.
    • billed_serverless: Via a serverless function.
  • The billed data volume.

Traces - Ingest & Process notebook for billed usage
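
If you want to reproduce such a breakdown outside the pre-made notebook, a minimal aggregation sketch is shown below; the record structure (timestamp, source, billed GiB) is hypothetical and would need to be adapted to your own usage export.

```python
# Sketch: aggregating billed Traces - Ingest & Process volume into 15-minute
# increments per source (billed_fullstack, billed_otlp, billed_serverless).
# The input record structure and values are hypothetical.
from collections import defaultdict
from datetime import datetime

records = [
    # (timestamp, source, billed GiB)
    (datetime(2024, 5, 1, 10, 3), "billed_otlp", 0.42),
    (datetime(2024, 5, 1, 10, 9), "billed_serverless", 0.08),
    (datetime(2024, 5, 1, 10, 17), "billed_fullstack", 0.31),
]

buckets = defaultdict(float)
for ts, source, gib in records:
    slot = ts.replace(minute=(ts.minute // 15) * 15, second=0, microsecond=0)
    buckets[(slot, source)] += gib

for (slot, source), gib in sorted(buckets.items()):
    print(slot.isoformat(), source, round(gib, 3))
```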

Billed usage - Full-Stack extended ingest details

The following chart provides details on Traces - Ingest & Process consumption that is triggered by Full-Stack extended ingest.

The four data sources are:

  • The included Full-Stack Monitoring volume (blue line).
  • The configured Adaptive Traffic Management volume (red line).
  • The real ingest (green line).
  • The extended trace ingest volume that consumes Traces - Ingest & Process (equal to the area between the included volume and the real ingest) (orange line).

This query splits Full-Stack billing into the following licensing types:

  • Adaptive volume charged: OneAgent Trace data that exceeds the included full-stack Trace volume, when the Adaptive Traffic Management limit is specifically configured to capture additional Trace data.
  • Fix-rate charged: OpenTelemetry Trace ingest that originates from Full-Stack monitored hosts and containers, which exceeds the included Trace volume.

Traces - Ingest & Process notebook for billed usage with Full-Stack extended ingest
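
A minimal sketch of how the extended-ingest volume relates to the included volume and the real ingest, using hypothetical per-interval samples:

```python
# Sketch: deriving the extended-ingest volume (the area between the included
# Full-Stack volume and the real ingest) from per-interval samples.
# The sample values below are hypothetical.
included_gib = [2.0, 2.0, 2.0, 2.0]     # included Full-Stack Monitoring volume per interval
real_ingest_gib = [1.5, 2.4, 3.1, 2.0]  # real ingest per interval

extended = [round(max(real - included, 0.0), 3)
            for real, included in zip(real_ingest_gib, included_gib)]
print(extended, round(sum(extended), 3))  # [0.0, 0.4, 1.1] 1.5 -> charged as Traces - Ingest & Process
```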

Traces - Retain consumption details

The following chart shows an example of the amount of data stored in one bucket. The volume increases over time until the configured retention time is reached. Once the retention time is reached, the retained data volume remains approximately stable.

Traces - Retain hourly usage by bucket
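
The plateau behavior can be reasoned about with a simple model: assuming a constant daily ingest into one bucket (hypothetical values below), the stored volume equals the ingest of the most recent retention-period days.

```python
# Sketch: why retained volume grows until the retention period is reached and
# then stabilizes. Assumes a constant daily ingest into one bucket (hypothetical).
retention_days = 35
daily_ingest_gib = 3.0

stored = []
for day in range(1, 61):
    # data older than the retention period has already been deleted
    stored.append(min(day, retention_days) * daily_ingest_gib)

print(stored[9], stored[34], stored[59])  # 30.0 105.0 105.0 -> plateau after day 35
```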

Traces - Query consumption details

Daily query usage

The following chart shows query usage over time. It allows you to see peaks and outliers, which can guide further investigation to optimize queries and their usage.

Traces - Query usage over time

Query usage by app

The following chart shows query usage organized by app. You can drill down this data and see query usage split by specific dashboards and users.

Traces - Query usage per app

Query usage by user

The following chart shows query usage organized by user. You can drill down this data and see a breakdown of the apps where a given user executes queries.

Traces - Query usage per user

To proactively manage query consumption, you can create notifications that are triggered by extensive query usage at the app or user level.
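
As a starting point, a minimal sketch of such a threshold check at the app level is shown below; the app identifiers, volumes, and threshold are hypothetical, and an actual notification would typically be wired up in Workflows.

```python
# Sketch: flagging apps whose daily scanned volume exceeds a threshold, as a
# starting point for a query-consumption notification.
# App identifiers, volumes, and the threshold are hypothetical.
daily_gib_scanned_by_app = {
    "dynatrace.notebooks": 120.0,
    "dynatrace.dashboards": 480.0,
    "custom.reporting.app": 35.0,
}
threshold_gib = 250.0

for app, gib in daily_gib_scanned_by_app.items():
    if gib > threshold_gib:
        print(f"ALERT: {app} scanned {gib} GiB today (threshold {threshold_gib} GiB)")
```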