Traces powered by Grail overview (DPS)

  • Latest Dynatrace
  • Overview

The Traces powered by Grail DPS capability gives customers access to:

  • Distributed trace ingestion for OpenTelemetry via the OTLP API.
  • Distributed trace ingestion for serverless functions.
  • Extended trace ingestion for Full-Stack Monitoring beyond the included trace data volume.
  • Extended trace data retention for up to 10 years.
  • Advanced tracing analytics in Notebooks, Dashboards, Workflows, and via API.
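To illustrate the first item, a span can be posted to the OTLP trace ingest endpoint directly. The sketch below uses only the Python standard library; the environment ID and API token are placeholders, and the endpoint path follows the Dynatrace OTLP/HTTP convention, so verify both against your environment before use.

```python
# Hedged sketch: send one span to the Dynatrace OTLP trace endpoint.
# Environment ID and token are placeholders (assumptions).
import json
import time
import urllib.request

def build_otlp_payload(service_name: str, span_name: str) -> dict:
    """Build a minimal OTLP/JSON trace payload containing a single span."""
    now_ns = time.time_ns()
    return {
        "resourceSpans": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": service_name}}
            ]},
            "scopeSpans": [{
                "scope": {"name": "manual-example"},
                "spans": [{
                    "traceId": "0af7651916cd43dd8448eb211c80319c",  # example IDs
                    "spanId": "b7ad6b7169203331",
                    "name": span_name,
                    "kind": 2,  # SPAN_KIND_SERVER
                    "startTimeUnixNano": str(now_ns - 5_000_000),
                    "endTimeUnixNano": str(now_ns),
                }],
            }],
        }]
    }

def send_span(env_id: str, api_token: str) -> None:
    # Placeholder URL: substitute your own environment ID and token.
    url = f"https://{env_id}.live.dynatrace.com/api/v2/otlp/v1/traces"
    req = urllib.request.Request(
        url,
        data=json.dumps(build_otlp_payload("checkout", "charge-card")).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Api-Token {api_token}",
        },
    )
    urllib.request.urlopen(req)  # a 2xx response means the span was accepted
```

In production you would normally let an OpenTelemetry SDK exporter build and batch these payloads instead of constructing them by hand.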

This page describes the different tracing capabilities and the features that they provide with a DPS subscription.

For information about how usage of a specific capability translates to consumption of your DPS license commit, see the DPS documentation for the respective capability.

Traces - Ingest feature overview

Ingest & Process replaces the platform extensions Custom Traces Classic and Serverless Functions Classic. The new capability and the classic extensions cannot be used simultaneously.

Ingest & Process usage occurs when:

Data ingest

Distributed trace data ingested from the following sources is charged as Ingest & Process:

- Via the OpenTelemetry OTLP Trace Ingest API from non-Full-Stack sources.
- Via serverless functions.
- Extended trace ingest for Full-Stack Monitoring (if the customer explicitly requests extended trace ingest).

Enrichment of spans with additional metadata at the source, such as Kubernetes metadata, increases the size of ingested data that is charged as Ingest & Process.

Data processing via OpenPipeline

- Data processing via OpenPipeline is included in Traces - Ingest & Process. However, it increases the size of span data that is charged as Traces - Retain.
- Topology enrichment based on Dynatrace entities (dt.entity.* entity types) does not increase the billable span size or Traces - Ingest & Process consumption.
- Custom metrics created from span data are charged as Metrics - Ingest & Process. These are Grail metric keys and are therefore available only in latest Dynatrace.
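To make the sizing rules above concrete, here is an illustrative calculation. The per-GiB rate is a made-up placeholder, not a Dynatrace price; only the inclusion and exclusion logic mirrors the rules above (source-side enrichment counts toward billable size, Dynatrace topology enrichment does not).

```python
# Illustrative arithmetic only; rate_per_gib is a hypothetical placeholder.
GIB = 1024 ** 3  # bytes per gibibyte

def ingest_cost(span_bytes: int, source_enrichment_bytes: int,
                topology_enrichment_bytes: int, rate_per_gib: float) -> float:
    """Estimate Ingest & Process consumption for a batch of span data."""
    # Source-side enrichment (e.g. Kubernetes metadata) is billable;
    # Dynatrace topology enrichment (dt.entity.*) is explicitly excluded.
    billable = span_bytes + source_enrichment_bytes
    return billable / GIB * rate_per_gib

# e.g. 50 GiB of spans + 5 GiB of k8s metadata + 3 GiB of topology enrichment
cost = ingest_cost(50 * GIB, 5 * GIB, 3 * GIB, rate_per_gib=0.10)
# 55 GiB billable -> 5.5 at the placeholder rate
```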

Dynatrace reserves the right to work with customers to adjust or disable parsing rules, processors, or pipelines that cause service degradation.

Traces - Retain feature overview

Retain usage occurs when:

Data availability

Retained data is accessible for analysis and querying until the end of the retention period (with limitations described in the note below).

Retention period

Choose the desired retention period. For trace data, the available retention period ranges from 10 days to 10 years. Trace retention is defined at the bucket level, allowing tailored retention periods for specific traces. Retain calculation is independent of the trace ingestion source, whether Full-Stack, Mainframe, or Ingest & Process. The first 10 days of retention are always included.

Topology enrichment

Spans are enriched and processed in OpenPipeline. Enriched data (including topology enrichment and data processing as described in Ingest & Process) is the basis for Retain usage for data that is stored longer than the included 10 days.

Data processing

Services, endpoints, and failures are detected based on span data.

Data storage control

Spans are filtered or excluded based on content, topology, or metadata, and are routed to a dedicated bucket.
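Because trace retention is configured per bucket, extending retention amounts to creating a bucket definition with the desired retention period. The sketch below uses only the standard library; the endpoint path and request fields reflect the Grail bucket-management API as we understand it, so treat them as assumptions and verify against the current API reference.

```python
# Hedged sketch: define a custom span bucket with a 90-day retention period.
# Endpoint path, field names, and the environment URL are assumptions.
import json
import urllib.request

def make_bucket_request(env_url: str, token: str, bucket_name: str,
                        retention_days: int) -> urllib.request.Request:
    """Build (but do not send) the bucket-definition request."""
    body = {
        "bucketName": bucket_name,
        "table": "spans",                 # span data is stored in a spans bucket
        "retentionDays": retention_days,  # trace data: 10 days to 10 years
    }
    return urllib.request.Request(
        f"{env_url}/platform/storage/management/v1/bucket-definitions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )

# Placeholder environment URL and token for illustration.
req = make_bucket_request("https://example.apps.dynatrace.com", "{token}",
                          "long_term_payment_spans", 90)
# urllib.request.urlopen(req) would submit it with real credentials.
```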

For Traces - Retain, data availability in certain apps is limited:

  • Distributed Traces Classic provides access only to the first 10 days of retained data. This app is superseded by Distributed Tracing.
  • Services Classic provides access only to the first 10 days of retained data. This app will be superseded by Services.
  • Multidimensional Analysis provides access only to the first 35 days of retained data. This app will be superseded by Notebooks.

Traces - Query feature overview

Query usage occurs when:

DQL query execution

A DQL query scans and fetches data that is stored in Grail. Spans can be joined and analyzed in context with other signals on the Dynatrace platform, such as logs, events, or metrics.

App usage

DQL queries can be executed by:

- Apps such as Notebooks, Dashboards, Workflows, and Davis Anomaly Detection. (Note: Distributed Tracing and Services don't consume any Query usage.)
- Dashboard tiles that are based on span data; these trigger the execution of DQL queries on refresh.
- Custom apps.
- The Dynatrace API.

The usage of Distributed Tracing and Services is included with Dynatrace. No query consumption is generated by these apps.

When other data types are also read in a query, this can result in consumption of the corresponding capability, such as Log - Query.
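As an example, a DQL query over spans consumes Traces - Query for the data it scans. Executing such a query via the API might look like the following sketch; the query-execution endpoint path and the span field names in the DQL string are assumptions for illustration, so check them against the current DQL and API references.

```python
# Hedged sketch: execute a DQL query against Grail from the API.
# Endpoint path and DQL field names are illustrative assumptions.
import json
import urllib.request

# Scanning the spans table consumes Traces - Query for the bytes read.
DQL = """
fetch spans, from: now() - 2h
| filter span.name == "charge-credit-card"
| summarize spans = count(), by: {dt.entity.service}
"""

def make_query_request(env_url: str, token: str,
                       query: str) -> urllib.request.Request:
    """Build (but do not send) the DQL query-execution request."""
    return urllib.request.Request(
        f"{env_url}/platform/storage/query/v1/query:execute",
        data=json.dumps({"query": query}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )

# Placeholder environment URL and token for illustration.
req = make_query_request("https://example.apps.dynatrace.com", "{token}", DQL)
```

If the query were extended to also read the logs table, that portion of the scan would be charged against the corresponding log query capability, as noted above.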

Related tags: Dynatrace Platform, Application Observability