Dynatrace SaaS only
Dynatrace offers two pricing options for Log Management and Analytics:
Ingest & Process, Retain, and Query
Ingest & Process, and Retain with Included Queries
The unit of measure for consumed data volume is gibibytes (GiB), where 1 GiB = 2^30 (1,073,741,824) bytes, as described below.
What's included with the Ingest & Process data-usage dimension?
Apply the following calculation to determine your consumption for the Ingest & Process data-usage dimension:
consumption = (number of GiBs ingested) × (GiB price as per your rate card)
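For example, with an assumed rate of $0.20 per GiB (an illustrative figure, not an actual list price), ingesting 500 GiB of log data in a day would consume:
consumption = 500 GiB × $0.20/GiB = $100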
Data enrichment and processing can increase your data volume significantly. Depending on the data source, the technology, and the attributes and metadata added during processing, the total data volume after processing can increase by a factor of 2 or more.
Dynatrace reserves the right to work with customers to adjust or disable parsing rules, processors, or pipelines that are causing service degradation.
What's included with the Retain data-usage dimension?
Apply the following calculation to determine your daily consumption for the Retain data-usage dimension:
consumption per day = (volume of uncompressed logs stored in GiB) × (GiB-day price for Retain on your rate card)
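For example, if 100 GiB of ingested raw logs grows to 200 GiB after enrichment (the factor-of-2 effect described above) and the assumed, illustrative GiB-day price is $0.002, then:
consumption per day = 200 GiB × $0.002/GiB-day = $0.40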
What's included with the Query data-usage dimension?
Query data usage occurs whenever data is read during the execution of a DQL query.
Apply the following calculation to determine your consumption for the Query data-usage dimension:
consumption = (number of GiB of uncompressed data read during query execution) × (GiB scanned price as per your rate card)
Query consumption is based on the GiB of data scanned to return a result. The highest potential cost for a query is equal to the volume of logs within the query’s search range times the price on your rate card. As each scan is executed, Grail applies various proprietary optimizations to improve response time and reduce cost. In some cases, these optimizations will identify portions of data that are not relevant to the query result; the cost for scanning that data is discounted by 98%. The impact of Grail’s scan optimizations varies based on data and query attributes and may evolve over time as Dynatrace improves Grail’s query intelligence.
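To illustrate with assumed numbers: suppose a query's search range covers 1,000 GiB of logs and Grail's optimizations identify 800 GiB of that data as not relevant to the result. The 200 GiB that is scanned in full is billed at the regular rate, while the cost for the remaining 800 GiB is discounted by 98%:
billed volume = 200 GiB + (800 GiB × 0.02) = 216 GiB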
Beginning with Dynatrace SaaS version 1.303, you can choose to subscribe to Retain with Included Queries or the existing usage-based model with Ingest & Process, Retain, and Query.
Customers who choose Retain with Included Queries are not charged for the included queries that are run within the Dynatrace Platform. In any 24-hour period, customers with this pricing option are entitled to run queries with an aggregate scanned-GiB volume of up to 15 times the volume of log data that is retained at that time. In the event that usage exceeds the included volume of queries, Dynatrace reserves the right to throttle query throughput.
Is there a limit on how long log data can be retained with Retain with Included Queries?
Yes, the retention period for log buckets ranges from a minimum of 10 days to a maximum of 35 days. If you need to retain your data for more than 35 days, consider switching to our usage-based model with Ingest & Process, Retain, and Query.
The data volume stored in a bucket configured for Retain with Included Queries defines the query volume that is included in your Retain with Included Queries consumption.
Included query usage per day = (GiB of logs retained) × 15
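For example, if 100 GiB of logs is currently retained in buckets configured for Retain with Included Queries, queries that scan up to 100 GiB × 15 = 1,500 GiB in aggregate are included in any 24-hour period.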
Apply the following calculation to determine your daily consumption for the Retain with Included Queries data-usage dimension:
consumption per day = (volume of uncompressed logs stored in GiB) × (GiB-day price for Retain with Included Queries on your rate card)
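For example, retaining 1,000 GiB of uncompressed logs at an assumed, illustrative GiB-day price of $0.004 yields:
consumption per day = 1,000 GiB × $0.004/GiB-day = $4.00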
Your organization's consumption of each Dynatrace capability accrues costs towards your annual commitment as defined in your contract. Your Dynatrace Platform Subscription provides daily updates about accrued usage and related costs. You can access these details anytime via Account Management (Subscription > Overview > Cost and usage details > [select DPS capability] > Actions > View details) or the Dynatrace Platform Subscription API.
On the Capability cost and usage analysis page, select a specific environment to analyze that environment's cost and usage for a specific capability. On the environment level, Dynatrace provides built-in metrics and/or pre-made Notebooks for each capability that you can use for detailed analysis (Actions > View details).
The table below shows the list of metrics you can use to monitor the consumption details for Log Management and Analytics. To use them in Data Explorer, enter Log Management and Analytics into the Search field. These metrics are also available via the Environment API.
| Metric key | Unit | Resolution | Description |
|---|---|---|---|
| builtin:billing.log.ingest.usage | Byte | 1 hour | Number of raw bytes sent to Dynatrace before enrichment and transformation, in hourly intervals. |
| builtin:billing.log.retain.usage | Byte | 1 hour | Number of bytes saved to storage after data parsing, enrichment, transformation, and filtering, but before compression. |
| builtin:billing.log.query.usage | Byte | 1 hour | Number of bytes read during the execution of a DQL query, including sampled data. |
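If your environment exposes these billing metrics to DQL's timeseries command (Metrics on Grail), a minimal sketch for charting the hourly ingest volume is the following; treat the metric availability as an assumption and verify it in your environment:
timeseries ingest_bytes = sum(builtin:billing.log.ingest.usage), interval: 1h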
You can monitor the total number of bytes ingested for Ingest & Process in hourly intervals for any selected timeframe using the metric Log Management and Analytics usage - Ingest & Process. The example below shows usage aggregated in 1-hour intervals between 2023-09-04 and 2023-09-11 (Last 7 days).
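The billing events behind this metric can also be queried directly. The following sketch adapts the Retain and Query examples shown below; the event type "Log Management & Analytics - Ingest & Process" is an assumption made by analogy with those events:
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT"
// Assumed event type, by analogy with the Retain and Query billing events
| filter event.type == "Log Management & Analytics - Ingest & Process"
| dedup event.id
// billed_bytes is reported in bytes; 1 GiB = 1024^3 bytes
| summarize {ingested_GiB = sum(billed_bytes) / (1024 * 1024 * 1024)}, by: {hour = bin(timestamp, 1h)}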
You can monitor the total bytes stored for Retain in hourly intervals for any selected timeframe using the metric Log Management and Analytics usage - Retain. The example below shows usage aggregated in 1-hour intervals between 2023-09-04 and 2023-09-11 (Last 7 days).
The following DQL query provides the hourly Retain usage by bucket:
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT" and event.type == "Log Management & Analytics - Retain"
| summarize {usage.event_bucket = takeLast(usage.event_bucket), billed_bytes = takeLast(billed_bytes)}, by:{billing_period = bin(timestamp, 1h), event.id}
| fieldsAdd bytes_and_bucket = record(bucket = usage.event_bucket, billed_bytes = billed_bytes)
| summarize {`total billed_bytes` = sum(billed_bytes), `billed_bytes by bucket` = collectDistinct(bytes_and_bucket)}, by:{billing_period}
| fields billing_period, `total billed_bytes`, `billed_bytes by bucket`
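The first summarize groups by event.id within each hourly bin and keeps only the last reported values, which deduplicates repeated billing events before the per-bucket totals are computed.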
The example below shows the hourly usage by bucket visualized in a nested table view:
You can monitor the total scanned bytes for Query in hourly intervals for any selected timeframe using the metric Log Management and Analytics usage - Query. The example below shows usage aggregated in 1-hour intervals between 2023-09-04 and 2023-09-11 (Last 7 days).
The following DQL query provides an overview of total Log Management & Analytics – Query usage in gibibytes scanned:
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT"
| filter event.type == "Log Management & Analytics - Query"
| dedup event.id
// Convert billed_bytes to GiB (1 GiB = 1024^3 bytes) and aggregate per day
| summarize {data_read_GiB = sum(billed_bytes) / (1024 * 1024 * 1024)}, by: {startDay = bin(timestamp, 1d)}
The example below shows the daily query usage visualized in a line chart for the last 30 days:
You can monitor the total bytes stored for Retain with Included Queries in hourly intervals for any selected timeframe using the metric Log Management and Analytics usage - Retain. The example below shows usage aggregated in 1-hour intervals between 2024-09-15 and 2024-10-15 (Last 30 days).
The following DQL query provides an overview of the Log Management & Analytics – Retain with Included Queries hourly usage by bucket (billed_bytes is reported in bytes):
fetch dt.system.events
| filter event.kind == "BILLING_USAGE_EVENT" and event.type == "Log Management & Analytics - Retain"
| summarize {usage.event_bucket = takeLast(usage.event_bucket), billed_bytes = takeLast(billed_bytes)}, by:{billing_period = bin(timestamp, 1h), event.id}
| fields billing_period, billed_bytes, usage.event_bucket
| makeTimeseries max(billed_bytes), by:{usage.event_bucket}, time: billing_period, interval:1h
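Here makeTimeseries converts the deduplicated hourly billing events into one time series per bucket, taking max(billed_bytes) within each hourly bin, which suits the bar-chart visualization shown below.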
The example below shows the hourly usage visualized in a bar chart for the last 30 days: