Metrics powered by Grail

The consumption model for Metrics powered by Grail is based on three dimensions of data usage (Ingest & Process, Retain, and Query).

The unit of measure for Ingest & Process is the number of ingested metric data points. A metric data point is a single measurement of a metric. Some built-in metrics, for example Host Monitoring metrics, are included in Full-Stack Monitoring and Infrastructure Monitoring and are not billed. Host Monitoring also includes a number of metric data points (Custom Metrics) that are deducted from your environment's total data-point consumption. The number of included Custom Metric data points depends on the total monitored GiB-hours of your deployment and the OneAgent mode (Full-Stack or Infrastructure Monitoring); see built-in metrics for details.

For the Retain dimension, the unit of measure for consumed data volume is gibibytes (GiB).

15 months (462 days) of retention at 1-minute granularity is included with Metrics powered by Grail. Metric data that you choose to retain beyond that period is charged.

Query is included with Ingest & Process.

Definition
  • Ingest & Process: Ingested data is the number of metric data points sent to Dynatrace.
  • Retain: Retained data is the amount of data saved to storage after data parsing, enrichment, transformation, and filtering, but before compression.
  • Query: Queried data is the data read during the execution of a DQL query.

Unit of measure
  • Ingest & Process: Number of data points
  • Retain: Gibibyte-day (GiB-day)
  • Query: Included

Ingest & Process

Here's what's included with the Ingest & Process data-usage dimension:

  • Data delivery: Delivery of metrics via OneAgent, extensions, or the ingest API
  • Topology enrichment: Enrichment of metrics with data source and topology metadata
  • Data transformation: Rollup of data to reduced granularity to optimize queries for longer timeframes; use of efficient data structures to derive metrics from high-volume spans, such as service response time metrics
  • Data-retention control: Management of the data retention period of incoming metrics based on bucket assignment rules

Included metrics reduce data point consumption

Not all metrics generate data points that affect billable monitoring consumption. Included (non-billable) metric data points are subtracted from the total product usage for every environment before your monitoring consumption is calculated.

Figure: Usage stream for the Metrics – Ingest & Process DPS capability.

Total platform usage

The total platform usage includes every metric data point ingested by Grail.

If many data points within a given metric interval are stored as one data point in Grail, only one data point is counted toward total platform usage.

Included non-billable metric usage

Unless your environment has included (non-billable) metrics, all metric usage is billable.

Included non-billable metrics are classified according to their metric key.

  • Metric keys starting with dt.* are an exception; their usage is included and non-billable.
  • Within dt.*, metric keys starting with dt.cloud.aws.*, dt.cloud.azure.*, and dt.osservice.*, as well as the metrics dt.service.request.count, dt.service.request.cpu_time, dt.service.request.failure_count, and dt.service.request.response_time, are exceptions to that exception; they count against your monitoring consumption and are billable.
  • Within dt.cloud.aws.* and dt.cloud.azure.*, there is a further exception: the metrics dt.cloud.aws.az.running, dt.cloud.azure.region.vms.initializing, dt.cloud.azure.region.vms.running, dt.cloud.azure.region.vms.stopped, dt.cloud.azure.vm_scale_set.vms.initializing, dt.cloud.azure.vm_scale_set.vms.running, and dt.cloud.azure.vm_scale_set.vms.stopped are non-billable.

Metric key                                       Usage
*                                                Billable
dt.*                                             Non-billable
dt.cloud.aws.*                                   Billable
dt.cloud.aws.az.running                          Non-billable
dt.cloud.azure.*                                 Billable
dt.cloud.azure.region.vms.initializing           Non-billable
dt.cloud.azure.region.vms.running                Non-billable
dt.cloud.azure.region.vms.stopped                Non-billable
dt.cloud.azure.vm_scale_set.vms.initializing     Non-billable
dt.cloud.azure.vm_scale_set.vms.running          Non-billable
dt.cloud.azure.vm_scale_set.vms.stopped          Non-billable
dt.osservice.*                                   Billable
dt.service.request.count                         Billable
dt.service.request.cpu_time                      Billable
dt.service.request.failure_count                 Billable
dt.service.request.response_time                 Billable
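
The table resolves by specificity: the most specific matching metric key or prefix decides whether a data point is billable. A minimal Python sketch of that longest-match rule, for illustration only (this is not a Dynatrace API, and the sample keys in the first and third print calls are hypothetical):

    # Billability rules from the table above; a trailing "*" marks a prefix rule.
    RULES = {
        "*": "Billable",
        "dt.*": "Non-billable",
        "dt.cloud.aws.*": "Billable",
        "dt.cloud.aws.az.running": "Non-billable",
        "dt.cloud.azure.*": "Billable",
        "dt.cloud.azure.region.vms.initializing": "Non-billable",
        "dt.cloud.azure.region.vms.running": "Non-billable",
        "dt.cloud.azure.region.vms.stopped": "Non-billable",
        "dt.cloud.azure.vm_scale_set.vms.initializing": "Non-billable",
        "dt.cloud.azure.vm_scale_set.vms.running": "Non-billable",
        "dt.cloud.azure.vm_scale_set.vms.stopped": "Non-billable",
        "dt.osservice.*": "Billable",
        "dt.service.request.count": "Billable",
        "dt.service.request.cpu_time": "Billable",
        "dt.service.request.failure_count": "Billable",
        "dt.service.request.response_time": "Billable",
    }

    def usage_class(metric_key: str) -> str:
        """Return 'Billable' or 'Non-billable'; the most specific matching rule wins."""
        best_pattern, best_len = "*", -1
        for pattern, usage in RULES.items():
            if pattern.endswith("*"):
                prefix = pattern[:-1]
                matches, specificity = metric_key.startswith(prefix), len(prefix)
            else:
                matches, specificity = metric_key == pattern, len(pattern)
            if matches and specificity > best_len:
                best_pattern, best_len = pattern, specificity
        return RULES[best_pattern]

    print(usage_class("my.app.orders"))                # Billable (matches "*")
    print(usage_class("dt.host.cpu.usage"))            # Non-billable (matches "dt.*")
    print(usage_class("dt.cloud.aws.sample.metric"))   # Billable (matches "dt.cloud.aws.*")
    print(usage_class("dt.cloud.aws.az.running"))      # Non-billable (exact match)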

Mainframe monitoring

Metrics that originate from mainframe-monitored entities are licensed under Mainframe Monitoring and are non-billable.

Legacy metrics coming from Extension Framework 1.0 (EF 1.0)

Legacy metrics that originate from EF 1.0 extensions and whose metric keys start with legacy.dotnet.perform, legacy.tomcat, or legacy.containers are non-billable.

Self-monitoring metrics/system metrics

System metrics are metrics produced and controlled by Dynatrace to implement platform functionality or to enable self-observability. These metrics, which are stored in the bucket dt_system_metrics, are non-billable.

Included non-billable metric data points

Full-Stack Monitoring

Metrics that originate from Full-Stack monitored entities are subject to limited metric data point consumption: 900 metric data points are included for each charged GiB of memory of each Full-Stack monitored entity for each 15-minute monitoring interval.

  • A host with 15 GiB grants 15 × 900 = 13,500 included data points for each 15-minute interval.
  • An app-only monitored container with a maximum used memory of 1.25 GiB in a 15-minute interval grants 1.25 × 900 = 1,125 included data points.

Infrastructure Monitoring

Metrics that originate from Infrastructure-monitored entities are subject to limited metric data point consumption: 1,500 metric data points are included for each Infrastructure-monitored entity for each 15-minute monitoring interval.

For instance, an Infrastructure-monitored host (regardless of memory size) grants 1,500 included data points for each 15-minute interval.
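
As a rough aid for estimating the included volume, here is a minimal Python sketch of the two allowances described above, using the example entities (the helper names are illustrative; real per-interval memory figures come from your monitored environment):

    def fullstack_included_per_interval(charged_memory_gib: float) -> float:
        """900 included data points per charged GiB of memory per 15-minute interval."""
        return 900 * charged_memory_gib

    def infrastructure_included_per_interval() -> int:
        """1,500 included data points per Infrastructure-monitored entity per 15-minute interval."""
        return 1500

    print(fullstack_included_per_interval(15))     # 13500 -> Full-Stack host with 15 GiB
    print(fullstack_included_per_interval(1.25))   # 1125.0 -> app-only container, 1.25 GiB peak in the interval
    print(infrastructure_included_per_interval())  # 1500 -> Infrastructure-monitored host, any memory size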

Total billable usage

Every metric data point that doesn't qualify as included (non-billable) usage counts toward your total billable usage.

Metric dimensions increase data point consumption

A single metric that is written once per minute will consume 525,600 metric data points annually:

1 metric data point × 60 min × 24 h × 365 days = 525,600 metric data points/year

Note that a single metric can have multiple dimensions. For example, if you report the same metric for 2 instances of your cloud service, each write produces 2 metric data points:

Metric key                            dt.entity.dynamo_db_table            Value
cloud.aws.dynamo.requests.latency     DYNAMO_DB_TABLE-41043ED33F90F271     21.78
cloud.aws.dynamo.requests.latency     DYNAMO_DB_TABLE-707BF9DD5C975159     4.47

2 instances × 1 metric data point × 60 min × 24 h × 365 days = 1,051,200 metric data points/year

Billing is not based on the number of dimensions, but on the number of metric data points. If dimensions are added while the number of metric data points remains the same, billable metric data point usage does not change:

Metric key                            dt.entity.dynamo_db_table            Operation     Value
cloud.aws.dynamo.requests.latency     DYNAMO_DB_TABLE-41043ED33F90F271     DeleteItem    21.78
cloud.aws.dynamo.requests.latency     DYNAMO_DB_TABLE-707BF9DD5C975159     DeleteItem    4.47

Therefore, in this case, the same number of metric data points is consumed as shown in the calculation above.
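
Put differently, annual consumption scales with the number of distinct time series actually written (dimension value combinations), not with the number of dimension columns. A small Python sketch of the arithmetic above (the series counts are the example values from the tables):

    def annual_data_points(series_count: int, writes_per_minute: int = 1) -> int:
        """Metric data points per year for a metric written at a fixed rate."""
        return series_count * writes_per_minute * 60 * 24 * 365

    # One series written once per minute.
    print(annual_data_points(1))   # 525600

    # Two DynamoDB table instances reporting cloud.aws.dynamo.requests.latency:
    # two series, still one write per minute each.
    print(annual_data_points(2))   # 1051200

    # Adding an "Operation" dimension without adding new series leaves usage unchanged.
    print(annual_data_points(2))   # 1051200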

Retain

Here's what's included with the Retain data-usage dimension:

  • Data availability: Retained data is accessible for analysis and querying until the end of the retention period.
  • Retention periods: Choose a desired retention period. For the default metrics bucket, the available retention period ranges from 15 months (462 days) to 10 years (3,657 days).

15 months (462 days) of retention at 1-minute granularity is included with Metrics powered by Grail, so only data retained in excess of 462 days affects monitoring consumption.

Apply the following calculation to determine your consumption for the Retain data-usage dimension:
(GiB of metric data added per day) × (retention period in days - 462 included days) × (Metrics powered by Grail – Retain GiB-day price as per your rate card) = consumption per day in your local currency
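
A minimal sketch of this calculation in Python, using the figures from the consumption example further down this page (the GiB-day price is a hypothetical placeholder, not a real rate-card value):

    GIB_DAY_PRICE = 0.001   # hypothetical Retain price per GiB-day; use your own rate card
    INCLUDED_DAYS = 462     # 15 months of retention are included

    def retain_consumption_per_day(gib_added_per_day: float, retention_days: int) -> float:
        """Daily Retain consumption once the retention window is full."""
        billable_days = max(retention_days - INCLUDED_DAYS, 0)
        return gib_added_per_day * billable_days * GIB_DAY_PRICE

    # 9 GiB added per day, retention extended to 1,832 days (5 years):
    # 9 × (1,832 - 462) = 12,330 billable GiB-days per day.
    print(retain_consumption_per_day(9, 1832))   # about 12.33 with the placeholder price
    print(retain_consumption_per_day(9, 462))    # 0.0 -> default retention, nothing billed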

Query

Querying metrics using the timeseries command is always included, irrespective of the origin of the query (Dashboards, Notebooks, Apps or API).

timeseries avg(dt.host.cpu.usage)

Queries involving other data types generally incur Query usage each time they run, even when their output is in time series format, for example when using the maketimeseries command:

fetch logs | maketimeseries count()

If you frequently run the same maketimeseries or summarize queries, it might be more cost-effective to create a log metric. Log metrics are regular metrics that are billed for Ingest & Process (and for Retain beyond 15 months), but not for Query.

Log data ingestion is charged separately according to Log Management & Analytics – Ingest & Process, as per your rate card.

The above example could be turned into a log metric log.all_logs_count, consuming 525,600 metric data points per year (assuming at least one log record per minute), and the query would then become timeseries sum(log.all_logs_count).

Assume that the equivalent log-based query (fetch logs | maketimeseries count()):

  • scans 40 GiB over the last 2 hours, and
  • is triggered 10 times per day, every day.

The Query usage of the log-based version would be 40 GiB × 10 × 365 = 146,000 GiB per year. Multiplied by the Log Management & Analytics – Query price on your rate card, this makes the Metrics powered by Grail – Ingest & Process cost of the log metric about two orders of magnitude less than the Query cost of the log-based command, and the gap widens as the amount of scanned log data increases (due to longer timeframes, for example).
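
Under those assumptions, the comparison can be sketched in Python as follows (both prices are hypothetical placeholders chosen only to illustrate the relative magnitude; use the values from your own rate card):

    # Hypothetical rate-card prices, for illustration only; substitute your own.
    LOG_QUERY_PRICE_PER_GIB = 0.0035        # Log Management & Analytics – Query
    METRIC_INGEST_PRICE_PER_DP = 0.00001    # Metrics powered by Grail – Ingest & Process

    # Log-based query: 40 GiB scanned per run, 10 runs per day, 365 days per year.
    log_query_cost = 40 * 10 * 365 * LOG_QUERY_PRICE_PER_GIB        # 146,000 GiB per year

    # Log metric alternative: one data point per minute; timeseries queries are included.
    log_metric_cost = 60 * 24 * 365 * METRIC_INGEST_PRICE_PER_DP    # 525,600 data points per year

    print(f"Log query cost per year:  {log_query_cost:.2f}")   # 511.00 with the placeholder prices
    print(f"Log metric cost per year: {log_metric_cost:.2f}")  # 5.26, roughly two orders of magnitude less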

Consumption examples

The following example calculations show how each data-usage dimension contributes to overall usage and consumption.

Step 1 – Ingest & Process

For example, say that you produce 55 million billable metric data points per day, which you ingest into Metrics powered by Grail. The monthly consumption for Ingest & Process is calculated as follows:

  • Ingest volume per day: 55 million data points
  • Ingest volume per month: 1,650 million data points (55 million data points per day × 30 days)
  • Consumption per month: 1,650 million data points per month × Metrics powered by Grail – Ingest & Process price as per your rate card
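
The same step as a short Python calculation (the Ingest & Process price is a hypothetical placeholder):

    INGEST_PRICE_PER_DATA_POINT = 0.00001   # hypothetical; use your own rate card

    data_points_per_month = 55_000_000 * 30                 # 1,650,000,000 data points
    consumption_per_month = data_points_per_month * INGEST_PRICE_PER_DATA_POINT

    print(f"{data_points_per_month:,} data points -> {consumption_per_month:,.2f} per month")  # 16,500.00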

Step 2 – Retain

Following the Ingest & Process step, your enriched data is retained on an ongoing basis. If you ingest 1.5 billion data points per day, about 9 GiB of data might be added daily to your storage. Assuming you don't change the default retention period, metric data is retained for 15 months (462 days) and there is no billable Retain usage.

In this example, however, you increase the retention period to 5 years (1,832 days). The monthly Retain consumption (once the 5-year retention window is full) is calculated as follows:

  • Retained volume for 1 day: 9 GiB
  • Retained volume over the 1,832-day retention period in excess of the included 462 days: 12,330 GiB (9 GiB of data per day × (1,832 - 462) days)
  • Consumption per day: 12,330 GiB × Metrics powered by Grail – Retain price per GiB-day as per your rate card
  • Consumption per month: 12,330 GiB × Metrics powered by Grail – Retain price per GiB-day as per your rate card × 30 days

Step 3 – Query

Querying metrics using the timeseries command is included, so this step adds no billable Query consumption.

Step 4 – Total consumption

The total monthly consumption in this example, including 5 years of extended data retention, is the sum of the monthly consumption for Metrics powered by Grail - Ingest & Process and Metrics powered by Grail - Retain.
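
For completeness, a minimal Python sketch of the whole example, again using hypothetical placeholder prices rather than real rate-card values:

    # Hypothetical rate-card prices, for illustration only.
    INGEST_PRICE_PER_DATA_POINT = 0.00001
    RETAIN_PRICE_PER_GIB_DAY = 0.001

    ingest_per_month = 1_650_000_000 * INGEST_PRICE_PER_DATA_POINT           # Step 1
    retain_per_month = 9 * (1832 - 462) * RETAIN_PRICE_PER_GIB_DAY * 30      # Step 2
    query_per_month = 0.0                                                    # Step 3: timeseries queries are included
    total_per_month = ingest_per_month + retain_per_month + query_per_month  # Step 4

    print(f"Total monthly consumption: {total_per_month:,.2f}")  # about 16,869.90 with the placeholder prices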