Metrics API - Frequently asked questions
In Dynatrace, metric data points are stored in time slots of different resolutions. The finest granularity of a time slot is one minute. The timestamps returned by the metrics query endpoint are the end times of these time slots.
For example, if the current time is 09:24 a.m. and you query the last 6 hours at a 1-hour resolution, the timestamp of the last data point will be today at 10:00 a.m. For details, see Timeframe note.
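The slot-end behavior can be sketched in Python (the helper name and the epoch alignment are assumptions for illustration, not part of the API):

```python
from datetime import datetime, timedelta, timezone

def slot_end(ts: datetime, resolution: timedelta) -> datetime:
    """Round a timestamp up to the end of its epoch-aligned time slot."""
    seconds = resolution.total_seconds()
    elapsed = ts.timestamp() % seconds
    if elapsed == 0:
        return ts  # already on a slot boundary
    return ts + timedelta(seconds=seconds - elapsed)

# 09:24 falls into the 09:00-10:00 slot at 1-hour resolution,
# so the reported timestamp is 10:00.
now = datetime(2024, 5, 13, 9, 24, tzinfo=timezone.utc)
print(slot_end(now, timedelta(hours=1)))  # 2024-05-13 10:00:00+00:00
```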
The data points returned by the query endpoint are time-aggregated. Depending on the query timeframe, the resolution of the data points may be minutes, hours, days, or even years. If you query a larger timeframe, the resolution of your data is likely to be coarser, which yields greater values for aggregations such as sum or count.
If you need comparable results across different resolutions, use the rate transformation. For example, :rate(1m) provides the value per minute.
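The effect of normalizing to a common rate can be sketched in Python (the function name and the sample numbers are invented for illustration):

```python
def to_rate_per_minute(value: float, resolution_minutes: float) -> float:
    """Convert a time-aggregated value to a per-minute rate, like :rate(1m)."""
    return value / resolution_minutes

# The same underlying traffic, summed at two different resolutions,
# becomes comparable once expressed as a per-minute rate.
hourly_sum = 120_000           # requests in a 1-hour slot
daily_sum = 2_880_000          # requests in a 1-day slot
print(to_rate_per_minute(hourly_sum, 60))      # 2000.0
print(to_rate_per_minute(daily_sum, 24 * 60))  # 2000.0
```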
For example, a query might return values higher than 100% even though the metric's unit is Percent. The root cause is that, when you apply an aggregation transformation (:avg in the example above) before a fold, the semantics of the metric are lost and unavailable for transformations that occur later in the transformation chain. By the time the fold transformation is applied, the information that the values should be averaged is no longer available, and the sum aggregation is applied instead.
To prevent this issue, do not perform an aggregation before a fold transformation.
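The effect can be sketched numerically in Python (the sample values are invented; a percent metric folded with sum overshoots 100%):

```python
# Per-slot CPU usage (percent), already space-aggregated with :avg.
slot_averages = [40.0, 60.0, 50.0]

# A later fold no longer knows the values should be averaged, so it sums:
folded_sum = sum(slot_averages)                       # 150.0 -> "above 100%"

# What the reader usually wants instead is an average over the slots:
folded_avg = sum(slot_averages) / len(slot_averages)  # 50.0
print(folded_sum, folded_avg)  # 150.0 50.0
```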
If a top-x transformation is applied to a dimension of a metric, only x dimension values are retained. All other dimension values are booked into the remainder dimension.
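The bookkeeping can be sketched in Python (the function name and the "remainder" label are illustrative, not the API's literal output):

```python
def top_x_with_remainder(series: dict[str, float], x: int) -> dict[str, float]:
    """Keep the x largest dimension values; book the rest into a remainder bucket."""
    ranked = sorted(series.items(), key=lambda kv: kv[1], reverse=True)
    kept = dict(ranked[:x])
    if ranked[x:]:
        kept["remainder"] = sum(v for _, v in ranked[x:])
    return kept

requests = {"HOST-A": 500.0, "HOST-B": 300.0, "HOST-C": 120.0, "HOST-D": 80.0}
print(top_x_with_remainder(requests, 2))
# {'HOST-A': 500.0, 'HOST-B': 300.0, 'remainder': 200.0}
```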
By default, the query response contains only the IDs of the monitored entities (for example, HOST-0123456789ABCDEF for a host).
If you also want the entity name in the response, use the names transformation. The pretty name is then available in the dimensionMap under the dimension key dt.entity.&lt;entityType&gt;.name, for example, dt.entity.host.name.
There are multiple reasons why a metric expression could yield an empty result:
The dimension keys of the metrics used in the expression do not match.
If you have metrics with different dimension keys, you need to align the dimensions of the metrics to make a calculation possible. You can use either the splitBy or the merge transformation for this purpose. Consider this query:

```
builtin:host.cpu.iowait
/
builtin:host.disk.throughput.read
```
It produces the error Metric expression contains non-matching dimension-keys, because the builtin:host.cpu.iowait metric has only one dimension (dt.entity.host), while builtin:host.disk.throughput.read has two (dt.entity.host and dt.entity.disk). To make the query work, you need to remove the disk dimension, for example with the merge transformation:

```
builtin:host.cpu.iowait
/
builtin:host.disk.throughput.read:merge(dt.entity.disk)
```
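The idea behind merging away a dimension can be sketched in Python (function name, summing as the merge aggregation, and the sample data are assumptions for illustration):

```python
from collections import defaultdict

def merge_dimension(series: dict[tuple, float], drop_index: int) -> dict[tuple, float]:
    """Aggregate away one dimension (like :merge) by summing over it."""
    merged: dict[tuple, float] = defaultdict(float)
    for dims, value in series.items():
        kept = dims[:drop_index] + dims[drop_index + 1:]
        merged[kept] += value
    return dict(merged)

# throughput is keyed by (host, disk); iowait is keyed by (host,) only.
throughput = {("HOST-A", "DISK-1"): 10.0, ("HOST-A", "DISK-2"): 30.0}
iowait = {("HOST-A",): 4.0}

merged = merge_dimension(throughput, 1)   # {('HOST-A',): 40.0}
# Once both series share the key (host,), the division can be joined.
ratio = {dims: iowait[dims] / merged[dims] for dims in iowait}
print(ratio)  # {('HOST-A',): 0.1}
```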
The dimension values do not match.
For example, the following expression delivers an empty result because different dimension values cannot be joined:

```
builtin:host.cpu.iowait:filter(eq(dt.entity.host,HOST-001))
/
builtin:host.cpu.iowait:filter(eq(dt.entity.host,HOST-002))
```
The solution in this case is to drop the dimensions completely using the splitBy transformation:

```
builtin:host.cpu.iowait:filter(eq(dt.entity.host,HOST-001)):splitBy()
/
builtin:host.cpu.iowait:filter(eq(dt.entity.host,HOST-002)):splitBy()
```
Another reason for missing matching tuples: applying a limit transformation to an operand of the expression may filter out matching dimensions. Always apply the limit transformation to the result of an expression, not to its operands.
Consider the following query, which attempts to add the top-10 CPU usage times to the top-10 CPU idle times:

```
builtin:host.cpu.usage:sort(value(avg,descending)):limit(10)
+
builtin:host.cpu.idle:sort(value(avg,descending)):limit(10)
```
If you have a large environment with hundreds of hosts, it is unlikely that the 10 hosts with the highest CPU usage are among the 10 hosts with the highest CPU idle time. The operands won't have matching tuples, therefore the result of the expression will be empty. The solution is to apply the limit to the result of the expression instead:

```
(
  builtin:host.cpu.usage
  +
  builtin:host.cpu.idle
)
:sort(value(auto,descending))
:limit(10)
```
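A miniature version of this in Python (three invented hosts and a top-1 limit instead of hundreds of hosts and a top-10) shows why limiting the operands first joins nothing:

```python
def top(series: dict[str, float], x: int) -> dict[str, float]:
    """Keep the x largest entries, like :sort(...):limit(x)."""
    return dict(sorted(series.items(), key=lambda kv: kv[1], reverse=True)[:x])

usage = {"HOST-A": 95.0, "HOST-B": 80.0, "HOST-C": 10.0}
idle  = {"HOST-A": 5.0,  "HOST-B": 10.0, "HOST-C": 85.0}

# Limiting each operand first: the surviving keys don't overlap, so no tuple joins.
joined = {h: usage[h] + idle[h] for h in top(usage, 1) if h in top(idle, 1)}
print(joined)  # {}

# Limiting the result of the expression instead keeps the keys aligned.
total = {h: usage[h] + idle[h] for h in usage}
print(top(total, 1))  # {'HOST-A': 100.0}
```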
There is no data for a metric.
Consider this example of a ratio expression, where we calculate the error ratio for key user actions:

```
builtin:apps.other.keyUserActions.reportedErrorCount.os
/
builtin:apps.other.keyUserActions.requestCount.os
```
If there are many requests but not a single error in your timeframe, the result will be empty, though an error ratio of 0 would be more meaningful. You can achieve that with the default transformation, for example :default(0).
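The substitution can be sketched in Python (function name and sample data invented; missing error data is treated as 0 before dividing):

```python
def error_ratio(errors: dict[tuple, float], requests: dict[tuple, float],
                default: float = 0.0) -> dict[tuple, float]:
    """Per-tuple ratio, substituting a default for missing numerator data."""
    return {dims: errors.get(dims, default) / count
            for dims, count in requests.items()}

requests = {("APP-1",): 1000.0, ("APP-2",): 400.0}
errors = {("APP-2",): 4.0}      # APP-1 reported no errors at all

print(error_ratio(errors, requests))
# {('APP-1',): 0.0, ('APP-2',): 0.01}
```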
For example, you can apply the percentile aggregation (such as :percentile(50)) even to a metric that does not natively support percentiles.
However, because the metric has only one dimension (dt.entity.host), no values are in fact space-aggregated. Consequently, the percentile(50) aggregation will deliver the same result as percentile(99), because the percentile estimation is based on only one data point in this case.
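A simple nearest-rank percentile in Python (a simplified stand-in for the API's estimator) makes the single-data-point effect concrete:

```python
import math

def percentile(values: list[float], p: float) -> float:
    """Nearest-rank percentile; enough to show the single-point effect."""
    ranked = sorted(values)
    idx = max(math.ceil(p / 100 * len(ranked)) - 1, 0)
    return ranked[idx]

single_point = [42.0]                  # one value per tuple -> nothing to rank
print(percentile(single_point, 50))    # 42.0
print(percentile(single_point, 99))    # 42.0
```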
Metrics with different payload types cannot share the same key. Therefore:
- count metrics are automatically suffixed with .count unless their metric key already ends with .count or _count.
- gauge metrics are automatically suffixed with .gauge if their key ends with .count or _count.
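A sketch of this suffixing rule in Python (the function name is invented, and the .count/_count endings are taken from the rule as stated above):

```python
def ingest_key(key: str, payload_type: str) -> str:
    """Apply the key-suffixing rule for count and gauge payloads."""
    ends_like_count = key.endswith(".count") or key.endswith("_count")
    if payload_type == "count" and not ends_like_count:
        return key + ".count"
    if payload_type == "gauge" and ends_like_count:
        return key + ".gauge"
    return key

print(ingest_key("requests", "count"))        # requests.count
print(ingest_key("requests.count", "count"))  # requests.count
print(ingest_key("queue_count", "gauge"))     # queue_count.gauge
```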
If you ingest a dimension with an empty value, that dimension is dropped from the tuple at ingestion time. For instance, if you ingest myMetric,dimEmpty="" 1, the dimension dimEmpty is removed.
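The filtering behavior can be sketched in Python (function name invented; it mirrors what ingestion does to empty dimension values):

```python
def clean_dimensions(dimensions: dict[str, str]) -> dict[str, str]:
    """Drop dimensions whose value is empty, as ingestion does."""
    return {k: v for k, v in dimensions.items() if v != ""}

print(clean_dimensions({"host": "HOST-A", "dimEmpty": ""}))
# {'host': 'HOST-A'}
```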