The metric selector is a powerful instrument for specifying which metrics you want to read via the GET metric data points request or in the Advanced mode of Data Explorer.
In addition, you can transform the resulting set of data points. These transformations modify the plain metric data.
Even if you are building a selector to use in an API call, we recommend that you create your query using the Code tab of Data Explorer, which offers built-in tools (for example, auto-completion) to help you construct the query.
Many Dynatrace metrics can be referenced with finer granularity using dimensions. For example, the builtin:host.disk.avail metric has two dimensions:
Query a metric with the GET metric descriptor call to obtain information about available dimensions—you can find them in the dimensionDefinitions field of the metric descriptor.
{
  "dimensionDefinitions": [
    { "key": "dt.entity.host", "name": "Host", "displayName": "Host", "index": 0, "type": "ENTITY" },
    { "key": "dt.entity.disk", "name": "Disk", "displayName": "Disk", "index": 1, "type": "ENTITY" }
  ]
}
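For instance, a minimal Python sketch that extracts the available dimension keys from a descriptor payload like the one above (the parsing logic is illustrative, not part of any Dynatrace SDK):

```python
import json

# Descriptor excerpt as returned by the GET metric descriptor call (see above)
descriptor = json.loads("""
{"dimensionDefinitions": [
  {"key": "dt.entity.host", "name": "Host", "displayName": "Host", "index": 0, "type": "ENTITY"},
  {"key": "dt.entity.disk", "name": "Disk", "displayName": "Disk", "index": 1, "type": "ENTITY"}
]}
""")

# Collect the dimension keys in index order; these are the names usable
# wherever <dimension> appears in the selector syntax.
dimension_keys = [
    d["key"]
    for d in sorted(descriptor["dimensionDefinitions"], key=lambda d: d["index"])
]
print(dimension_keys)  # ['dt.entity.host', 'dt.entity.disk']
```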
Wherever you see the <dimension> placeholder in the example syntax, you can select a specific dimension of the metric. You can reference a dimension by its key. For example, for builtin:host.disk.avail these are dt.entity.host and dt.entity.disk.
Transform operations modify the list of dimensions by adding or removing them. Subsequent transformations operate on the modified list of dimensions. Query the metric descriptor with preceding transformations (for example, builtin:host.disk.avail:names) to view the new list of available dimensions.
Dynatrace keeps only the top X dimension tuples (the exact number depends on the metric, aggregation, timeframe, and other factors). All other dimension tuples are aggregated into one, called the remainder dimension.
If the query result includes this dimension, the dimensions and dimensionMap values will be null. However, if the dimensionMap does not contain an entry at all, then this is not the remainder dimension, but rather a literal null value.
You need to specify a metric key to get the time series for it. You can also specify multiple metric keys separated by commas (for example, metrickey1,metrickey2).
When using the Data Explorer, metric key sections beginning with special characters need to be escaped with quotes ("). For example:
custom.http5xx:splitBy():auto
custom."5xx_errors":splitBy():auto
After selecting a metric, you can apply transformations to its data. You can combine any number of transformations. The metric selector string is evaluated from left to right. Each successive transformation is applied to the result of the previous transformation. Let's consider an example:
builtin:host.cpu.usage:sort(value(max,descending)):limit(10)
This selector queries the data for the builtin:host.cpu.usage metric, sorts the results by the maximum CPU usage, and returns the series for the top 10 hosts.
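Because each transformation is simply appended to the selector string, such a query can also be assembled programmatically. A minimal Python sketch (the helper below is hypothetical, not part of any Dynatrace SDK):

```python
def build_selector(metric_key, *transformations):
    """Chain transformations onto a metric key; evaluation is left to right."""
    return ":".join([metric_key, *transformations])

selector = build_selector(
    "builtin:host.cpu.usage",
    "sort(value(max,descending))",  # order series by their maximum value
    "limit(10)",                    # keep only the top 10 series
)
print(selector)
# builtin:host.cpu.usage:sort(value(max,descending)):limit(10)
```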
Dynatrace provides you with a rich set of transformations to manipulate the series data points according to your needs. Below you can find a listing of all available transformations the metric selector offers.
The amount of raw data available in Dynatrace makes it challenging to present the data in a meaningful way. To improve the readability, Dynatrace applies a time aggregation, aligning the data to time slots. You can define the aggregation method via the aggregation transformation.
Even if you don't specify any aggregation transformation, some aggregation applies nevertheless, using the default aggregation of the metric. Applying the auto transformation has the same effect.
Available aggregations vary for each metric. You can check the available aggregations (and the default aggregation) via the GET metric descriptor call—look for the aggregationTypes and defaultAggregation fields.
The resolution of the resulting time series depends on factors such as the query timeframe and the age of the data. You can, to an extent, control the resolution via the resolution query parameter of the GET metric data points request. The finest available resolution is one minute. Additionally, you can aggregate all data points of a time series into a single data point—use the fold transformation for that.
To illustrate the time aggregations, let's consider an example of the CPU usage (builtin:host.cpu.usage) metric.
{
  "metricId": "builtin:host.cpu.usage",
  "displayName": "CPU usage %",
  "description": "Percentage of CPU time currently utilized.",
  "unit": "Percent",
  "dduBillable": false,
  "created": 0,
  "lastWritten": 1668607995463,
  "entityType": ["HOST"],
  "aggregationTypes": ["auto", "avg", "max", "min"],
  "transformations": ["filter", "fold", "limit", "merge", "names", "parents", "timeshift", "sort", "last", "splitBy", "lastReal", "setUnit"],
  "defaultAggregation": { "type": "avg" },
  "dimensionDefinitions": [
    { "key": "dt.entity.host", "name": "Host", "displayName": "Host", "index": 0, "type": "ENTITY" }
  ],
  "tags": [],
  "metricValueType": { "type": "unknown" },
  "scalar": false,
  "resolutionInfSupported": true
}
Because its default aggregation is avg, if you query data points without applying any aggregation, you will obtain the average CPU usage for each time slot of the resulting time series.
To obtain the maximum CPU usage per time slot, use the selector below.
builtin:host.cpu.usage:max
If you want the single highest usage of a timeframe, you can apply the fold transformation.
builtin:host.cpu.usage:fold(max)
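Conceptually, fold collapses the time dimension into a single value. A sketch of that behavior on plain numbers (illustrative only; null data points are represented as None and skipped):

```python
def fold(points, aggregation):
    """Collapse a series of data points into a single value, ignoring nulls."""
    values = [p for p in points if p is not None]
    if not values:
        return None  # an empty series stays empty
    if aggregation == "max":
        return max(values)
    if aggregation == "min":
        return min(values)
    if aggregation == "avg":
        return sum(values) / len(values)
    raise ValueError(f"unsupported aggregation: {aggregation}")

cpu_usage = [41.0, None, 87.5, 63.2]  # one slot per minute, None = no data
print(fold(cpu_usage, "max"))         # 87.5, the single highest usage
```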
Each metric might carry numerous time series for various dimensions. Space aggregation eases the access to dimensions you're interested in by merging everything else together.
Let's consider an example of the Session count - estimated (builtin:apps.other.sessionCount.osAndGeo) metric.
{
  "metricId": "builtin:apps.other.sessionCount.osAndGeo:names",
  "displayName": "Session count - estimated (by OS, geolocation) [mobile, custom]",
  "description": "",
  "unit": "Count",
  "dduBillable": false,
  "created": 0,
  "lastWritten": 1668609851154,
  "entityType": ["CUSTOM_APPLICATION", "MOBILE_APPLICATION"],
  "aggregationTypes": ["auto", "value"],
  "transformations": ["filter", "fold", "limit", "merge", "names", "parents", "timeshift", "sort", "last", "splitBy", "lastReal", "setUnit"],
  "defaultAggregation": { "type": "value" },
  "dimensionDefinitions": [
    { "key": "dt.entity.device_application.name", "name": "dt.entity.device_application.name", "displayName": "dt.entity.device_application.name", "index": 0, "type": "STRING" },
    { "key": "dt.entity.device_application", "name": "Application", "displayName": "Mobile or custom application", "index": 1, "type": "ENTITY" },
    { "key": "dt.entity.os.name", "name": "dt.entity.os.name", "displayName": "dt.entity.os.name", "index": 2, "type": "STRING" },
    { "key": "dt.entity.os", "name": "Operating system", "displayName": "OS", "index": 3, "type": "ENTITY" },
    { "key": "dt.entity.geolocation.name", "name": "dt.entity.geolocation.name", "displayName": "dt.entity.geolocation.name", "index": 4, "type": "STRING" },
    { "key": "dt.entity.geolocation", "name": "Geolocation", "displayName": "Geolocation", "index": 5, "type": "ENTITY" }
  ],
  "tags": [],
  "metricValueType": { "type": "unknown" },
  "scalar": false,
  "resolutionInfSupported": true,
  "warnings": ["The field dimensionCardinalities is only supported for untransformed single metric keys and was ignored."]
}
The metric splits the time series based on application, operating system, and geographic location. If you want to investigate data for a particular application regardless of OS and location, you can apply the splitBy transformation as shown below.
builtin:apps.other.sessionCount.osAndGeo:splitBy("dt.entity.device_application")
You can even merge all dimensions into one by omitting the argument of the transformation. Let's look at the CPU usage (builtin:host.cpu.usage) metric again. In the example below, the transformation merges measurements of all your hosts into a single time series.
builtin:host.cpu.usage:splitBy()
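The effect of merging dimensions can be pictured as a group-by over the dimension tuples, with colliding series recombined by an aggregation. A sketch, using avg as a simplified stand-in for the metric's real space aggregation:

```python
from collections import defaultdict

def split_by(series, keep):
    """Keep only the dimensions in `keep`; average series that collide."""
    groups = defaultdict(list)
    for dims, value in series:
        groups[tuple(dims[k] for k in keep)].append(value)
    return {key: sum(v) / len(v) for key, v in groups.items()}

series = [
    ({"host": "host-1", "disk": "/"},     40.0),
    ({"host": "host-1", "disk": "/data"}, 60.0),
    ({"host": "host-2", "disk": "/"},     10.0),
]
print(split_by(series, ["host"]))  # {('host-1',): 50.0, ('host-2',): 10.0}
print(split_by(series, []))        # everything merged into one series
```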
Another way to narrow down the data output is by applying the filter transformation. For example, you can filter time series based on a certain threshold—for details, see the description of the series condition.
In combination with space aggregation, you can build powerful selectors like the one below, which reads the maximum pod count for the preproduction Kubernetes cluster, split by cloud application.
builtin:kubernetes.pods:filter(eq("k8s.cluster.name","preproduction")):splitBy("dt.entity.cloud_application"):max
You can also filter data based on monitored entities by using the power of the entity selector. The selector below reads the CPU usage for all hosts that have the easyTravel tag.
builtin:host.cpu.usage:filter(in("dt.entity.host",entitySelector("type(~"HOST~"),tag(~"easyTravel~")")))
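The tilde-escaping rule used in such selectors is mechanical and easy to get wrong by hand. A small helper (hypothetical, not an official Dynatrace utility) that escapes a dimension value for use inside a selector:

```python
def escape(value: str) -> str:
    """Escape tildes first, then quotes, each with a leading tilde,
    per the metric selector escaping rules described in this document."""
    return value.replace("~", "~~").replace('"', '~"')

print(escape('Server "North"'))  # Server ~"North~"
print(escape('a~b'))             # a~~b
```

Escaping tildes before quotes matters: doing it in the other order would double-escape the tildes that the quote replacement introduces.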
:<aggregation>
Specifies the aggregation of the returned data points. The following aggregation types are available:
| Syntax | Description |
|--------|-------------|
| :auto | Applies the default aggregation. To check the default aggregation, query a metric with the GET metric descriptors call and check the defaultAggregation field. |
| :avg | Calculates the arithmetic mean of all values from the time slot. All null values are ignored. |
| :count | Takes the count of the values in the time slot. All null values are ignored. |
| :max | Selects the highest value from the time slot. All null values are ignored. |
| :min | Selects the lowest value from the time slot. All null values are ignored. |
| :percentile(99.9) | Calculates the Nth percentile, where N is between 0 and 100 (inclusive). |
| :sum | Sums all values from the time slot. All null values are ignored. |
| :value | Takes a single value as is. Only applicable to previously aggregated values and metrics that support the value aggregation. |
:default(<value>,always)
The default transformation replaces null values in the payload with the specified value.
When always is not specified, a pre-transformed time series must have at least one data point for the transformation to work; if the time series doesn't have any data points, it remains empty after the transformation.
:delta
The delta transformation replaces each data point with the difference from the previous data point (0 if the difference is negative). The first data point of the original set is omitted from the result.
You must apply an aggregation transformation before using the delta transformation.
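The delta semantics can be sketched as follows (illustrative only):

```python
def delta(points):
    """Difference from the previous point, clamped at 0; first point dropped."""
    return [max(curr - prev, 0) for prev, curr in zip(points, points[1:])]

requests_total = [100, 130, 125, 180]  # a counter with one dip
print(delta(requests_total))           # [30, 0, 55]
```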
:filter(<condition1>,<condition2>,<conditionN>)
The filter transformation filters the response by the specified criteria. It enables you to filter the data points by a secondary dimension, as entitySelector supports only the first dimension, which is an entity. The combination of scope and filter transformation helps you maximize data filtering efficiency.
The :filter
transformation supports the following conditions.
prefix("<dimension>","<expected prefix>")
suffix("<dimension>","<expected suffix>")
contains("<dimension>","<expected contained>")
eq("<dimension>","<expected value>")
ne("<dimension>","<value to be excluded>")
The negation of the eq condition. The dimension with the specified value is excluded from the response.
in("<dimension>",entitySelector("<selector>"))
existsKey("<dimension>")
remainder("<dimension>")
series(<aggregation>,<operator>(<reference value>))
Quotes (") and tildes (~) that are part of the dimension key or dimension value (including entity selector syntax) must be escaped with a tilde (~).
The series condition filters the time-aggregated value of the data points of a series by the provided criterion. That is, the specified aggregation is applied and then this single-value result is compared to the reference value using the specified operator.
For example, for series(avg, gt(10)), the average over all data points of the series is calculated first, and then this value is checked to see whether it is greater than 10. If a series does not match this criterion, it is removed from the result. That is, the series condition cannot be used to filter individual data points of a series. To filter individual data points, you need to use the partition transformation.
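This aggregate-then-compare behavior can be sketched in Python (illustrative only; avg and the lambda stand in for the selector's aggregation and operator):

```python
def series_filter(all_series, aggregate, predicate):
    """Keep a series only if the aggregate of its points passes the predicate."""
    return [s for s in all_series if predicate(aggregate(s))]

def avg(points):
    return sum(points) / len(points)

data = [[5, 8, 12], [20, 30, 40], [9, 11, 13]]
# series(avg, gt(10)): keep series whose average is greater than 10
print(series_filter(data, avg, lambda v: v > 10))  # [[20, 30, 40], [9, 11, 13]]
```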
The condition supports the following aggregations and operators.
Aggregations:
count
min
max
avg
sum
median
percentile(N), with N in the 0 to 100 range
value
Operators:
lt: lower than
le: lower than or equal to
eq: equal
ne: not equal
gt: greater than
ge: greater than or equal to
Each condition can be a combination of subconditions.
| Syntax | Description |
|--------|-------------|
| and(<subcondition1>,<subcondition2>,<subconditionN>) | All subconditions must be fulfilled. |
| or(<subcondition1>,<subcondition2>,<subconditionN>) | At least one subcondition must be fulfilled. |
| not(<subcondition>) | Reverses the subcondition. For example, it turns contains into does not contain. |
:filter(or(eq("k8s.cluster.name","Server ~"North~""),eq("k8s.cluster.name","Server ~"West~"")))
Filters data points to those delivered by either Server "North" or Server "West".
:filter(and(prefix("App Version","2."),ne("dt.entity.os","OS-472A4A3B41095B09")))
Filters data points to those delivered by an application of major version 2 that is not run on the OS-472A4A3B41095B09 operating system.
:fold(<aggregation>)
The fold transformation combines a list of data points into a single data point. To get the result in a specific aggregation, specify the aggregation as an argument. If the specified aggregation is not supported, the default aggregation is used. For example, :fold(median) on a gauge metric is equivalent to :fold(avg), because median is not supported and avg is the default. If an aggregation has been applied earlier in the transformation chain, the argument is ignored.
:last(<aggregation>)
:lastReal(<aggregation>)
The last transformation returns the most recent data point from the query timeframe. To get the result in a specific aggregation, specify the aggregation as an argument. If the specified aggregation is not supported, the default aggregation is used. For example, :last(median) on a gauge metric is equivalent to :last(avg), because median is not supported and avg is the default. If an aggregation has been applied earlier in the transformation chain, the argument is ignored.
If the metric before transformation contains multiple tuples (unique combinations of metric, dimension, and dimension value), the most recent timestamp is applied for all tuples. To obtain the actual last timestamp, use the lastReal operator.
:limit(2)
The limit transformation limits the number of tuples (unique combinations of metric—dimension—dimension value) in the response. Only the first X tuples are included in the response; the rest are discarded.
To ensure that the required tuples are at the top of the result, apply the sort transformation before using the limit.
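Sort-then-limit can be pictured as ordering tuples by an aggregate of their data points and truncating the list. A simplified sketch:

```python
def top_n(series_by_host, n):
    """Equivalent in spirit to :sort(value(max,descending)):limit(n)."""
    ranked = sorted(series_by_host.items(), key=lambda kv: max(kv[1]), reverse=True)
    return dict(ranked[:n])

cpu = {"host-1": [10, 40], "host-2": [90, 20], "host-3": [50, 55]}
print(top_n(cpu, 2))  # {'host-2': [90, 20], 'host-3': [50, 55]}
```

Without the sort step, the first n tuples would be taken in whatever order the backend returns them, which is why sorting before limiting matters.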
:merge("<dimension0>","<dimension1>","<dimensionN>")
Quotes (") and tildes (~) that are part of the dimension key must be escaped with a tilde (~).
The merge transformation removes the specified dimensions from the result. All series/values that have the same dimensions after the removal are merged into one. The values are recalculated according to the selected aggregation.
You can apply any aggregation to the result of the merge transformation, including those that the original metric doesn't support.
:names
The names transformation adds the name of the dimension value to the dimensions array and dimensionMap object of the response. The name of each dimension is placed before the ID of the dimension.
:parents
The parents transformation adds the parent of the dimension to the dimensions array and dimensionMap object of the response. The parent of each dimension is placed before the dimension itself.
This transformation works only if the dimension entity is part of another, bigger entity. For example, a PROCESS_GROUP_INSTANCE is always the child of the HOST it runs on. The following relationships are supported.
:partition("<partition dimension>",<partition1>,<partition2>,<partitionN>)
The partition transformation splits the data points of a series based on the specified criteria. It introduces a new dimension (the partition dimension), with the value determined by a partition criterion. Data points from the original series are distributed between one or several new series according to the partition criteria. In each new series, data points that don't pass the criterion or are already taken by another criterion are replaced with null.
A single transformation can contain several partitions. These are evaluated from top to bottom; the first matching partition applies.
Each partition must contain a value for the partition dimension that will mark the passed data points and a criterion by which to filter data points.
Note that you can use either the value or the dimension condition, but not both, in a single partition operator. You can always use the otherwise condition.
You need to apply an aggregation transformation before using value conditions within the partition transformation.
value("<partition dimension value>",<criterion>)
The following criteria are available:
lt(X)
le(X)
eq(X)
ne(X)
ge(X)
gt(X)
range(X,Y)
or(<criterion1>,<criterionN>)
and(<criterion1>,<criterionN>)
not(<criterion>)
dimension("<partition dimension value>",<criterion>)
The following criteria are available.
prefix("<dimension>","<expected prefix>")
suffix("<dimension>","<expected suffix>")
contains("<dimension>","<expected contained>")
eq("<dimension>","<expected value>")
ne("<dimension>","<value to be excluded>")
The negation of the eq condition. The dimension with the specified value is excluded from the response.
or(<criterion1>,<criterionN>)
and(<criterion1>,<criterionN>)
not(<criterion>)
otherwise("<partition dimension value>")
A universal operator matching all values—use it at the end of a partition chain as the default case.
The following partition transformation is used in this example.
:partition("Action duration",value("slow",gt(200)),value("fast",lt(100)),value("normal",otherwise))
It adds the Action duration dimension to the metric and splits data points into three categories based on it.
fast for actions faster than 100 milliseconds
slow for actions slower than 200 milliseconds
normal for all other actions
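The first-match-wins evaluation of partitions can be sketched as follows (illustrative only; None marks data points taken by another partition or matching no criterion):

```python
def partition(points, partitions):
    """Split a series into one new series per partition; first match wins."""
    result = {name: [None] * len(points) for name, _ in partitions}
    for i, value in enumerate(points):
        for name, criterion in partitions:
            if criterion(value):
                result[name][i] = value
                break  # a data point lands in at most one partition
    return result

durations = [250, 80, 150, 300]
parts = [
    ("slow",   lambda v: v > 200),
    ("fast",   lambda v: v < 100),
    ("normal", lambda v: True),  # otherwise: matches everything left
]
print(partition(durations, parts))
# {'slow': [250, None, None, 300], 'fast': [None, 80, None, None],
#  'normal': [None, None, 150, None]}
```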
:rate(<base>)
The argument sets the base of the rate as a time unit (for example, m for a per-minute rate).
The rate transformation converts a count-based metric (for example, bytes) into a rate-based metric (for example, bytes per minute).
Any argument can be modified by an integer factor. For example, 5m means a per-5-minutes rate. If no argument is specified, the per-minute rate is used.
You can use the rate transformation with any metric that supports the VALUE aggregation. Query a metric with the GET metric descriptors call to obtain information about available aggregations. If the metric doesn't support the VALUE aggregation, apply an aggregation transformation first and then the rate transformation.
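The conversion itself amounts to scaling each value by the ratio of the rate base to the query resolution. A sketch under that assumption:

```python
def to_rate(points, resolution_minutes, base_minutes):
    """Rescale count values from the query resolution to the rate base."""
    factor = base_minutes / resolution_minutes
    return [v * factor for v in points]

bytes_per_minute = [120, 300, 60]       # values at 1-minute resolution
print(to_rate(bytes_per_minute, 1, 5))  # per-5-minutes rate: [600, 1500, 300]
```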
:rollup(<aggregation>,<windowDuration>)
The rollup transformation smooths data points, removing spikes from the requested timeframe.
The transformation takes each data point from the query timeframe, forms a rollup window by looking into past data points (so the inspected data point becomes the last point of the window), calculates the requested aggregation of all values in the window, and replaces the value of the inspected data point with the result of the calculation.
For example, if you specify :rollup(avg,5m) and the resolution of the query is one minute, the transformation takes a data point, adds the four previous data points to form a rollup window, and then uses the average of these five data points to calculate the final value of that data point.
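A sketch of that windowing (illustrative only; at the start of the series the window is simply shorter, whereas the real transformation can also reach into data points before the query timeframe):

```python
def rollup(points, window):
    """Trailing moving average: each point becomes the average of itself
    and up to window-1 points before it."""
    out = []
    for i in range(len(points)):
        chunk = points[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

series = [10, 10, 100, 10, 10]  # one spike at the third minute
print(rollup(series, 5))        # [10.0, 10.0, 40.0, 32.5, 28.0]
```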
Data points that precede the query timeframe can be used for the calculation if they lie no more than 2w-windowDuration in the past.
:smooth(skipfirst)
Only the skipfirst strategy is supported. The smooth transformation smooths a series of data points after a data gap (one or several data points with the value of null).
The skipfirst strategy replaces the first data point after the data gap with null.
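The skipfirst strategy can be sketched as follows (illustrative only; None represents a null data point):

```python
def smooth_skipfirst(points):
    """Replace the first non-null data point after each gap with null (None)."""
    out = list(points)
    for i in range(1, len(points)):
        # A point immediately following a null is the first one after a gap.
        if points[i - 1] is None and points[i] is not None:
            out[i] = None
    return out

print(smooth_skipfirst([5, None, None, 9, 7]))  # [5, None, None, None, 7]
```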
:sort(<sorting key 1>,<sorting key 2>)
The sort transformation specifies the order of tuples (unique combinations of metric—dimension—dimension value) in the response. You can specify one or several sorting criteria. The first criterion is used for sorting. Further criteria are used for tie-breaking. You can choose the direction of the sort:
ascending
descending
To sort results by the value of a dimension, use the dimension("<dimension>",<direction>) key. Quotes (") and tildes (~) that are part of the dimension key must be escaped with a tilde (~).
Entity dimensions are sorted lexicographically (0..9a..z) by Dynatrace entity ID values.
String dimensions are sorted lexicographically.
To sort results by metric data points in a dimension, use the value(<aggregation>,<direction>) key.
The following aggregations are available:
avg
count
max
median
min
sum
percentile(N), with N in the 0 to 100 range
value
The aggregation is used only for sorting and doesn't affect the returned data points.
The sorting is applied to the resulting data points of the whole transformation chain before the sort transformation. If the transformation chain doesn't have an aggregation transformation, the sorting is applied to the default aggregation of the metric.
:splitBy("<dimension0>","<dimension1>","<dimensionN>")
Quotes (") and tildes (~) that are part of the dimension key must be escaped with a tilde (~).
The split by transformation keeps the specified dimensions in the result and merges all remaining dimensions. The values are recalculated according to the selected aggregation. Only metric series that have each of the specified dimensions are considered.
You can apply any aggregation to the result of the split by transformation, including those that the original metric doesn't support.
:timeshift(<period>)
The argument sets the period of the shift as a signed duration (for example, -1d).
The time shift transformation shifts the timeframe specified by the from and to query parameters and maps the resulting data points to timestamps from the original timeframe. It can help you handle data from different time zones or put yesterday's and today's data on the same chart for visual comparison.
A positive argument shifts the timeframe into the future; a negative argument shifts the timeframe into the past. In either case, there's a limit of 5 years.
Let's consider an example with a timeframe from 1615550400000 (March 12, 2021 13:00 CET) to 1615557600000 (March 12, 2021 15:00 CET) and a time shift of -1d (one day into the past).
The shifted timeframe runs from 1615464000000 (March 11, 2021 13:00 CET) to 1615471200000 (March 11, 2021 15:00 CET). A data point with the timestamp 1615465800000 (March 11, 2021 13:30 CET) will be returned as 1615552200000
(March 12, 2021 13:30 CET).
:setUnit(<unit>)
The setUnit transformation sets the unit in the metric metadata.
This transformation does not affect data points.
:toUnit(<sourceUnit>,<targetUnit>)
The toUnit transformation converts data points from the source unit to the target unit. If the specified units are incompatible, the original unit is kept and a warning is included in the response.
You must apply an aggregation transformation before using the unit transformations.