Metric key | Metric name | Metric description | Unit | Dimensions |
---|---|---|---|---|
dt.synthetic.browser.availability | Availability rate (by location) [Browser monitor] | The availability rate of browser monitors. | % | dt.entity.synthetic_test, dt.entity.synthetic_location, dt.synthetic.monitored_entity_ids, dt.security_context |
dt.synthetic.browser.duration | Duration [Browser monitor] | The sum of step durations, with each step duration calculated as: {end timestamp of the last action} - {step start timestamp}. This metric is not based on the metric in Synthetic Classic. | ms | dt.entity.synthetic_test, dt.entity.synthetic_location, dt.security_context, dt.synthetic.monitored_entity_ids |
dt.synthetic.browser.executions | Execution count (by status) [Browser monitor] | The number of monitor executions. | count | dt.entity.synthetic_test, dt.entity.synthetic_location, dt.maintenance_window_ids, dt.synthetic.monitored_entity_ids, result.state, result.status.code, result.status.message, dt.security_context |
dt.synthetic.browser.classic.total_duration | Total duration (classic) [Browser monitor] | The sum of all step total durations. | ms | dt.entity.synthetic_test, dt.entity.synthetic_location, dt.security_context, dt.synthetic.monitored_entity_ids |
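For example, here is a minimal sketch of how these monitor-level metrics and their dimensions can be combined in a DQL query; the alias names and the 95% threshold are illustrative assumptions, not part of the metric definitions.

```
// Sketch: average availability per monitor and location,
// keeping only results below an assumed 95% threshold.
timeseries { availability_series = avg(dt.synthetic.browser.availability) },
  by: { dt.entity.synthetic_test, dt.entity.synthetic_location }
| fieldsAdd monitor_name = entityName(dt.entity.synthetic_test),
    location_name = entityName(dt.entity.synthetic_location)
| fieldsAdd availability = arrayAvg(availability_series)
| filter availability < 95
| sort availability asc
```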
Metric key | Metric name | Metric description | Unit | Dimensions |
---|---|---|---|---|
dt.synthetic.browser.step.duration | Duration (step) [Browser monitor] | The duration of individual browser monitor steps, calculated as: {end timestamp of the last action} - {step start timestamp}. The former key: dt.synthetic.browser.event.duration. This metric is not based on the metric in Synthetic Classic. | ms | dt.entity.synthetic_test, dt.entity.synthetic_test_step, dt.entity.synthetic_location, dt.security_context, dt.synthetic.monitored_entity_ids |
dt.synthetic.browser.step.executions | Execution count (event) (by status) [Browser monitor] | The number of step executions. The former key: dt.synthetic.browser.event.executions. | count | dt.entity.synthetic_test, dt.entity.synthetic_test_step, dt.entity.synthetic_location, dt.maintenance_window_ids, dt.synthetic.monitored_entity_ids, result.state, result.status.code, result.status.message, dt.security_context |
dt.synthetic.browser.step.classic.total_duration | Total duration (classic) (step) [Browser monitor] | The total duration of individual browser monitor steps. | ms | dt.synthetic.monitored_entity_ids, dt.security_context, dt.entity.synthetic_location, dt.entity.synthetic_test, dt.entity.synthetic_test_step |
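As a hedged example of how the step-level dimensions can be used, the sketch below breaks average step duration down by monitor and step; the alias names are assumptions chosen for illustration.

```
// Sketch: average step duration per monitor and step, slowest steps first.
timeseries { step_duration_series = avg(dt.synthetic.browser.step.duration) },
  by: { dt.entity.synthetic_test, dt.entity.synthetic_test_step }
| fieldsAdd monitor_name = entityName(dt.entity.synthetic_test),
    step_name = entityName(dt.entity.synthetic_test_step)
| fieldsAdd avg_step_duration_ms = arrayAvg(step_duration_series)
| sort avg_step_duration_ms desc
```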
Metric key | Metric name | Metric description | Unit | Dimensions |
---|---|---|---|---|
dt.synthetic.browser.availability.excluding_maintenance_windows | Availability rate excluding maintenance windows (by location) [Browser monitor] | The availability rate of browser monitors excluding maintenance windows. This metric doesn't exist in the latest Dynatrace. | % | dt.entity.synthetic_test, dt.entity.synthetic_location |
dt.synthetic.browser.event.duration | Total duration (step) [Browser monitor] | The duration of individual browser monitor steps. Replaced by dt.synthetic.browser.step.duration. | ms | dt.entity.synthetic_test, dt.entity.synthetic_test_step, dt.entity.synthetic_location, dt.security_context, dt.synthetic.monitored_entity_ids |
dt.synthetic.browser.event.executions | Execution count (event) (by status) [Browser monitor] | The number of step executions. Replaced by dt.synthetic.browser.step.executions. | count | dt.entity.synthetic_test, dt.entity.synthetic_test_step, dt.entity.synthetic_location, dt.maintenance_window_ids, dt.synthetic.monitored_entity_ids, result.state, result.status.code, result.status.message, dt.security_context |
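If you still have queries built on the deprecated keys, migration should, under the assumption that only the metric key changed (the dimensions listed above are identical), amount to swapping the key, as in this sketch:

```
// Before (deprecated key):
// timeseries { execution_series = sum(dt.synthetic.browser.event.executions) },
//   by: { dt.entity.synthetic_test_step }

// After (current key) - the dimensions stay the same:
timeseries { execution_series = sum(dt.synthetic.browser.step.executions) },
  by: { dt.entity.synthetic_test_step }
```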
The following use cases show how you can use the metrics in your DQL queries.
Suppose you need to identify monitors that have availability issues.
The following query uses the dt.synthetic.browser.availability metric and the dt.entity.synthetic_test dimension to return monitors sorted by their availability, with the least available monitors listed first.
```
timeseries { availability_series = avg(dt.synthetic.browser.availability) },
  by: { dt.entity.synthetic_test }
| fieldsAdd monitor_name = entityName(dt.entity.synthetic_test)
| summarize { monitor_name = takeFirst(monitor_name),
    availability = avg(arrayAvg(availability_series)) },
  by: { dt.entity.synthetic_test }
| sort availability asc
```
The raw response looks like this:
{"records": [{"dt.entity.synthetic_test": "SYNTHETIC_TEST-06AFE815FF214729","monitor_name": "Browser monitor1","availability": 31.746031746031743},{"dt.entity.synthetic_test": "SYNTHETIC_TEST-094D2E6AA99D2252","monitor_name": "Browser monitor2","availability": 46.03174603174603},{"dt.entity.synthetic_test": "SYNTHETIC_TEST-0C362AEC36824874","monitor_name": "Browser monitor3","availability": 100},
Suppose you need to analyze the failures of browser monitors attributed to server interactions.
The following query uses the dt.synthetic.browser.step.executions metric and the dt.entity.synthetic_test, result.status.message, and result.status.code dimensions.
```
timeseries { execution_series = sum(dt.synthetic.browser.step.executions) },
  by: { result.status.message, result.status.code, dt.entity.synthetic_test },
  interval: 15m
| filter result.status.code != 0
| filter (result.status.code < 1000 and result.status.code >= 400) or in(result.status.code, array(10054, 12014, 12183))
| summarize { dt.synthetic.monitor_ids = collectArray(dt.entity.synthetic_test),
    execution_number = sum(arraySum(execution_series)),
    executions = sum(execution_series[]) },
  by: { timeframe, interval, result.status.message, result.status.code }
```
The visualized response looks like this:
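To drill down from this breakdown into a single monitor, a variation of the query can filter on one monitor before summarizing; the monitor ID below is a hypothetical placeholder, and the 400+ status-code filter is a simplified assumption.

```
// Sketch: failing step executions of one monitor, split by status code.
// SYNTHETIC_TEST-0000000000000000 is a hypothetical placeholder ID.
timeseries { execution_series = sum(dt.synthetic.browser.step.executions) },
  by: { result.status.code, dt.entity.synthetic_test },
  interval: 15m
| filter dt.entity.synthetic_test == "SYNTHETIC_TEST-0000000000000000"
| filter result.status.code >= 400
| summarize { executions = sum(arraySum(execution_series)) },
  by: { result.status.code }
| sort executions desc
```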
Suppose you need to evaluate frontend availability and performance.
The following query uses the dt.synthetic.browser.availability and dt.synthetic.browser.step.duration metrics together with the dt.synthetic.monitored_entity_ids dimension.
```
timeseries { availability_series = avg(dt.synthetic.browser.availability),
    performance_series = avg(dt.synthetic.browser.step.duration) },
  by: { dt.synthetic.monitored_entity_ids }
| expand entity = dt.synthetic.monitored_entity_ids
| filter isNotNull(entity)
| fieldsAdd entity_name = entityName(entity, type: "dt.entity.application")
| fieldsAdd frontend_name = coalesce(entity_name, entity)
| summarize { frontend_name = takeLast(frontend_name),
    availability = avg(arrayAvg(availability_series)),
    performance = avg(arrayAvg(performance_series)) },
  by: { dt.entity.application = entity }
| sort availability asc, performance desc
| limit 10
| fieldsRename `Frontend` = frontend_name, `Availability` = availability, `Average event duration` = performance
```
In the example visualized response, you can see the breakdown of availability and the average event duration for 10 frontend applications, sorted by availability with the lowest values at the top.
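To make the same breakdown easier to read on a dashboard, you could, for example, report the duration in seconds and flag frontends that miss an availability target; the 99% target and the added field names are illustrative assumptions.

```
// Sketch: availability and duration per frontend, with an assumed 99% availability target flag.
timeseries { availability_series = avg(dt.synthetic.browser.availability),
    performance_series = avg(dt.synthetic.browser.step.duration) },
  by: { dt.synthetic.monitored_entity_ids }
| expand entity = dt.synthetic.monitored_entity_ids
| filter isNotNull(entity)
| fieldsAdd frontend_name = coalesce(entityName(entity, type: "dt.entity.application"), entity)
| summarize { frontend_name = takeLast(frontend_name),
    availability = avg(arrayAvg(availability_series)),
    performance_ms = avg(arrayAvg(performance_series)) },
  by: { dt.entity.application = entity }
// Derived, illustrative fields: duration in seconds and a flag against an assumed 99% target.
| fieldsAdd performance_s = performance_ms / 1000
| fieldsAdd meets_target = if(availability >= 99, "yes", else: "no")
| sort availability asc
```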