Davis DQL examples

These examples illustrate how to build powerful and flexible health dashboards by using DQL to slice and dice all Davis-reported problems and events.

Davis problems represent results that originate from the Davis root-cause analysis runs. In Grail, Davis problems and their updates are stored as Grail events.

Davis events represent raw events that originate from various anomaly detectors within Dynatrace or within the OneAgent. Examples here are OneAgent-detected CPU saturation events or high garbage collection time events.
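To get a feel for what is stored, the following exploratory sketch (relying only on the event.kind and event.type fields used in the examples below) counts the raw Davis events per kind and type:

fetch dt.davis.events, from:now()-24h
// group the raw events by kind and type and show the most frequent combinations
| summarize count = count(), by:{event.kind, event.type}
| sort count, direction:"descending"
| limit 20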

Count the total number of problems in the last 24 hours

  • Fetches table dt.davis.problems.
  • Uses the summarize DQL command to get the total number.
  • The event.id holds the unique problem ID, which is stable across all refreshes and updates that Davis reports for the same problem.
fetch dt.davis.problems, from:now()-24h, to:now()
| summarize {problemCount = countDistinct(event.id)}

Query result

problemCount
415

Count the current number of active problems

  • Fetches table dt.davis.problems.
  • Filters for problems whose event.status is still ACTIVE.
  • Counts the distinct values of the event.id field, which contains the unique problem ID.
fetch dt.davis.problems
| filter event.status == "ACTIVE"
| summarize {activeProblems = countDistinct(event.id)}

Query result

activeProblems
15

Chart the number of problems in the last 7 days to identify a trend within your environment stability

  • Fetches table dt.davis.problems.
  • Counts in a resolution of 6-hour bins.
fetch dt.davis.problems, from:now()-7d
| makeTimeseries count(), time:timestamp, interval:6h

Query result

timeframe | interval | count
start: 22/11/2023, 11:00 end: 29/11/2023, 12:00 | 6 h | 1.000, 4.000, 1.000, null, 1.000, 3.000, null, null, 3.000, 4.000
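To see whether such a trend is driven by problems that are still open, a variant of this query (a sketch that reuses the event.status field from the examples above) splits the series by problem status:

fetch dt.davis.problems, from:now()-7d
// one series per status value, for example ACTIVE and CLOSED
| makeTimeseries count(), interval:6h, by:{event.status}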

Identify the top 10 problem-affected entities within your environment

  • Fetches table dt.davis.problems.
  • Expands the array field containing all affected entity IDs into individual records.
  • Counts all unique problems grouped by the affected entity IDs.
  • Sorts by that problem count.
  • Returns the top 10 entity IDs.
fetch dt.davis.problems
| expand affected_entity_ids
| summarize by:{affected_entity_ids}, count = countDistinct(display_id)
| sort count, direction:"descending"
| limit 10

Query result

affected_entity_ids | count
HOST-A9449CACDE12B2BF | 10
SERVICE-5624DD59D74FF453 | 5
PROCESS_GROUP_INSTANCE-3184C659684130C7 | 3

Fetch all problems for a host with the name "myhost"

A lookup join with entity attributes is performed to filter all problems that affect a host with a given name.

  • Fetches table dt.davis.problems.
  • Expands the array field containing all affected entity IDs into individual records.
  • Does a topology and entity lookup on the affected_entity_ids field.
  • Enriches the resulting records with two entity fields that are prefixed with host.: host.id and host.name.
  • Applies a filter for the host name myhost.
fetch dt.davis.problems
| expand affected_entity_ids
| lookup [fetch dt.entity.host | fields id, name = entity.name], sourceField:affected_entity_ids, lookupField:id, prefix:"host."
| filter host.name == "myhost"
| limit 3

Query result

timestamp | affected_entity_ids | host.id | host.name | display_id
5/31/2023, 1:31:39 PM | HOST-27D70086952122CF | HOST-27D70086952122CF | myhost | P-23054243
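Building on the top-10 example above, a variant (a sketch that combines the same lookup with countDistinct) reports the problem count per host name instead of per entity ID:

fetch dt.davis.problems
| expand affected_entity_ids
| lookup [fetch dt.entity.host | fields id, name = entity.name], sourceField:affected_entity_ids, lookupField:id, prefix:"host."
// keep only records where the affected entity is a host
| filter isNotNull(host.name)
| summarize by:{host.name}, count = countDistinct(display_id)
| sort count, direction:"descending"
| limit 10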

Load the last state of a given problem

The latest reported state of a single problem is loaded by filtering on its unique display identifier.

  • Fetches table dt.davis.problems.
  • Filters by the unique display identifier of the problem.
fetch dt.davis.problems
| filter display_id == "P-24051200"

Query result

timestamp | affected_entity_ids | display_id
5/31/2023, 1:31:39 PM | HOST-27D70086952122CF | P-24051200
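If the table holds several update records for the same problem, a sketch like the following (assuming the timestamp field reflects the time of each update) keeps only the most recent record:

fetch dt.davis.problems
| filter display_id == "P-24051200"
// newest update first, then keep only that record
| sort timestamp, direction:"descending"
| limit 1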

Load all active problems and exclude all those that are marked as duplicates

Fetch all active problems that are not marked as duplicates. Because the duplicate flag can appear at any point during the lifecycle of a problem, a more robust approach sorts the update events by timestamp and summarizes them by taking the last state of the duplicate and status fields before filtering (see the sketch after the query result below).

  • Fetches table dt.davis.problems.
  • Filters out problems that are marked as duplicates.
  • Filters out problems that were closed already.
fetch dt.davis.problems
| filter event.status == "ACTIVE" and dt.davis.is_duplicate == false

Query result

display_id | status | id | duplicate
P-230910385 | ACTIVE | P-230910385 | false
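A sketch of the sort-and-summarize approach described above, assuming that every problem update record carries the event.status and dt.davis.is_duplicate fields:

fetch dt.davis.problems
// oldest update first so that takeLast returns the latest state per problem
| sort timestamp, direction:"ascending"
| summarize {status = takeLast(event.status), duplicate = takeLast(dt.davis.is_duplicate)}, by:{display_id}
| filter status == "ACTIVE" and duplicate == false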

Calculate the mean time to resolve for problems over time

This example shows how to calculate the mean time needed to resolve the reported problems by averaging the duration between the start and end of each problem over time.

  • Fetches table dt.davis.problems.
  • Keeps only problems that are already closed (resolved).
  • Excludes frequent-event, duplicate, and under-maintenance problems.
  • Converts the problem durations from nanoseconds to hours and turns them into a time series of averages over the problem end time.
fetch dt.davis.problems, from:now()-7d
| filter event.status == "CLOSED"
| filter dt.davis.is_frequent_event == false and dt.davis.is_duplicate == false and maintenance.is_under_maintenance == false
| makeTimeseries `AVG Problem duration in hours` = avg(toLong(resolved_problem_duration)/3600000000000.0), time:event.end
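For a single overall mean time to resolve instead of a trend, a variant of this query (a sketch that keeps the same filters and unit conversion) replaces makeTimeseries with summarize:

fetch dt.davis.problems, from:now()-7d
| filter event.status == "CLOSED"
| filter dt.davis.is_frequent_event == false and dt.davis.is_duplicate == false and maintenance.is_under_maintenance == false
// average resolution time in hours across all problems closed in the last 7 days
| summarize {`AVG Problem duration in hours` = avg(toLong(resolved_problem_duration)/3600000000000.0)}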

Show a chart of the concurrently open problems over time

This example shows how to query data for a chart of the concurrently open problems over time by spreading each problem's open interval across the time series with the spread parameter of makeTimeseries.

  • Fetches table dt.davis.problems.
  • Creates a time series out of the problem count.
  • Fills the interval between the start and end timestamps of each problem with the correct count by using the spread parameter.
fetch dt.davis.problems
| makeTimeseries count = count(), spread: timeframe(from: event.start, to: coalesce(event.end, now()))
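To exclude duplicates from the concurrency chart, a variant (a sketch that reuses the duplicate filter from the example above) adds a filter before the time series is built:

fetch dt.davis.problems
| filter dt.davis.is_duplicate == false
| makeTimeseries count = count(), spread: timeframe(from: event.start, to: coalesce(event.end, now()))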

Chart the number of CPU saturation and high-memory events in the last 7 days

  • Fetches table dt.davis.events for the last 7 days.
  • Counts in a resolution of 60-minute bins.
fetch dt.davis.events, from:now()-7d, to:now()
| filter event.kind == "DAVIS_EVENT"
| filter event.type == "OSI_HIGH_CPU" or event.type == "OSI_HIGH_MEMORY"
| summarize count = count(), by: {`60m interval` = bin(timestamp, 60m)}

Query result

60m interval | count
5/25/2023, 3:00 PM | 146
5/25/2023, 4:00 PM | 312
5/25/2023, 5:00 PM | 201
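To chart the same breakdown as one series per event type, consistent with the earlier trend examples, a sketch that reuses the same filters with makeTimeseries:

fetch dt.davis.events, from:now()-7d, to:now()
| filter event.kind == "DAVIS_EVENT"
| filter event.type == "OSI_HIGH_CPU" or event.type == "OSI_HIGH_MEMORY"
// one 60-minute series per event type
| makeTimeseries count(), interval:60m, by:{event.type}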