Set up Davis alerts based on metrics

  • Tutorial
  • 4min

Ingested logs can trigger the opening of new Davis problems.

By combining log-based metrics with Davis anomaly detectors, you can apply different Davis analyzers to use cases ranging from simple threshold-based alerting to seasonal baselines. For example:

  • Alert when the average count of matching records exceeds a specific number within a defined time period.
  • Alert when the value of a metric is abnormal, without setting a static threshold.

Follow this guide to learn more about alerting with metrics based on logs.

If you don't need to set thresholds, follow the instructions in Set up Davis alerts based on events instead.

Prerequisites

Steps

In this example, we open a new Davis problem when the count of ingested records containing a specific phrase exceeds a static threshold.

You can find the relevant log records by opening Logs and running the following DQL query.

fetch logs
| filter matchesPhrase(content, "Dropping data because sending_queue is full")
| sort timestamp desc

If your DQL query uses parse, fieldsAdd, or other transformations, add a processing rule so that those fields are set on ingest.
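For example, a query like the following derives a field at query time (the queue-size pattern and the queue_size field name are illustrative, not taken from this guide); to base a metric on queue_size, a processing rule would first need to set that field on ingest:

fetch logs
| filter matchesPhrase(content, "Dropping data because sending_queue is full")
| parse content, "LD 'queue size: ' INT:queue_size"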

Add metric extraction configuration in OpenPipeline.

  1. Open Settings > Process and contextualize > OpenPipeline > Logs and select the Pipelines tab.

  2. Find the pipeline you want to modify, or add a new pipeline.

  3. Select > Edit. The pipeline configuration page appears.

  4. Select the Metric extraction tab.

  5. Set the following:

    • The metric name and ID.

    • The DQL matcher. A matcher defines the condition that records must meet for the metric to be extracted. It is a subset of the filtering conditions expressed in a single DQL statement.

      In Matching condition, use the matcher as shown below.

      matchesPhrase(content, "Dropping data because sending_queue is full")

If you use Segments or your permissions are set at the record level, include those conditions in the matcher.
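For example, a matcher narrowed to a single Kubernetes namespace (the field and value here are illustrative) could look like:

matchesPhrase(content, "Dropping data because sending_queue is full") and k8s.namespace.name == "production"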

There are situations when a matcher can't be easily extracted from a DQL statement. In these cases, you can create log alerts for a log event or summary of log data.

  6. Add dimensions. For most logs, you can enable automated correlation to entities in Davis root cause analysis. To do this, add a dt.source_entity dimension or any other field that contains an entity identifier.

Go to Davis Anomaly Detection and create a new anomaly detector.

This section describes how to create a simple anomaly detector.

If you need to set additional advanced properties and fine-tune your alert, use the Advanced mode.

  1. Set the scope for your alert.

  2. Use DQL syntax to query the metric you created. To connect your alert to a monitored entity, make sure to add by: {dt.source_entity}.

  3. Define the alerting conditions under which a new Davis event will be generated. You can pick different Davis anomaly detection analyzers.

    • Use Suggest values to find the right threshold.
    • Use Preview to get an estimation of how many alerts would have been generated in the last two hours.
  4. Finally, set the event details, such as the title and description.
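The metric query in step 2 might look like the following (the metric ID log.dropped_data_count is an assumption standing in for whatever name and ID you chose during metric extraction):

timeseries avg(log.dropped_data_count), by: {dt.source_entity}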

When the alerting conditions are met, you will see a new problem in the Problems app.

Conclusion

Here's when to use a Davis anomaly detector with metrics based on logs:

  • You need to set thresholds or use other machine-learning analyzers to trigger alerts.
  • You want to alert on anomalies in a value coming from a log field, such as http.response_time.
  • Your metric dimensions have low cardinality.

Keep in mind that metric analyzers run every minute, so this is not a real-time alerting method.

Detected anomalies can trigger automations using simple workflows as described in Create a simple workflow in Dynatrace Workflows.

Related tags
Log Analytics