Set up Alert Reduction Agent

  • Latest Dynatrace
  • How-to guide
  • Published Jan 28, 2026
  • Preview

The Alert Reduction Agent workflow minimizes alert fatigue by analyzing alert data stored in Grail. It identifies overalerting configurations by grouping alerts on the dt.settings.object_id reference field.

To receive alert fatigue reports, enter your email address in the Alert Reduction Agent workflow. Through DQL queries, the workflow counts alerts per configuration and entity, highlighting noisy or excessive alerts. It then recommends optimizing alert settings by adjusting thresholds, sensitivity, or observation windows, or by switching alert models (for example, static thresholds versus seasonal baselines).

This process helps reduce noise, ensures meaningful alerts, and improves overall monitoring efficiency.

Prerequisites

Get started

1. Set up the workflow

  1. Enter your email address in the to field of the send_alert_fatique_report task to receive alert fatigue reports.
  2. This ensures you are notified about overalerting issues and can act on them promptly.

2. Analyze alerts per configuration

  1. Use the following DQL query to count how many alerts were triggered by each alert configuration:

    fetch dt.davis.events, from:-24h, to:now()
    | filter isNotNull(dt.settings.object_id)
    | summarize count=count(), by:{dt.settings.object_id, dt.settings.schema_id, event.name, event.category}
    | sort count desc
  2. This identifies which configurations generate the most alerts and may therefore need optimization.
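
  To see whether a noisy configuration alerts steadily or in bursts, the query above can be extended with a per-day breakdown. This is a sketch using the same dt.davis.events fields; the 7-day window and the day granularity are illustrative choices:

    fetch dt.davis.events, from:-7d, to:now()
    | filter isNotNull(dt.settings.object_id)
    | fieldsAdd day = formatTimestamp(timestamp, format:"yyyy-MM-dd")
    | summarize count=count(), by:{dt.settings.object_id, day}
    | sort count desc

  A configuration that alerts heavily on only one or two days often points to a transient incident rather than a miscalibrated threshold.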

3. Analyze alerts per entity

  1. Use this DQL query to count how many alerts were triggered for a specific alert configuration by entity:

    fetch dt.davis.events, from:-24h, to:now()
    | filter dt.settings.object_id == "<specific_object_id>"
    | summarize count=count(), by:{dt.source_entity, dt.source.entity.name, event.name, event.category}
    | sort count desc
  2. This identifies entities responsible for excessive alerts, helping pinpoint overalerting sources.
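
  To judge whether a noisy configuration is dominated by a few entities or spread across many, a variation of the query above can compare the total alert count with the number of distinct entities. This is a sketch using the same fields; note that countDistinct returns an approximate count for high cardinalities:

    fetch dt.davis.events, from:-24h, to:now()
    | filter dt.settings.object_id == "<specific_object_id>"
    | summarize alerts=count(), affected_entities=countDistinct(dt.source_entity)

  A high alert count concentrated on few entities suggests a problem on those entities; the same count spread across many entities suggests the configuration itself is too sensitive.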

4. Optimize spammy alerts

  1. Review the identified alert settings (for example, thresholds, DQL, observation windows).
  2. Apply the following optimizations to reduce noise:
    • Switch alert models: Use seasonal baselines instead of static thresholds for metrics with recurring daily or weekly patterns.
    • Adjust thresholds: Modify sensitivity or thresholds to better align with expected behavior.
    • Expand observation windows: Increase the sliding window or required violating samples to filter out short-term noise.

5. Monitor and iterate

  1. Continuously monitor the optimized alerts and refine configurations as needed.
  2. Use the alert fatigue reports to track improvements in alerting efficiency and reduce unnecessary notifications.
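
  To verify that a tuning change actually reduced noise, you can compare alert counts before and after the change. This is a hedged sketch over the same dt.davis.events data; the 7-day split assumes the change happened roughly a week ago, so adjust both windows to your actual change date:

    fetch dt.davis.events, from:-14d, to:now()
    | filter dt.settings.object_id == "<specific_object_id>"
    | fieldsAdd period = if(timestamp >= now() - 7d, "after", else: "before")
    | summarize count=count(), by:{period}

  If the "after" count is not meaningfully lower, revisit the threshold, sensitivity, or observation-window changes from the previous step.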
Related tags

  • Dynatrace Platform
  • Dynatrace AI
  • Generative AI for Workflows