Health alerts detect critical anomalies in key frontend metrics, trigger problem creation, and enable root-cause analysis. We recommend applying all available health alerts to your web or mobile frontends so you can detect critical slowdowns or errors.
This guide walks you through configuring health alerts for existing frontends, explains the available alert keys and detection parameters, and highlights key differences from RUM Classic alerting.
Health alerts require a service user with the following permissions: storage:metrics:read, storage:buckets:read, and davis:analyzers:execute. You can select or create one during the configuration flow.

The alert keys you can configure depend on the frontend type.
| Alert key | Web | Mobile | Anomaly detection analyzer |
|---|---|---|---|
| Largest Contentful Paint (LCP) | Yes | — | Auto-adaptive |
| Interaction to Next Paint (INP) | Yes | — | Auto-adaptive |
| Cumulative Layout Shift (CLS) | Yes | — | Auto-adaptive |
| Request errors | Yes | Yes | Seasonal baseline |
| Exceptions | Yes | Yes | Seasonal baseline |
| Crashes | — | Yes | Seasonal baseline |
| App start duration | — | Yes | Auto-adaptive |
Synthetic outage alerts are configured separately in Synthetic and are displayed alongside health alerts in Experience Vitals.
You can configure health alerts at any time from the frontend settings. Alerts can also be set up during frontend creation—see the initial setup guides for web and mobile frontends.
To configure health alerts for an existing frontend, open Experience Vitals, go to the Explorer view, and select the web or mobile frontend you want to configure.

You can also enable or disable existing alerts using the toggle in the alerts table, or select an alert to edit its configuration.
In addition, you can manage health alerts via the Settings API. For details, see Settings API - Frontend health alerts schema table.
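As a sketch of what a Settings API request body might look like, the snippet below builds a payload for the POST /api/v2/settings/objects endpoint. The schemaId, scope value, and property names are illustrative assumptions only; consult the Frontend health alerts schema table for the actual field names.

```python
import json

# Hypothetical Settings API payload for creating one health alert.
# The schemaId, scope, and value keys below are assumptions, not the
# documented schema; verify them against the schema table.
payload = [
    {
        "schemaId": "builtin:frontend.health-alerts",  # hypothetical schema ID
        "scope": "FRONTEND-1234567890ABCDEF",          # hypothetical frontend entity ID
        "value": {
            "alertKey": "LARGEST_CONTENTFUL_PAINT",    # illustrative alert key
            "enabled": True,
            "numberOfSignalFluctuations": 1,           # defaults from the tables below
            "violatingSamples": 3,
            "slidingWindow": 5,
            "dealertingSamples": 5,
        },
    }
]

# POST this JSON to /api/v2/settings/objects with an API token
# that has settings write permission.
print(json.dumps(payload, indent=2))
```

Remember that the 300-alerts-per-environment limit also applies to alerts created through the API.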
The maximum number of health alerts per environment is 300. This limit applies across all frontends in your environment. When the limit is reached, the UI returns an error and you must delete an existing alert before saving a new one.
When you add or edit an alert from the frontend Settings tab, you can switch to Advanced mode to customize the anomaly detection behavior. The available parameters depend on the anomaly detection analyzer used for the selected alert key. For a detailed explanation of how each model calculates thresholds and evaluates violations, see Anomaly detection configuration.
Used for: LCP, INP, CLS, App start duration.
| Parameter | Description | Default | Range |
|---|---|---|---|
| Number of signal fluctuations | Number of times the signal fluctuation (interquartile range) is added to the baseline to produce the threshold. Higher values reduce sensitivity. See Auto-adaptive thresholds for anomaly detection for more. | 1 | 1–10 |
| Violating samples | Number of one-minute samples within the sliding window that must violate the threshold before an alert is raised. | 3 | 1–60 |
| Sliding window | Number of one-minute samples per 5-minute detection cycle. Must be greater than violating samples. | 5 | 1–60 |
| Dealerting samples | Number of one-minute normal samples that must remain within the threshold to close the event. | 5 | 1–60 |
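The auto-adaptive parameters above can be illustrated with a simplified sketch. This is not Dynatrace's actual implementation, just the logic the table describes: the threshold is the baseline plus N signal fluctuations (interquartile range), and an alert is raised when enough one-minute samples in the sliding window exceed it.

```python
def auto_adaptive_threshold(baseline, iqr, fluctuations=1):
    """Threshold = baseline plus N signal fluctuations (IQR).
    More fluctuations -> higher threshold -> lower sensitivity."""
    return baseline + fluctuations * iqr

def evaluate_window(samples, threshold, violating_samples=3):
    """Raise an alert when at least `violating_samples` one-minute
    samples in the sliding window exceed the threshold."""
    violations = sum(1 for s in samples if s > threshold)
    return violations >= violating_samples

# Example: an LCP baseline of 2.0 s with an IQR of 0.5 s and the
# default of 1 signal fluctuation gives a 2.5 s threshold.
threshold = auto_adaptive_threshold(2.0, 0.5, fluctuations=1)

# Five one-minute samples (the default 5-sample sliding window);
# three of them exceed 2.5 s, matching the default of 3 violating samples.
window = [2.1, 2.7, 2.8, 2.6, 2.4]
alert = evaluate_window(window, threshold)
```

Raising Number of signal fluctuations from 1 to 2 would move the threshold in this example from 2.5 s to 3.0 s, and none of the samples would violate it.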
Used for: Request errors, Exceptions, Crashes.
| Parameter | Description | Default | Range |
|---|---|---|---|
| Tolerance | Controls the width of the confidence band around the seasonal baseline. Higher values produce a broader band, leading to fewer triggered events. See Seasonal baseline for more. | 4 | 1–10 |
| Violating samples | Number of one-minute samples within the sliding window that must violate the threshold before an alert is raised. | 3 | 1–60 |
| Sliding window | Number of one-minute samples per 5-minute detection cycle. Must be greater than violating samples. | 5 | 1–60 |
| Dealerting samples | Number of one-minute normal samples that must remain within the threshold to close the event. | 5 | 1–60 |
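The effect of the Tolerance parameter can be sketched as follows. This is a simplified model, not the product's algorithm: the band is assumed to be the seasonal expectation plus or minus the tolerance times a typical deviation, and error-type metrics are assumed to alert only on the high side.

```python
def seasonal_band(expected, deviation, tolerance=4):
    """Confidence band around the seasonal baseline. A higher
    tolerance widens the band, so fewer events are triggered."""
    return (expected - tolerance * deviation, expected + tolerance * deviation)

def violates(observed, expected, deviation, tolerance=4):
    """Assumption: error counts only alert when they rise above the band."""
    low, high = seasonal_band(expected, deviation, tolerance)
    return observed > high

# Example: a seasonal expectation of 10 request errors/min with a
# typical deviation of 2. The default tolerance of 4 gives a band of
# (2, 18): 25 errors/min violates it, 15 errors/min does not.
spike_violates = violates(25, 10, 2)
normal_ok = violates(15, 10, 2)
```

With a tolerance of 8, the band would widen to (-6, 26) and the 25 errors/min spike would no longer trigger an event.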
For an event to close, both the violating samples and the dealerting samples criteria must be met. See Anomaly detection configuration for more.
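The two-part closing rule can be sketched like this. This is a simplified model under stated assumptions, not the exact product behavior: the event closes only when the sliding window no longer meets the violating-samples criterion and the most recent dealerting samples are all within the threshold.

```python
def event_closes(recent, threshold, violating_samples=3,
                 sliding_window=5, dealerting_samples=5):
    """Sketch of the closing rule: the window must no longer contain
    enough violating samples, AND the latest `dealerting_samples`
    values must all be within the threshold."""
    window = recent[-sliding_window:]
    still_violating = sum(1 for s in window if s > threshold) >= violating_samples
    tail = recent[-dealerting_samples:]
    dealerted = len(tail) == dealerting_samples and all(s <= threshold for s in tail)
    return (not still_violating) and dealerted

# A recovering signal: three violating minutes, then five normal minutes.
history = [3.1, 3.0, 2.9, 2.2, 2.1, 2.0, 2.3, 2.4]
closes = event_closes(history, threshold=2.5)
```

In this example the last five samples are all within the 2.5 threshold and the current window contains no violations, so both criteria are met and the event closes.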
Problems

Health alerts trigger problem creation, impact analysis, and root-cause analysis, and therefore appear in Problems alongside issues from other sources. From there, you can continue your investigation in Experience Vitals or Error Inspector as described below. See Investigate and remediate active problems for more.
Experience Vitals

The Health alerts column shows one chip per alert group:
Each group maps to the following alert keys:
Each chip reflects the highest-severity status for its group:
Chips appear only when alert events exist in the selected timeframe; frontends with no events show –.
You can filter by Alert status using the Quick filters bar.
Chips for all relevant alert groups (same groups as in the Explorer view) are always visible, regardless of which tab is open.
Each chip reflects the current status:
Select any chip or annotation to open the health alert overlay, which shows the details, affected metric, and detection time of all events in that group. From the overlay you can view the metric chart, investigate a linked problem in context, or edit the alert configuration.
If a health alert event is linked to a problem, select Investigate Problem to open a problem-scoped view within Experience Vitals. The timeframe adjusts to the problem's duration so you can assess the frontend impact in context.
Error Inspector

When a health alert triggers a problem related to frontend errors, you can open the problem in Problems and select Analyze errors. This takes you to Error Inspector with the problem context passed automatically. The Overview and Explorer pages are scoped to show only errors from the affected frontends, and the relevant error type (request errors, exceptions, or crashes) is pre-selected based on the problem's category.
For details, see Error Inspector.
Health alerts in Experience Vitals replace the per-entity anomaly detection available in RUM Classic. Key differences:
| Aspect | RUM Classic alerting | Health alerts (Latest) |
|---|---|---|
| Scope | Applied globally or per entity type | Configured per frontend |
| Metrics | Browser, geolocation, and OS breakdown (baseline cube) | Aggregate metric per frontend, except for mobile app start duration, which splits by app start type |
| Detection analyzers | Static and auto-adaptive | Seasonal baseline and auto-adaptive |
| Configuration | Classic UI or Settings API | UI or Settings API |
| Alert visualization | Problems | Problems + health chips and overlays |
| Service user | Not required | Required for analyzer execution |
RUM Classic anomaly detection and health alerts operate independently and can be used in parallel. We recommend running both during the transition to Latest Dynatrace: health alerts provide more accurate user and session impact in problems and per-frontend alerting granularity, while Classic anomaly detection retains page-level thresholds and dimensional segmentation (browser, geolocation) that is not yet available in health alerts.

If both are active for the same metric, the resulting problems are either merged by Dynatrace Intelligence (when they share the same category and entity) or fire separately; either way, your alert coverage becomes redundant. To avoid this, review your Classic anomaly detection settings after configuring health alerts and disable any Classic rules that overlap.
If alerts are not triggering as expected: