Health alerts for frontends

  • Latest Dynatrace
  • How-to guide
  • 6-min read
  • Published Mar 26, 2026

Health alerts detect critical anomalies in key frontend metrics, trigger problem creation, and enable root-cause analysis. We recommend applying all available health alerts for your web or mobile frontends to detect critical slowdowns or errors.

This guide walks you through configuring health alerts for existing frontends, explains the available alert keys and detection parameters, and highlights key differences from RUM Classic alerting.

Before you begin

Prerequisites
  • You have the permissions described in New RUM Experience permissions.
  • A service user on whose behalf the anomaly detection analyzer runs queries. The service user requires the following permissions: storage:metrics:read, storage:buckets:read, and davis:analyzers:execute. You can select or create one during the configuration flow.

Available alert keys

The alert keys you can configure depend on the frontend type.

| Alert key | Web | Mobile | Anomaly detection analyzer |
|---|---|---|---|
| Largest Contentful Paint (LCP) | Yes | – | Auto-adaptive |
| Interaction to Next Paint (INP) | Yes | – | Auto-adaptive |
| Cumulative Layout Shift (CLS) | Yes | – | Auto-adaptive |
| Request errors | Yes | Yes | Seasonal baseline |
| Exceptions | Yes | Yes | Seasonal baseline |
| Crashes | – | Yes | Seasonal baseline |
| App start duration | – | Yes | Auto-adaptive |

Synthetic outage alerts are configured separately in Synthetic and are displayed alongside health alerts in Experience Vitals.

Configure health alerts

You can configure health alerts at any time from the frontend settings. Alerts can also be set up during frontend creation—see the initial setup guides for web and mobile frontends.

To configure health alerts for an existing frontend

  1. In Experience Vitals, go to the Explorer view and select the web or mobile frontend you want to configure.
  2. Go to the Settings tab.
  3. Select Health Alert.
  4. Select Add New alert to create a new alert. To edit an existing alert, select it from the list.
  5. In Set scope, select the Alert key for the metric you want to monitor.
  6. Set Actor to a service user.
  7. Optional: To customize the anomaly detection parameters, switch to Advanced mode and select Next to expand Define alert condition. For details, see Advanced parameters.
  8. Select Next to expand Add details.
  9. Optional: In Title, adapt the default title for the alert.
  10. Select Add.

You can also enable or disable existing alerts using the toggle in the alerts table, or select an alert to edit its configuration.

In addition, you can manage health alerts via the Settings API. For details, see Settings API - Frontend health alerts schema table.
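As a sketch of what API-based management could look like, the Python snippet below assembles a settings object for one alert. The schema ID, field names, and alert-key values are illustrative assumptions, not the authoritative schema; consult the Settings API - Frontend health alerts schema table for the actual structure.

```python
import json

# Sketch: managing a frontend health alert through the Settings API.
# CAUTION: the schema ID, field names, and alert-key values below are
# illustrative assumptions, not the authoritative schema -- see the
# Settings API - Frontend health alerts schema table for real names.

def build_health_alert_object(frontend_id: str, alert_key: str,
                              actor: str, title: str) -> dict:
    """Assemble one hypothetical settings object for a health alert."""
    return {
        "schemaId": "app:dynatrace.frontend:health-alerts",  # assumed ID
        "scope": frontend_id,                                # target frontend
        "value": {
            "alertKey": alert_key,   # e.g. "LCP" or "REQUEST_ERRORS" (assumed)
            "actor": actor,          # service user the analyzer runs as
            "title": title,
            "enabled": True,
        },
    }

payload = [build_health_alert_object(
    "FRONTEND-1234567890ABCDEF", "LCP",
    "service-user-id", "LCP degradation on web shop")]

# The list would then be POSTed to the Settings objects endpoint with a
# token that has settings write permission (not executed here):
print(json.dumps(payload, indent=2))
```

Keeping the payload construction in a helper like this makes it easy to create the same alert set across many frontends in one API call.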

The maximum number of health alerts per environment is 300. This limit applies across all frontends in your environment. When the limit is reached, the UI returns an error and you must delete an existing alert before saving a new one.

Advanced parameters

When you add or edit an alert from the frontend Settings tab, you can switch to Advanced mode to customize the anomaly detection behavior. The available parameters depend on the anomaly detection analyzer used for the selected alert key. For a detailed explanation of how each model calculates thresholds and evaluates violations, see Anomaly detection configuration.

Auto-adaptive threshold

Used for: LCP, INP, CLS, App start duration.

| Parameter | Description | Default | Range |
|---|---|---|---|
| Number of signal fluctuations | Number of times the signal fluctuation (interquartile range) is added to the baseline to produce the threshold. Higher values reduce sensitivity. See Auto-adaptive thresholds for anomaly detection for more. | 1 | 1–10 |
| Violating samples | Number of one-minute samples within the sliding window that must violate the threshold before an alert is raised. | 3 | 1–60 |
| Sliding window | Number of one-minute samples per 5-minute detection cycle. Must be greater than violating samples. | 5 | 1–60 |
| Dealerting samples | Number of one-minute normal samples that must remain within the threshold to close the event. | 5 | 1–60 |
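To make the interplay of these parameters concrete, here is a simplified Python sketch of an auto-adaptive evaluation. The median/IQR baseline computation is a stand-in for illustration only, not the actual Davis analyzer, which derives its baseline from the signal's history with its own model.

```python
import statistics

# Simplified illustration of the auto-adaptive parameters above.
# The median/IQR baseline is a stand-in for the actual Davis analyzer.

def auto_adaptive_threshold(history, n_fluctuations=1):
    """Baseline plus n_fluctuations times the interquartile range."""
    q1, _, q3 = statistics.quantiles(history, n=4)
    baseline = statistics.median(history)
    return baseline + n_fluctuations * (q3 - q1)

def window_violates(samples, threshold, violating_samples=3, sliding_window=5):
    """True if enough one-minute samples in the window exceed the threshold."""
    window = samples[-sliding_window:]
    return sum(s > threshold for s in window) >= violating_samples

history = [2.1, 2.3, 2.0, 2.4, 2.2, 2.5, 2.1, 2.3]  # past LCP values in seconds
threshold = auto_adaptive_threshold(history, n_fluctuations=1)
recent = [2.2, 3.9, 4.1, 4.0, 2.3]                  # last five one-minute samples
print(window_violates(recent, threshold))           # three of five samples violate
```

Raising Number of signal fluctuations widens the gap between baseline and threshold, so short spikes in `recent` would no longer count as violations.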

Seasonal baseline

Used for: Request errors, Exceptions, Crashes.

| Parameter | Description | Default | Range |
|---|---|---|---|
| Tolerance | Controls the width of the confidence band around the seasonal baseline. Higher values produce a broader band, leading to fewer triggered events. See Seasonal baseline for more. | 4 | 1–10 |
| Violating samples | Number of one-minute samples within the sliding window that must violate the threshold before an alert is raised. | 3 | 1–60 |
| Sliding window | Number of one-minute samples per 5-minute detection cycle. Must be greater than violating samples. | 5 | 1–60 |
| Dealerting samples | Number of one-minute normal samples that must remain within the threshold to close the event. | 5 | 1–60 |

For an event to close, both the violating samples and the dealerting samples criteria must be met. See Anomaly detection configuration for more.
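The open/close lifecycle described above can be illustrated with a toy evaluator. Threshold, sample values, and the per-sample loop are made up for illustration; the real analyzer evaluates in 5-minute detection cycles.

```python
# Toy evaluator for the open/close rule: an event opens once `violating`
# samples inside the sliding window exceed the threshold, and closes only
# after `dealerting` consecutive normal samples AND a window that no
# longer violates. Values are made up for illustration.

def evaluate_event(samples, threshold, violating=3, window=5, dealerting=5):
    is_open = False
    normal_streak = 0
    transitions = []
    for i, s in enumerate(samples):
        normal_streak = 0 if s > threshold else normal_streak + 1
        recent = samples[max(0, i - window + 1):i + 1]
        bad_in_window = sum(x > threshold for x in recent)
        if not is_open and bad_in_window >= violating:
            is_open = True
            transitions.append(("open", i))
        elif is_open and normal_streak >= dealerting and bad_in_window < violating:
            is_open = False
            transitions.append(("close", i))
    return transitions

stream = [1, 1, 5, 5, 5, 1, 1, 1, 1, 1, 1]  # error-rate samples, threshold 2
print(evaluate_event(stream, threshold=2))  # [('open', 4), ('close', 9)]
```

Note how the event stays open through the first few normal samples: closing requires the full dealerting streak, which prevents flapping between open and closed during a noisy recovery.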

How health alerts appear

In Problems

Health alerts trigger problem creation, impact analysis, and root-cause analysis, and therefore appear in Problems alongside issues from other sources. From there, you can continue your investigation in Experience Vitals or Error Inspector as described below. See Investigate and remediate active problems for more.

In Experience Vitals

In the Explorer

  • The Health alerts column shows one chip per alert group:

    • Availability, Core web vitals, and Errors for web frontends
    • Availability, Errors, Crashes, and Slowdown for mobile frontends.
  • Each group maps to the following alert keys:

    • Core web vitals: LCP, INP, CLS
    • Errors: Request errors, Exceptions
    • Crashes: Crashes
    • Slowdown: App start duration
    • Availability: Synthetic outage alerts (configured separately in Synthetic Monitoring)
  • Each chip reflects the highest-severity status for its group:

    • critical for active events
    • closed when all events in the timeframe are resolved.
  • Chips appear only when alert events exist in the selected timeframe; frontends with no events show no chip.

  • You can filter by Alert status using the Quick filters bar.

In the frontend details header

  • Chips for all relevant alert groups (same groups as in the Explorer view) are always visible, regardless of which tab is open.

  • Each chip reflects the current status:

    • critical when at least one event is active
    • closed when all events are resolved
    • healthy when the alert group is configured and no events exist
    • unmonitored when the alert group is not configured.

In the performance analysis

  • Health alerts are also shown as annotations in affected metric charts (Core web vitals, Error rate, Crash rate, App start duration):
    • critical for active events
    • closed when all events are resolved.
  • Affected metrics are highlighted in the Pages and Views breakdowns:
    • critical for active events. During problem investigation, only the metrics affected by the investigated problem are marked as critical.

Select any chip or annotation to open the health alert overlay, which shows the details, affected metric, and detection time of all events in that group. From the overlay you can view the metric chart, investigate a linked problem in context, or edit the alert configuration.

If a health alert event is linked to a problem, select Investigate Problem to open a problem-scoped view within Experience Vitals. The timeframe adjusts to the problem's duration so you can assess the frontend impact in context.

In Error Inspector

When a health alert triggers a problem related to frontend errors, you can open the problem in Problems and select Analyze errors. This takes you to Error Inspector with the problem context passed automatically. The Overview and Explorer pages are scoped to only show errors from the affected frontends, and the relevant error type (request errors, exceptions, or crashes) is pre-selected based on the problem's category.

For details, see Error Inspector.

Differences from RUM Classic alerting

Health alerts in Experience Vitals replace the per-entity anomaly detection available in RUM Classic. Key differences:

| Aspect | RUM Classic alerting | Health alerts (Latest) |
|---|---|---|
| Scope | Applied globally or per entity type | Configured per frontend |
| Metrics | Browser, geolocation, and OS breakdown (baseline cube) | Aggregate metric per frontend, except for mobile app start duration, which splits by app start type |
| Detection analyzers | Static and auto-adaptive | Seasonal baseline and auto-adaptive |
| Configuration | Classic UI or Settings API | UI or Settings API |
| Alert visualization | Problems | Problems + health chips and overlays |
| Service user | Not required | Required for analyzer execution |

RUM Classic anomaly detection and health alerts operate independently and can be used in parallel. We recommend running both during the transition to Latest Dynatrace: health alerts provide more accurate user and session impact in problems and per-frontend alerting granularity, while Classic anomaly detection retains page-level thresholds and dimensional segmentation (browser, geolocation) not yet available in health alerts. If both are active for the same metric, the resulting problems are merged by Dynatrace Intelligence when they share the same category and entity, or they may fire separately; either way, your alert coverage becomes redundant. To avoid this, review your Classic anomaly detection settings after configuring health alerts and disable any Classic rules that overlap.

Troubleshooting

If alerts are not triggering as expected:

  • Verify the service user: the selected service user must have the permissions listed in the prerequisites. If the service user was deleted or its permissions changed, the analyzer cannot execute.
  • Check the timeframe: health alert events are displayed based on the timeframe selected in the app. If you select a timeframe before the alert was active, no events will appear.