AI Observability provides end‑to‑end visibility for AI workloads, across services, LLMs, agents, and protocols.
To use AI Observability, you need:
A Dynatrace Platform Subscription license with the following rate-card capabilities:
The following permissions:
- davis:analyzers:execute
- environment-api:entities:read
- storage:entities:read
- storage:metrics:read
- storage:spans:read

Some out-of-the-box AI Observability dashboards use span queries, which consume Traces powered by Grail - Query. This applies even if AI Observability isn't fully configured yet, or the dashboards show no data.
To control your trace consumption, you can:
Note that we're currently working on reducing costs for both AI Observability and Dashboards by moving away from span queries.
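For context, the span queries such dashboards run are expressed in DQL against Grail. The following is a hypothetical sketch only, not a query taken from the dashboards; the `gen_ai.*` attribute names are assumptions based on the OpenTelemetry GenAI semantic conventions:

```
// Count LLM request spans per model (illustrative sketch).
fetch spans
| filter isNotNull(gen_ai.system)
| summarize requests = count(), by: { gen_ai.request.model }
```

Any `fetch spans` query of this shape contributes to Traces powered by Grail - Query consumption, which is why the dashboards consume traces even when they show no data.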
AI Observability has an integrated onboarding flow that guides you through all the required steps to get started and start ingesting data.
You can get data from:
Additionally, you can instrument your AI applications and services directly using OpenTelemetry with GenAI semantic conventions for full control and standardized observability across your entire stack.
Here's how the different tabs in AI Observability work, and what you'll use them for.
The tabs are: Overview, Service Health, and Explorer.
For information about GenAI concepts in Dynatrace, see Terms and concepts about AI Observability and GenAI in Dynatrace.
The Overview tab is your starting point to:
In this tab, you can:
Use the tiles to view your AI landscape at a glance. See model providers, agents, model versions, and services, plus activity such as LLM requests, token usage, and cost trends.
Select any tile to open the Service Health tab and drill down with deeper analysis. You can validate errors, review traffic and latency, monitor token and cost behavior, and observe guardrail outcomes.
Open ready‑made dashboards for popular AI services or select Browse all dashboards to find dashboards tagged with [AI Observability]. Dashboards include navigation that redirects back into the app for contextual analysis.
Service Health lets you get a unified view of the operational state of your AI services. It is organized into focused tabs, so you can move from a high-level pulse to root cause in a couple of clicks.
In this tab, you can:
Analyze all services, or quickly filter by service category or other predefined attributes.
See counts for services, models, and agents.
See model requests, token usage, average request duration, and overall cost.
Track errors with details such as success/failure rate, the number of problems, and error counts and rates over time.
Monitor traffic and latency, and create alerts for latency regressions.
Analyze costs related to token usage, identify cost hot spots, and set proactive cost alerts.
Observe provider-reported guardrail outcomes.
Dynatrace does not enforce runtime guardrails. Providers expose these signals, which we capture and visualize.
Configure guardrails at the provider level for lowest latency and complexity.
The Explorer tab is the shared Dynatrace interface for monitoring and analyzing different technology domains. It defines a common layout with consistent filtering, perspectives, drill‑down navigation, and unified analysis.
For more AI Observability use cases, see Sample use cases for AI Observability and Dynatrace.
To create a new alert, select New alert on metrics-based tiles (for example, Invocation error count, Invocation latency, Token count, Token usage forecast, and Overall guardrail activation). The alert wizard opens, pre-filled with the current scope, so you can fine-tune thresholds and notifications.
To manage alerts, use the Manage all alerts action from any tab.
You can review, edit, and mute custom alerts created from Service Health cards and charts.
You can also create a new alert directly from most tiles.
For information about all custom alerts, capabilities, and limits, see Anomaly Detection.
AI Observability integrates with Distributed Tracing, and traces are enriched with GenAI fields.
The trace list is pre-scoped and laid out so that only relevant requests appear and GenAI context is front and center, for faster investigation.
To view traces related to many of the AI Observability tiles and interactions: