This FAQ page provides answers to your most common questions about how AI Observability works within Dynatrace.
Use the AI Observability onboarding page to configure OpenTelemetry/OpenLLMetry and to define permissions, sampling strategies, and tokens.
It includes scenario‑based guidance (for example, "data not in", "no access") and validation tools to confirm successful ingestion.
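As a rough illustration of what the onboarding flow produces, the sketch below assembles OTLP/HTTP exporter settings for sending traces to a Dynatrace environment. This is a minimal sketch, not the official setup: the environment-variable names (`DT_ENV_URL`, `DT_API_TOKEN`) are assumptions, and you should take the exact endpoint and token scopes from your tenant's onboarding page.

```python
import os

# Hedged sketch: DT_ENV_URL and DT_API_TOKEN are assumed variable names,
# not ones the onboarding page prescribes.
DT_ENV_URL = os.environ.get("DT_ENV_URL", "https://abc12345.live.dynatrace.com")
DT_API_TOKEN = os.environ.get("DT_API_TOKEN", "dt0c01.SAMPLE")


def otlp_trace_config(env_url: str, token: str) -> dict:
    """Build OTLP/HTTP exporter settings for Dynatrace trace ingest."""
    return {
        # Dynatrace's OTLP trace ingest endpoint lives under /api/v2/otlp
        "endpoint": f"{env_url}/api/v2/otlp/v1/traces",
        # The API token needs the OpenTelemetry trace ingest scope
        "headers": {"Authorization": f"Api-Token {token}"},
    }


config = otlp_trace_config(DT_ENV_URL, DT_API_TOKEN)
```

These settings map directly onto the standard `OTEL_EXPORTER_OTLP_ENDPOINT` and `OTEL_EXPORTER_OTLP_HEADERS` environment variables if you prefer configuring the SDK that way.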
Within AI Observability, use the Service Health page to see the following metrics.
Views are customizable, and you can add DQL‑based metrics.
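To give a flavor of a DQL-based custom metric, the sketch below builds a query string that sums input-token usage per model. Both the span attribute names (taken from the OpenTelemetry GenAI semantic conventions) and the query shape are assumptions; match them to what your instrumentation actually emits before adding the tile.

```python
# Hedged sketch: attribute names (gen_ai.request.model,
# gen_ai.usage.input_tokens) follow the OTel GenAI semantic conventions
# and may differ from what your spans carry.
def token_usage_dql(model_filter: str) -> str:
    """Build a DQL query summing input tokens for one model."""
    return (
        "fetch spans "
        f'| filter gen_ai.request.model == "{model_filter}" '
        "| summarize total_input_tokens = sum(gen_ai.usage.input_tokens)"
    )


query = token_usage_dql("gpt-4o")
```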
Dynatrace does not execute or enforce guardrail runtime protection. Guardrail enforcement happens at the model/provider level during inference (for example, Amazon Bedrock Guardrails). The provider then exposes results (such as whether a guardrail intervened) via response payloads and/or provider metrics.
AI Observability instruments these provider signals and surfaces them for monitoring and analysis.
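As one concrete example of such a provider signal, the sketch below checks a response payload for a guardrail intervention. The payload shape mirrors the Amazon Bedrock Converse API, where `stopReason` can be `"guardrail_intervened"`; the sample response itself is fabricated for illustration, so verify field names against your provider's documentation.

```python
# Hedged sketch: the stopReason convention below is from the Amazon
# Bedrock Converse API; other providers expose intervention differently.
def guardrail_intervened(response: dict) -> bool:
    """Return True if the provider reports a guardrail intervention."""
    return response.get("stopReason") == "guardrail_intervened"


# Fabricated sample response, roughly in the Converse API shape
sample = {
    "stopReason": "guardrail_intervened",
    "output": {"message": {"role": "assistant",
                           "content": [{"text": "Blocked by policy."}]}},
}

print(guardrail_intervened(sample))  # prints True
```

An instrumentation layer can record this flag as a span attribute or metric dimension, which is the kind of signal AI Observability then surfaces for monitoring.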
This includes pre-filled alert creation flows and real‑time monitoring with customizable views.
You need to configure guardrails with your model provider. Once configured, Dynatrace ingests and displays the resulting guardrail outcomes and metrics so you can observe behavior and trends centrally.
You can build on top of these guardrail metrics in Dynatrace just like with other AI observability signals: create custom alerts and notifications, add tiles to dashboards, and trigger workflows.
Yes. OpenTelemetry and OpenLLMetry integrations cover:
For examples in GitHub, see Dynatrace AI Agent instrumentation examples.
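To show roughly what these integrations record, the sketch below builds the span attributes one LLM call might carry, using names from the OpenTelemetry GenAI semantic conventions. It is illustrative only: the helper function and its values are not from any Dynatrace or OpenLLMetry example, and the real instrumentation sets these attributes on actual spans rather than plain dicts.

```python
# Hedged sketch: attribute keys follow the OTel GenAI semantic
# conventions; the helper itself is hypothetical.
def llm_span_attributes(model: str, in_tokens: int, out_tokens: int) -> dict:
    """Attributes an instrumented LLM call might attach to its span."""
    return {
        "gen_ai.system": "openai",              # provider identifier
        "gen_ai.request.model": model,          # requested model name
        "gen_ai.usage.input_tokens": in_tokens,
        "gen_ai.usage.output_tokens": out_tokens,
    }


attrs = llm_span_attributes("gpt-4o-mini", 120, 48)
```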
Yes.
You can create custom alerts directly in AI Observability. Alerts include contextual, pre-filled fields, embedded notifications (Slack/email), and links back to the source for investigation.
You also get a centralized view to review all related alerts that you and your teams have created.
You can continue using your existing ready-made dashboards, which remain available until AI Observability is generally available.
The dashboards are tagged for discoverability and include navigation that redirects into AI Observability for deeper, contextual analysis.
For a more integrated and centralized workflow (instrumentation guidance, service health, proactive alerts, and in-context prompt/log/trace debugging), we recommend using AI Observability as your primary entry point, because it provides a dedicated end-to-end experience purpose-built for AI workloads.
Dashboards alone can be limited or inconsistent for GenAI‑specific workflows.
Some out-of-the-box AI Observability dashboards use span queries, which consume under Traces powered by Grail – Query. This applies even if AI Observability isn't fully configured yet or the dashboards show no data.
To control your trace consumption, you can:
Note that we're currently working on reducing costs for both AI Observability and Dashboards by moving away from span queries.