Frequently asked questions about AI Observability and Dynatrace

  • Latest Dynatrace
  • Overview
  • 8-min read
  • Published Mar 12, 2026

This FAQ page provides answers to your most common questions about how AI Observability works within Dynatrace.

How do I instrument my services and send data?

Use the AI Observability onboarding page to configure OpenTelemetry/OpenLLMetry and to define permissions, sampling strategies, and tokens. It includes scenario‑based guidance (for example, "data not in", "no access") and validation tools to confirm successful ingestion.
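As a rough sketch of what the onboarding flow sets up, a Python service using the OpenTelemetry SDK typically points a span exporter at the Dynatrace OTLP endpoint. The `<env-id>`, `<api-token>`, and service name below are placeholders; the onboarding page provides the exact endpoint and token for your environment.

```python
# Sketch: export OpenTelemetry spans to a Dynatrace environment.
# <env-id> and <api-token> are placeholders, not real values.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://<env-id>.live.dynatrace.com/api/v2/otlp/v1/traces",
    headers={"Authorization": "Api-Token <api-token>"},
)
provider = TracerProvider(
    resource=Resource.create({"service.name": "my-llm-service"})
)
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```

After this setup, spans emitted by instrumented LLM calls are batched and sent to your environment, where the validation tools on the onboarding page can confirm ingestion.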

Which metrics are available?

Within AI Observability, use the Service Health page to see the following metrics:

  • Volume
  • Errors
  • Success/failure rates
  • Latency/traffic by model
  • Tokens (input/completion/total)
  • Cost (per model, input/output totals)
  • Guardrails (invocation counts and provider‑specific dimensions)

Views are customizable, and you can add DQL‑based metrics.
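To illustrate what a custom DQL‑based metric could look like, the sketch below aggregates token usage per model. The metric and dimension names follow the OpenTelemetry GenAI semantic conventions and are illustrative assumptions; use whatever keys your instrumentation actually emits.

```
// Illustrative: token usage per model over time.
// Metric/dimension names are assumptions based on OTel GenAI conventions.
timeseries tokens = sum(gen_ai.client.token.usage),
  by: { gen_ai.request.model }
```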

How do guardrail metrics work? Who provides guardrail runtime protection?

Dynatrace does not execute or enforce guardrail runtime protection. Guardrail enforcement happens at the model/provider level during inference (for example, Amazon Bedrock Guardrails). Providers then expose results (such as whether a guardrail intervened) via response payloads and/or provider metrics.

AI Observability instruments these provider signals and surfaces them for monitoring and analysis. This includes pre-filled alert creation flows and real‑time monitoring with customizable views.

You need to configure guardrails with your model provider. Once configured, Dynatrace ingests and displays the resulting guardrail outcomes and metrics so you can observe behavior and trends centrally.

You can build on top of these guardrail metrics in Dynatrace just like with other AI observability signals: create custom alerts and notifications, add tiles to dashboards, and trigger workflows.
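As a hedged example, an alert or dashboard tile built on these signals could be driven by a DQL query over guardrail spans. The attribute names below (`guardrail.intervened`, `gen_ai.request.model`) are hypothetical; substitute the dimensions your provider integration actually exposes.

```
// Illustrative: count guardrail interventions per model.
// Attribute names are hypothetical placeholders.
fetch spans
| filter isNotNull(guardrail.intervened)
| summarize interventions = countIf(guardrail.intervened == true),
    by: { gen_ai.request.model }
```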

Does Dynatrace support agent frameworks and protocols?

Yes. OpenTelemetry and OpenLLMetry integrations cover:

  • Amazon Bedrock Strands and AgentCore
  • OpenAI agents
  • Gemini agents
  • SDKs such as Google ADK, AWS Strands, and AgentCore
  • Protocol support for MCP to monitor multi‑agent communication.

For examples in GitHub, see Dynatrace AI Agent instrumentation examples.
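For instance, with the OpenLLMetry SDK (`traceloop-sdk` in Python), enabling auto‑instrumentation for supported LLM and agent libraries is typically a one‑line initialization; the app name below is an illustrative placeholder.

```python
# Sketch: initialize OpenLLMetry so supported LLM/agent libraries
# are auto-instrumented; spans are exported via OpenTelemetry.
from traceloop.sdk import Traceloop

Traceloop.init(app_name="bedrock-agent-demo")
```

Point the underlying OpenTelemetry exporter at your Dynatrace endpoint (as described on the onboarding page) so the agent spans land in AI Observability.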

Can I configure proactive alerts?

Yes. You can create custom alerts directly in AI Observability. Alerts include context through pre‑filled fields, embedded notifications (Slack/email), and links back for investigation. You also get a centralized view to review all related alerts that you and your teams have created.

What if I am already using ready-made dashboards?

You can continue using your existing ready‑made dashboards, which remain available until AI Observability is generally available. The dashboards are tagged for discoverability and include navigation that redirects into AI Observability for deeper, contextual analysis. For a more integrated and centralized workflow (instrumentation guidance, service health, proactive alerts, and in‑context prompt/log/trace debugging), we recommend AI Observability as your primary entry point: it provides a dedicated end‑to‑end experience purpose‑built for AI workloads, whereas dashboards alone can be limited or inconsistent for GenAI‑specific workflows.

Does AI Observability generate additional query cost?

Some out-of-the-box AI Observability dashboards use span queries, which consume Traces powered by Grail - Query. This applies even if AI Observability isn’t fully configured yet or the dashboards show no data.

To control your trace consumption, you can:

  • Use the sampling variable on these dashboards (where available) to reduce the number of spans queried.
  • Restrict access to exploratory dashboards to the users who need them.
  • Prefer metrics-based tiles and views when possible.
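To illustrate the sampling idea at the query level, DQL supports a `samplingRatio` parameter on `fetch` that reads only a fraction of the stored records. Support varies by record type, so treat the spans example below as an assumption and check which record types allow sampling in your environment.

```
// Illustrative: read roughly 1 in 100 records to cut query consumption.
// samplingRatio support for spans is an assumption; verify for your tenant.
fetch spans, samplingRatio: 100
| summarize count()
```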

Note that we're currently working on reducing costs for both AI Observability and Dashboards by moving away from span queries.

Related tags
AI Observability