OpenTelemetry provides a vendor-neutral standard for collecting traces and metrics from AI applications. With the GenAI semantic conventions, OpenTelemetry defines a consistent way to capture AI-specific attributes such as model names, token counts, latency, and cost metrics across different LLM providers.
Dynatrace fully supports OpenTelemetry, allowing you to send AI observability data directly to your Dynatrace environment using the OTLP API endpoints. This approach gives you flexibility to use any OpenTelemetry-compatible instrumentation library or build custom instrumentation.
This getting started guide is for:
By following this guide, you will learn:
To complete this guide, you need:
A running AI app or AI demo app.
Dynatrace SaaS with a Dynatrace Platform Subscription (DPS) license that has Traces powered by Grail, Metrics powered by Grail, and Log Analytics enabled.
OTLP ingestion enabled, see OpenTelemetry and Dynatrace.
An OpenAI platform API key.
A Dynatrace API token with the following scopes (see Platform tokens):
metrics.ingest
logs.ingest
openTelemetryTrace.ingest
It's helpful to have some basic knowledge of:
You can use Python or Node.js to instrument your AI application directly with the OpenTelemetry SDK.
Install the OpenTelemetry SDK and the OTLP HTTP exporter. Run the following command in your terminal.
```shell
pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
```
Optional: You can instead use OpenTelemetry auto-instrumentation, which instruments supported libraries without code changes.

```shell
pip install opentelemetry-distro opentelemetry-exporter-otlp
opentelemetry-bootstrap -a install
```
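With auto-instrumentation, exporter settings come from the standard OTLP environment variables rather than from code. A minimal sketch of launching an app this way; values in angle brackets are placeholders, and `app.py` is an assumed entry-point name:

```shell
# Configure the OTLP exporter via standard OpenTelemetry environment variables.
export OTEL_SERVICE_NAME=<your-service>
# Base OTLP endpoint; the SDK appends the signal path (e.g. /v1/traces) itself.
export OTEL_EXPORTER_OTLP_ENDPOINT=https://<YOUR_ENV>.live.dynatrace.com/api/v2/otlp
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Api-Token <YOUR_DT_API_TOKEN>"
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf

# Run your application under the auto-instrumentation wrapper.
opentelemetry-instrument python app.py
```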
Initialize the OpenTelemetry SDK. Add the following code at the beginning of your main file.
```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

resource = Resource.create({"service.name": "<your-service>"})
provider = TracerProvider(resource=resource)
trace.set_tracer_provider(provider)

exporter = OTLPSpanExporter(
    endpoint="https://<YOUR_ENV>.live.dynatrace.com/api/v2/otlp/v1/traces",
    headers={"Authorization": "Api-Token <YOUR_DT_API_TOKEN>"},
)
provider.add_span_processor(BatchSpanProcessor(exporter))
tracer = trace.get_tracer(__name__)
```
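The initialization code above hardcodes the endpoint and token; in practice you would usually read them from the environment so credentials stay out of source control. A minimal sketch, assuming hypothetical `DT_ENV_ID` and `DT_API_TOKEN` environment variable names:

```python
import os

def dynatrace_otlp_config() -> dict:
    """Build OTLPSpanExporter keyword arguments from environment variables.

    DT_ENV_ID and DT_API_TOKEN are illustrative names chosen for this
    example -- use whatever naming fits your deployment.
    """
    env_id = os.environ["DT_ENV_ID"]    # the <YOUR_ENV> part of the endpoint URL
    token = os.environ["DT_API_TOKEN"]  # token with the openTelemetryTrace.ingest scope
    return {
        "endpoint": f"https://{env_id}.live.dynatrace.com/api/v2/otlp/v1/traces",
        "headers": {"Authorization": f"Api-Token {token}"},
    }
```

You can then construct the exporter with `OTLPSpanExporter(**dynatrace_otlp_config())` instead of inlining the values.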
The OpenTelemetry GenAI semantic conventions standardize the attributes captured for generative AI operations. To make sure that your telemetry data follows these conventions, add the following code to your application.
For more information about semantic conventions, see GenAI semantic conventions.
```python
from opentelemetry.trace import SpanKind

with tracer.start_as_current_span("chat gpt-4", kind=SpanKind.CLIENT) as span:
    span.set_attribute("gen_ai.operation.name", "chat")
    span.set_attribute("gen_ai.provider.name", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.request.temperature", 0.7)

    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=messages,
        temperature=0.7,
    )

    span.set_attribute("gen_ai.response.model", response.model)
    span.set_attribute("gen_ai.response.id", response.id)
    span.set_attribute("gen_ai.usage.input_tokens", response.usage.prompt_tokens)
    span.set_attribute("gen_ai.usage.output_tokens", response.usage.completion_tokens)
```
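The token usage attributes captured above are what make cost tracking possible. As a sketch of how you might derive a cost figure from them; the per-1K-token prices and the helper name below are illustrative placeholders, not published rates:

```python
# Illustrative prices per 1,000 tokens in USD -- placeholders only;
# substitute your provider's current pricing.
PRICES_PER_1K_TOKENS = {
    "gpt-4": {"input": 0.03, "output": 0.06},
}

def estimated_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request from its token usage."""
    prices = PRICES_PER_1K_TOKENS[model]
    return (
        (input_tokens / 1000) * prices["input"]
        + (output_tokens / 1000) * prices["output"]
    )
```

The result could be recorded on the span as a custom attribute, keeping the estimate alongside the token counts it was derived from.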
Now that you've set up your AI app to send observability data directly to Dynatrace, you can: