OpenLLMetry captures and transmits AI model or agent KPIs to the Dynatrace environment, empowering businesses with unparalleled insights into their AI deployment landscape.
The collected data seamlessly integrates with the Dynatrace environment, so users can analyze LLM metrics, spans, and logs in the context of all traces and code-level information.

This getting started guide shows you how to instrument an AI application with OpenLLMetry and analyze the resulting traces, metrics, and logs in Dynatrace. To follow along, you need:
- A running AI app or AI demo app.
- Dynatrace SaaS with a Dynatrace Platform Subscription (DPS) license that has Traces powered by Grail, Metrics powered by Grail, and Log Analytics enabled.
- OTLP ingestion enabled; see OpenTelemetry and Dynatrace.
- An OpenAI platform API key.
- A Dynatrace API token with the following scopes; see Platform tokens.
  - `metrics.ingest`
  - `logs.ingest`
  - `openTelemetryTrace.ingest`

It's also helpful to have some basic knowledge of OpenTelemetry.
OpenLLMetry supports AI model observability by capturing and normalizing key performance indicators (KPIs) from diverse AI frameworks. Utilizing an additional OpenTelemetry SDK layer, this data seamlessly flows into the Dynatrace environment, offering advanced analytics and a holistic view of the AI deployment stack.
OpenTelemetry's auto-instrumentation provides valuable insights into spans and basic resource attributes. However, it doesn't capture specific KPIs crucial for AI models, such as model name, version, prompt and completion tokens, and temperature parameter.
OpenLLMetry bridges this gap by supporting popular AI frameworks like OpenAI, HuggingFace, Pinecone, and LangChain. By standardizing the collection of essential model KPIs through OpenTelemetry, it ensures comprehensive observability. The open-source OpenLLMetry SDK, built on top of OpenTelemetry, enables thorough insights into your LLM application.
In this example, we demonstrate the implementation of a customizable LLM using OpenAI's cloud service and the LangChain framework. This compact example showcases the integration of LangChain to construct a template layer for the LLM model.
Once you've configured your application, you can use Dynatrace to analyze its traces, metrics, and logs in context.
OpenLLMetry builds on OpenTelemetry auto-instrumentation to collect traces and metrics from your AI workloads.
OpenLLMetry provides auto-instrumentation for popular AI frameworks and automatically collects GenAI semantic conventions. You can use either Python or Node.js.
Currently, OpenLLMetry for Node.js doesn't support metrics.
Install the OpenLLMetry SDK. Run the following command in your terminal.
pip install traceloop-sdk
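Before wiring the SDK into your app, you can optionally confirm the package imports cleanly. A minimal sketch, assuming only that `traceloop` is the top-level package installed by `traceloop-sdk`:

```python
import importlib.util

# Optional sanity check: verify the traceloop-sdk package is importable
# in the current Python environment.
installed = importlib.util.find_spec("traceloop") is not None
print("traceloop-sdk is installed" if installed else "traceloop-sdk is not installed")
```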
Initialize the tracer. Add the following code at the beginning of your main file.
```python
from traceloop.sdk import Traceloop

headers = {"Authorization": "Api-Token <YOUR_DT_API_TOKEN>"}

Traceloop.init(
    app_name="<your-service>",
    api_endpoint="https://<YOUR_ENV>.live.dynatrace.com/api/v2/otlp",  # or OpenTelemetry Collector URL
    headers=headers,
)
```
Replace the placeholders with relevant values:

- `<YOUR_ENV>`: Your Dynatrace environment. For more information, see Base URLs.
- `<YOUR_DT_API_TOKEN>`: The token that you created in the previous step.
- `<your-service>`: Your app's name.

You can copy-paste the example code block below directly into your application's code. Just replace the placeholders with the relevant values.
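Rather than hard-coding the token, you might read these values from environment variables. A minimal sketch, assuming hypothetical variable names `DT_ENV_ID` and `DT_API_TOKEN` (any names work, as long as you export them before starting the app):

```python
import os

# Hypothetical environment variable names; export them before starting the app.
env_id = os.environ.get("DT_ENV_ID", "<YOUR_ENV>")
token = os.environ.get("DT_API_TOKEN", "<YOUR_DT_API_TOKEN>")

# Assemble the same values the Traceloop.init() call expects.
api_endpoint = f"https://{env_id}.live.dynatrace.com/api/v2/otlp"
headers = {"Authorization": f"Api-Token {token}"}

print(api_endpoint)
```

Pass `api_endpoint` and `headers` to `Traceloop.init()` as shown earlier.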
The primary objective of this AI model is to provide a concise executive summary of a company's business purpose. LangChain adds a layer of flexibility, enabling users to dynamically change the company and define the maximum length of the AI-generated response.
The comprehensive approach used here showcases the practical implementation of an LLM model, and emphasizes the importance of configuring Dynatrace OpenTelemetry for efficient data export and analysis. The result is a robust system that you can use to assess AI model performance.
```python
import os

import openai
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow, task

os.environ['OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE'] = "delta"

headers = {"Authorization": "Api-Token <YOUR_DT_API_TOKEN>"}

Traceloop.init(
    app_name="<your-service>",
    api_endpoint="https://<YOUR_ENV>.live.dynatrace.com/api/v2/otlp",
    headers=headers,
    disable_batch=True,
)

openai.api_key = os.getenv("OPENAI_API_KEY")

@task(name="add_prompt_context")
def add_prompt_context():
    prompt = ChatPromptTemplate.from_template(
        "explain the business of company {company} in a max of {length} words"
    )
    model = ChatOpenAI()
    chain = prompt | model
    return chain

@task(name="prep_prompt_chain")
def prep_prompt_chain():
    return add_prompt_context()

@workflow(name="ask_question")
def prompt_question():
    chain = prep_prompt_chain()
    return chain.invoke({"company": "dynatrace", "length": 50})

if __name__ == "__main__":
    print(prompt_question())
```
You can also point the OTLP endpoint to your OpenTelemetry Collector or to any ActiveGate endpoint.
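As a sketch of what those alternative endpoints can look like — the host names are placeholders, and the ActiveGate port 9999 with the `/e/<environment-id>` path prefix is an assumption based on Dynatrace's usual API URL pattern, so check your own deployment:

```python
# Assumed endpoint shapes; replace hosts and environment IDs with your own.
saas_endpoint = "https://abc12345.live.dynatrace.com/api/v2/otlp"
collector_endpoint = "http://localhost:4318"  # default OTLP/HTTP port of a local Collector
activegate_endpoint = "https://my-activegate.example.com:9999/e/abc12345/api/v2/otlp"

for endpoint in (saas_endpoint, collector_endpoint, activegate_endpoint):
    print(endpoint)
```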
For more information, see the upstream documentation at Sampling.
Run your AI model and ask it about the company Dynatrace.
The code block below shows an example output with the resulting response of at most 50 words.
```
> python chaining.py
Traceloop exporting traces to https://<MY_ENV>.live.dynatrace.com/api/v2/otlp, authenticating with custom headers
content='Dynatrace is a software intelligence company that provides monitoring and analytics solutions for modern cloud environments. They offer a platform that helps businesses optimize their software performance, improve customer experience, and accelerate digital transformation by leveraging AI-driven insights and automation.'
```
To see traceloop traces related to the AI model that we just configured, go to Distributed Tracing and search for the ask_question workflow.
Spans captured by OpenLLMetry automatically display vital details, including:

- The model name, for example gpt-4o-mini.
- The temperature parameter, for example 0.7.
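To illustrate, these details arrive as span attributes following the OpenTelemetry GenAI semantic conventions. The keys and values below are examples only and may differ between OpenLLMetry and convention versions:

```python
# Illustrative span attributes; keys and values are examples, not guaranteed
# to match your OpenLLMetry version exactly.
span_attributes = {
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-4o-mini",
    "gen_ai.request.temperature": 0.7,
    "gen_ai.usage.prompt_tokens": 42,
    "gen_ai.usage.completion_tokens": 50,
}

for key, value in span_attributes.items():
    print(f"{key}={value}")
```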
Now that you've set up your AI app to send observability data directly to Dynatrace, you can analyze it alongside the rest of your observability data.