The Dynatrace full-stack observability platform combined with Traceloop's OpenLLMetry OpenTelemetry SDK can seamlessly provide comprehensive insights into Large Language Models (LLMs) in production environments. By observing AI models, businesses can make informed decisions, optimize performance, and ensure compliance with emerging AI regulations.
OpenLLMetry supports AI model observability by capturing and normalizing key performance indicators (KPIs) from diverse AI frameworks. Through an additional OpenTelemetry SDK layer, this data flows seamlessly into the Dynatrace environment, offering advanced analytics and a holistic view of the AI deployment stack.
Given the prevalence of Python in AI model development, OpenTelemetry serves as a robust standard for collecting observability data there, including traces, metrics, and logs. While OpenTelemetry's auto-instrumentation provides valuable insights into spans and basic resource attributes, it falls short of capturing the KPIs crucial for AI models, such as the model name and version, prompt and completion token counts, and the temperature parameter.
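To make the gap concrete, here is a minimal sketch of what capturing these KPIs looks like with the plain OpenTelemetry API: every attribute must be set by hand around each model call. The attribute names and the helper function are illustrative assumptions, not an official semantic convention; OpenLLMetry records equivalent attributes automatically.

from opentelemetry import trace

tracer = trace.get_tracer("manual-llm-instrumentation")

def fake_chat_completion(prompt: str) -> dict:
    # Stand-in for a real model call so the sketch stays self-contained.
    return {"text": "stub answer", "prompt_tokens": len(prompt.split()), "completion_tokens": 2}

def call_llm_with_manual_kpis(prompt: str) -> str:
    # Without OpenLLMetry, each LLM-specific KPI must be attached by hand.
    with tracer.start_as_current_span("openai.chat") as span:
        span.set_attribute("llm.request.model", "gpt-3.5-turbo")  # illustrative attribute name
        span.set_attribute("llm.temperature", 0.7)                # illustrative attribute name
        response = fake_chat_completion(prompt)
        span.set_attribute("llm.usage.prompt_tokens", response["prompt_tokens"])
        span.set_attribute("llm.usage.completion_tokens", response["completion_tokens"])
        return response["text"]

print(call_llm_with_manual_kpis("explain the business of company dynatrace"))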
OpenLLMetry bridges this gap by supporting popular AI frameworks like OpenAI, HuggingFace, Pinecone, and LangChain. By standardizing the collection of essential model KPIs through OpenTelemetry, it ensures comprehensive observability. The open-source OpenLLMetry SDK, built on top of OpenTelemetry, enables thorough insights into your LLM application.
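Getting started takes a single package install and one initialization call; here is a minimal sketch (the app name is arbitrary):

# pip install traceloop-sdk
from traceloop.sdk import Traceloop

# One call instruments the supported LLM frameworks; the export target
# is taken from the TRACELOOP_* environment variables described below.
Traceloop.init(app_name="my-llm-app")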
Because the collected data seamlessly integrates with the Dynatrace environment, users can analyze LLM metrics, spans, and logs in the context of all traces and code-level information. Maintained under the Apache 2.0 license by Traceloop, OpenLLMetry becomes a valuable asset for product owners, providing a transparent view of AI model performance.
Explore the high-level architecture illustrating how OpenLLMetry captures and transmits AI model KPIs to the Dynatrace environment, empowering businesses with unparalleled insights into their AI deployment landscape.
The use case below shows how to collect insights about an OpenAI LLM application built on top of the LangChain framework. In this example, we implement a customizable LLM using OpenAI's cloud service, with LangChain providing a prompt-template layer on top of the model.
The primary objective of the AI model is to provide a concise executive summary of a company's business purpose. LangChain adds a layer of flexibility, enabling users to dynamically alter the company and define the maximum length of the AI-generated response.
To export and analyze the collected data, configure Dynatrace OpenTelemetry through two environment variables. First, generate a Dynatrace ingest API token with the API v2 permission scopes openTelemetryTrace.ingest, metrics.ingest, and logs.ingest. Then, set the environment variables with the following parameters.
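Ingest tokens are typically generated in the Dynatrace web UI under Access tokens, but they can also be created programmatically. The sketch below assumes the Dynatrace Tokens API v2 endpoint (/api/v2/apiTokens) and an existing admin token with the apiTokens.write scope; treat both as assumptions to verify against your environment.

import os
import requests  # third-party HTTP client: pip install requests

# Assumptions for this sketch: the Tokens API v2 endpoint /api/v2/apiTokens
# and an admin token carrying the apiTokens.write scope.
DT_ENV = os.environ["DT_ENV"]                # e.g., your environment ID
ADMIN_TOKEN = os.environ["DT_ADMIN_TOKEN"]   # token with apiTokens.write

resp = requests.post(
    f"https://{DT_ENV}.live.dynatrace.com/api/v2/apiTokens",
    headers={"Authorization": f"Api-Token {ADMIN_TOKEN}"},
    json={
        "name": "openllmetry-ingest",
        "scopes": ["openTelemetryTrace.ingest", "metrics.ingest", "logs.ingest"],
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["token"])  # the newly created ingest token value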
TRACELOOP_BASE_URL=https://<YOUR_ENV>.live.dynatrace.com/api/v2/otlp
TRACELOOP_HEADERS=Authorization=Api-Token%20<YOUR_DYNATRACE_ACCESS_TOKEN>
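Alternatively, if you prefer to keep this configuration in code rather than in the shell, both variables can be set through os.environ; a minimal sketch with placeholder values:

import os

# Placeholders: substitute your environment ID and the ingest token created earlier.
os.environ["TRACELOOP_BASE_URL"] = "https://<YOUR_ENV>.live.dynatrace.com/api/v2/otlp"
os.environ["TRACELOOP_HEADERS"] = "Authorization=Api-Token%20<YOUR_DYNATRACE_ACCESS_TOKEN>"

from traceloop.sdk import Traceloop

# Set the variables before init so the SDK picks them up when it builds the exporter.
Traceloop.init(app_name="openai-obs", disable_batch=True)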
The process starts with the initialization of the Traceloop OpenLLMetry SDK. Subsequently, we annotate pivotal model tasks to enhance observability, as shown in the provided code snippet.
This approach not only demonstrates a practical LLM implementation but also shows how configuring Dynatrace OpenTelemetry for data export and analysis gives businesses a robust system for evaluating AI model performance.
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow, task
import os
import openai
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

# Initialize the OpenLLMetry SDK; disable_batch exports each span immediately.
Traceloop.init(app_name="openai-obs", disable_batch=True)
openai.api_key = os.getenv("OPENAI_API_KEY")

@task(name="add_prompt_context")
def add_prompt_context():
    # Build a prompt template whose company and length are filled in at run time.
    prompt = ChatPromptTemplate.from_template(
        "explain the business of company {company} in a max of {length} words"
    )
    model = ChatOpenAI()
    # Pipe the template into the chat model to form a runnable chain.
    chain = prompt | model
    return chain

@task(name="prep_prompt_chain")
def prep_prompt_chain():
    return add_prompt_context()

@workflow(name="ask_question")
def prompt_question():
    chain = prep_prompt_chain()
    return chain.invoke({"company": "dynatrace", "length": 50})

if __name__ == "__main__":
    print(prompt_question())
Next, we run our AI model and ask about the company Dynatrace, yielding the response below (the prompt caps it at a maximum of 50 words).
>python chaining.py
Traceloop exporting traces to https://<MY_ENV>.live.dynatrace.com/api/v2/otlp, authenticating with custom headers
content='Dynatrace is a software intelligence company that provides monitoring and analytics solutions for modern cloud environments. They offer a platform that helps businesses optimize their software performance, improve customer experience, and accelerate digital transformation by leveraging AI-driven insights and automation.'
In the Dynatrace environment, you can track your AI model in real time, examine its model attributes, and assess the reliability and latency of each specific LangChain task, as demonstrated below.
The span captured by Traceloop automatically displays vital details, including the model used by our LangChain application ('gpt-3.5-turbo'), the invocation with a temperature parameter of 0.7, and the 53 completion tokens consumed by this individual request.
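Token counts like these map directly to spend, which is one reason these KPIs matter to product owners. The short sketch below turns prompt and completion token counts into a cost estimate; the per-1K-token prices and the prompt-token count are illustrative placeholders, not actual OpenAI rates or values from this trace.

# Illustrative only: placeholder USD prices per 1K tokens, not current OpenAI rates.
PRICE_PER_1K = {
    "gpt-3.5-turbo": {"prompt": 0.0015, "completion": 0.002},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    # Convert the token KPIs captured on the span into an estimated request cost.
    rates = PRICE_PER_1K[model]
    return (prompt_tokens / 1000) * rates["prompt"] + (completion_tokens / 1000) * rates["completion"]

# 53 completion tokens observed above; the prompt-token count here is assumed.
print(f"${estimate_cost('gpt-3.5-turbo', prompt_tokens=24, completion_tokens=53):.6f}")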