Ollama

Ollama is a platform that allows users to run and manage AI models locally on their own machines. It provides tools for deploying, interacting with, and fine-tuning various AI models, particularly large language models.

By monitoring your Ollama models with Dynatrace, you can record prompts and completions, track errors, capture performance metrics for your AI services, and more.

Ollama Observability

Spans

The following attributes are available on GenAI spans.

| Attribute | Type | Description |
| --- | --- | --- |
| gen_ai.completion.0.content | string | The full response received from the GenAI model. |
| gen_ai.completion.0.role | string | The role used by the GenAI model. |
| gen_ai.prompt.0.content | string | The full prompt sent to the GenAI model. |
| gen_ai.prompt.0.role | string | The role setting for the GenAI request. |
| gen_ai.request.model | string | The name of the GenAI model a request is being made to. |
| gen_ai.response.model | string | The name of the model that generated the response. |
| gen_ai.system | string | The GenAI product as identified by the client or server instrumentation. |
| gen_ai.usage.completion_tokens | integer | The number of tokens used in the GenAI response (completion). |
| gen_ai.usage.prompt_tokens | integer | The number of tokens used in the GenAI input (prompt). |
| llm.request.type | string | The type of the operation being performed. |
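To illustrate how these attributes fit together, the sketch below assembles the attribute map for a single Ollama prompt/completion exchange. It is a minimal, self-contained example: the helper function, role values, and sample prompt are illustrative assumptions, and a real instrumentation library would capture these values automatically from the actual Ollama call rather than build the dictionary by hand.

```python
# Sketch: assembling the GenAI span attributes listed above for one
# Ollama chat exchange. Values here are illustrative; in practice an
# instrumentation library records them from the live request/response.

def genai_span_attributes(prompt: str, completion: str, model: str,
                          prompt_tokens: int, completion_tokens: int) -> dict:
    """Build the attribute map for one prompt/completion pair.

    The `.0.` segment indexes the first message in the exchange;
    multi-message conversations would continue with `.1.`, `.2.`, etc.
    """
    return {
        "gen_ai.system": "ollama",                 # GenAI product
        "gen_ai.request.model": model,             # model requested
        "gen_ai.response.model": model,            # model that responded
        "gen_ai.prompt.0.role": "user",            # role of the prompt
        "gen_ai.prompt.0.content": prompt,         # full prompt text
        "gen_ai.completion.0.role": "assistant",   # role of the reply
        "gen_ai.completion.0.content": completion, # full response text
        "gen_ai.usage.prompt_tokens": prompt_tokens,
        "gen_ai.usage.completion_tokens": completion_tokens,
        "llm.request.type": "chat",                # operation type
    }

attrs = genai_span_attributes(
    prompt="Why is the sky blue?",
    completion="Because of Rayleigh scattering.",
    model="llama3",
    prompt_tokens=6,
    completion_tokens=5,
)
print(attrs["gen_ai.request.model"])  # → llama3
```

Once attached to a span, these attributes let Dynatrace correlate prompt content, token usage, and model names across your AI service traces.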