Amazon Bedrock

Amazon Bedrock is a fully managed service that provides a single API to access and utilize various high-performing foundation models (FMs) from leading AI companies. It offers a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI practices.

By monitoring your Bedrock models with Dynatrace, you get cost analysis and forecast estimation via Davis AI, prompt and completion recording, error tracking, performance metrics for your AI services, and more.

Bedrock Observability

Configuration

Follow Ingest OpenTelemetry metrics to see how OpenTelemetry metrics are mapped to Dynatrace metric types.

Create a Dynatrace token

To create a Dynatrace token:

  1. In Dynatrace, go to Access Tokens.
    To find Access Tokens, press Ctrl/Cmd+K to search for and select Access Tokens.
  2. In Access Tokens, select Generate new token.
  3. Enter a Token name for your new token.
  4. Give your new token the required permissions. Search for and select all of the following scopes:
    • Ingest metrics (metrics.ingest)
    • Ingest logs (logs.ingest)
    • Ingest events (events.ingest)
    • Ingest OpenTelemetry traces (openTelemetryTrace.ingest)
    • Read metrics (metrics.read)
    • Write settings (settings.write)
  5. Select Generate token.
  6. Copy the generated token to the clipboard. Store the token in a password manager for future use.

    You can only access your token once upon creation. You can't reveal it afterward.
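Once you have the token, a common way to hand it to OpenTelemetry-instrumented applications is through the standard OTLP exporter environment variables. This is a minimal sketch; replace the environment ID placeholder with your own Dynatrace environment ID and the token placeholder with the token generated above.

```shell
# Point the OTLP exporter at your Dynatrace environment's OTLP endpoint.
# <your-environment-id> is a placeholder for your Dynatrace environment ID.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<your-environment-id>.live.dynatrace.com/api/v2/otlp"

# Authenticate with the access token created above (placeholder shown).
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Api-Token <your-token>"
```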

Instrumentation

Spans

The following attributes are available for GenAI Spans.

| Attribute | Type | Description |
|---|---|---|
| gen_ai.completion.0.content | string | The full response received from the GenAI model. |
| gen_ai.completion.0.finish_reason | string | The reason the model stopped generating tokens, corresponding to each generation received. |
| gen_ai.completion.0.role | string | The role used by the GenAI model. |
| gen_ai.prompt.0.content | string | The full prompt sent to the GenAI model. |
| gen_ai.prompt.0.role | string | The role setting for the GenAI request. |
| gen_ai.request.max_tokens | integer | The maximum number of tokens the model generates for a request. |
| gen_ai.request.model | string | The name of the GenAI model a request is being made to. |
| gen_ai.request.temperature | double | The temperature setting for the GenAI request. |
| gen_ai.response.model | string | The name of the model that generated the response. |
| gen_ai.system | string | The GenAI product as identified by the client or server instrumentation. |
| gen_ai.usage.completion_tokens | integer | The number of tokens used in the GenAI response (completion). |
| gen_ai.usage.prompt_tokens | integer | The number of tokens used in the GenAI input (prompt). |
| llm.request.type | string | The type of the operation being performed. |

Metrics

| Metric | Type | Unit | Description |
|---|---|---|---|
| gen_ai.client.operation.duration | histogram | s | The GenAI operation duration. |
| gen_ai.client.token.usage | histogram | none | The number of input and output tokens used. |