Ensure success with OpenTelemetry

  • Latest Dynatrace
  • Troubleshooting

Successfully implementing OpenTelemetry requires both reliable data export and proper visualization in Dynatrace. This page offers guidance on configuring and troubleshooting your OpenTelemetry implementation with Dynatrace.

Metrics for ingest monitoring

Dynatrace provides the following built-in metrics for the ingestion of OpenTelemetry signals. If data is missing, these metrics can help you analyze possible ingestion issues.

In Dynatrace Classic, ingest monitoring metrics are prefixed with dsfm: instead of dt.sfm.

Metrics for logs ingest


  • dt.sfm.active_gate.event_ingest.event_incoming_count: Number of ingested log records
  • dt.sfm.active_gate.event_ingest.drop_count: Number of dropped log records
  • dt.sfm.active_gate.event_ingest.event_otlp_size: Payload size of received log requests

Metrics for metrics ingest


  • dt.sfm.active_gate.metrics.ingest.otlp.datapoints.accepted: Number of accepted data points
  • dt.sfm.active_gate.metrics.ingest.otlp.datapoints.rejected: Number of rejected data points

Rejected metrics come with a reason dimension, which provides additional details on why a data point was rejected. In Dynatrace, you can filter, sort, and split by that dimension.

A typical rejection reason is metrics sent with cumulative aggregation temporality (Dynatrace requires delta temporality); in this case, reason indicates UNSUPPORTED_METRIC_TYPE_MONOTONIC_CUMULATIVE_SUM.
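The difference between the two temporalities can be sketched in plain Python. This helper is illustrative only and not part of any SDK:

```python
def cumulative_to_delta(samples):
    """Convert a cumulative counter series (totals since process start)
    into the delta series (change since the previous export) that
    Dynatrace expects."""
    deltas = []
    previous = 0
    for value in samples:
        deltas.append(value - previous)
        previous = value
    return deltas

# A cumulative series [3, 7, 7, 12] corresponds to the delta series [3, 4, 0, 5].
print(cumulative_to_delta([3, 7, 7, 12]))
```

In SDKs and exporters that support it, setting the environment variable OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE to delta is the usual way to switch exports to delta temporality.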

Metrics for traces ingest


  • dt.sfm.server.spans.received: Number of OpenTelemetry spans ingested via the OTLP trace endpoint (ActiveGate or OneAgent) that were successfully received by Dynatrace
  • dt.sfm.server.spans.persisted: Number of OpenTelemetry spans preserved by Dynatrace; only preserved spans are available for distributed traces analysis
  • dt.sfm.server.spans.dropped: Number of OpenTelemetry spans not preserved by Dynatrace because of the indicated reason (for example, span end time out of range)

Common issues and solutions

Setup issues

Connection issues

Authentication issues

  • Problem: HTTP 401/403 errors in ingestion metrics.
  • Solution: Verify API permissions and endpoint configurations.

Data format issues

  • Problem: High drop rates with format errors.
  • Solution: Validate OpenTelemetry data format compliance and attribute limits.

Configuration issues

  • Problem: No data appears despite successful exports.
  • Solution: Verify endpoint URLs, headers, and protocol configuration.
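Many of these setup problems can be caught before any data is sent by sanity-checking the standard OTLP environment variables. The helper below is hypothetical (not part of the Dynatrace or OpenTelemetry APIs); it assumes the common Dynatrace pattern of an Authorization=Api-Token <token> header:

```python
import os

def check_otlp_env(environ=os.environ):
    """Hypothetical startup check: flag obviously broken OTLP settings."""
    problems = []
    endpoint = environ.get("OTEL_EXPORTER_OTLP_ENDPOINT", "")
    headers = environ.get("OTEL_EXPORTER_OTLP_HEADERS", "")
    if not endpoint.startswith("https://"):
        problems.append("OTEL_EXPORTER_OTLP_ENDPOINT should be an https:// URL")
    if "Authorization=Api-Token" not in headers:
        # Dynatrace OTLP ingest typically authenticates with an
        # 'Authorization: Api-Token <token>' header.
        problems.append("OTEL_EXPORTER_OTLP_HEADERS is missing the Api-Token header")
    return problems

print(check_otlp_env({
    "OTEL_EXPORTER_OTLP_ENDPOINT": "https://example.live.dynatrace.com/api/v2/otlp",
    "OTEL_EXPORTER_OTLP_HEADERS": "Authorization=Api-Token <your-token>",
}))  # prints []
```

An empty result means only that the values look plausible; an HTTP 401/403 on ingest still points to an invalid or under-scoped token.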

Ingestion issues

Vertical topology

Signal-specific questions

Specific information about ingesting each signal type is available at:

  • Traces
  • Metrics

Best practices

Use metric dimensions

Dimensions are used in Dynatrace to help distinguish what is being measured in a specific data point.

In OpenTelemetry, dimensions are called attributes.

For example, if you're measuring the number of requests an endpoint has received, you can use dimensions to split that metric into requests that went through (status code 200) and requests that failed (status code 500).

Your dimensions should have descriptive, readable names and carry information that is genuinely useful for filtering and splitting.
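To illustrate how attribute values fan out into separate series, here is a plain-Python sketch; the Counter stand-in is illustrative, not the OpenTelemetry API, and the http.response.status_code attribute name is one assumed example:

```python
from collections import Counter

series = Counter()

def record_request(status_code):
    # Each distinct attribute value creates its own series; in the
    # OpenTelemetry metrics API this would be roughly
    # counter.add(1, attributes={"http.response.status_code": status_code}).
    series[("http.response.status_code", status_code)] += 1

record_request(200)
record_request(200)
record_request(500)

# Dynatrace can now filter, sort, or split on the status code dimension.
print(series[("http.response.status_code", 200)])  # prints 2
```

The single "requests" measurement thus stays one metric, while the dimension lets you separate successful from failed requests at query time.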

Compression

Dynatrace recommends that you enable gzip compression on your OTLP exporters.

By default, OTLP exporters do not compress data; compression can be configured through the following environment variables:

  • OTEL_EXPORTER_OTLP_COMPRESSION
  • OTEL_EXPORTER_OTLP_TRACES_COMPRESSION
  • OTEL_EXPORTER_OTLP_METRICS_COMPRESSION
  • OTEL_EXPORTER_OTLP_LOGS_COMPRESSION

Acceptable values are none or gzip.
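The gain comes from how repetitive telemetry payloads are: resource and scope attributes recur across every record. A stdlib-only illustration (the payload shape below is made up, not real OTLP):

```python
import gzip
import json

# Fabricated, OTLP-like payload: the same attribute keys repeat per record.
records = [
    {"name": "request.count",
     "attributes": {"service.name": "checkout", "host.name": "web-1"},
     "value": i}
    for i in range(100)
]
payload = json.dumps({"records": records}).encode("utf-8")
compressed = gzip.compress(payload)

# gzip typically shrinks such repetitive payloads dramatically.
print(f"{len(payload)} bytes -> {len(compressed)} bytes")
```

For example, setting OTEL_EXPORTER_OTLP_COMPRESSION=gzip applies this to every signal an exporter sends.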

Batching

If you use the OpenTelemetry Collector, we highly recommend that you use a batch processor.

Batching helps better compress the data and reduce the number of outgoing connections required to transmit data to Dynatrace.
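As a sketch, a minimal Collector configuration with a batch processor in the traces pipeline might look like this (the endpoint placeholder is illustrative; adjust it for your environment and add the required authentication headers):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  # Groups spans into batches before export; the defaults are usually a
  # reasonable starting point.
  batch:

exporters:
  otlphttp:
    endpoint: https://<your-environment-id>.live.dynatrace.com/api/v2/otlp

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```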

See the batch processor README in the OpenTelemetry Collector GitHub repository for more information.

Related tags
Application Observability