Receive Kafka Metrics

  • How-to guide
  • 3-min read
  • Published Nov 26, 2025

The following configuration example shows how to configure an OpenTelemetry Collector instance to scrape Kafka metrics via the kafkametrics receiver component and ingest them as OTLP requests into Dynatrace.

Prerequisites

To set up this configuration, ensure you have the following:

  • A running Kafka broker reachable from the Collector, with its address available in the BROKER_ADDRESS environment variable.
  • The DT_ENDPOINT and DT_API_TOKEN environment variables set to your Dynatrace API URL and API token.

See Collector Deployment and Collector Configuration for how to set up your Collector with the configuration below.

Demo configuration

receivers:
  kafkametrics:
    brokers: ["${env:BROKER_ADDRESS}"]
    scrapers:
      - brokers
      - topics
      - consumers

processors:
  cumulativetodelta:

exporters:
  otlphttp:
    endpoint: ${env:DT_ENDPOINT}
    headers:
      Authorization: "Api-Token ${env:DT_API_TOKEN}"

service:
  pipelines:
    metrics:
      receivers: [kafkametrics]
      processors: [cumulativetodelta]
      exporters: [otlphttp]

Components

This configuration uses the following components.

Receivers

Under receivers, we specify the kafkametrics receiver. We configure it to scrape metrics from the Kafka broker specified in the BROKER_ADDRESS environment variable. The receiver is set to collect metrics on brokers, topics, and consumers.
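The receiver also accepts optional tuning settings. A minimal sketch is shown below; collection_interval and protocol_version are standard kafkametrics options, but verify their names and defaults against the receiver documentation for your Collector version:

```yaml
receivers:
  kafkametrics:
    brokers: ["${env:BROKER_ADDRESS}"]
    # How often to scrape the broker (the receiver has a built-in default).
    collection_interval: 30s
    # Kafka protocol version the client negotiates with the broker.
    protocol_version: 2.0.0
    scrapers:
      - brokers
      - topics
      - consumers
```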

Processors

The cumulativetodelta processor is required to convert cumulative metrics (as reported by Kafka) into delta aggregation format, for compatibility with the Dynatrace metrics ingestion API.
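By default, the processor converts all cumulative metrics it sees. If you need to restrict it, the processor supports include/exclude filtering; the following is a sketch (the regexp below matching all kafka.* metrics is illustrative, and the include/match_type options should be checked against the processor documentation for your Collector version):

```yaml
processors:
  cumulativetodelta:
    # Only convert metrics whose names match the pattern;
    # all other metrics pass through unchanged.
    include:
      metrics:
        - "kafka\\..*"
      match_type: regexp
```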

Exporters

Under exporters, we specify the default otlphttp exporter and configure it with our Dynatrace API URL and the required authentication token.

For this purpose, we set the DT_ENDPOINT and DT_API_TOKEN environment variables and reference them in the configuration values for endpoint and Authorization.
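In a Kubernetes deployment, these variables are typically supplied through the Collector container spec. A sketch, assuming the token is stored in a Secret (the Secret and key names below are illustrative):

```yaml
# Fragment of a Collector container spec (names are illustrative).
env:
  - name: DT_ENDPOINT
    value: "https://{your-environment-id}.live.dynatrace.com/api/v2/otlp"
  - name: DT_API_TOKEN
    valueFrom:
      secretKeyRef:
        name: dynatrace-otlp-token
        key: api-token
```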

Service

Under service, we assemble our receiver, processor, and exporter components into a metrics pipeline. This pipeline:

  1. Scrapes metrics from Kafka.
  2. Converts cumulative metrics to delta metrics.
  3. Exports the data to Dynatrace.

Limits and limitations

Avoid data duplication

To avoid data duplication, make sure that only one OpenTelemetry Collector scrapes a given target (for example, Kafka broker or Prometheus endpoint).

If you run multiple collector replicas, configure each one with a different target. This prevents duplicate metrics and unnecessary ingest costs.
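For example, two replicas could each scrape a different broker by overriding the brokers list (the broker hostnames below are illustrative):

```yaml
# Replica A scrapes broker-1 only; replica B would use broker-2 instead.
receivers:
  kafkametrics:
    brokers: ["broker-1:9092"]
    scrapers:
      - brokers
      - topics
      - consumers
```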

For Prometheus-based scraping on Kubernetes, the Target Allocator can automatically distribute the Prometheus targets among a pool of Collectors.

Use of the cumulativetodelta processor

Many OpenTelemetry receivers, including the kafkametrics receiver, report cumulative metrics by default. Dynatrace requires delta metrics for proper visualization and analysis.

To convert cumulative metrics to delta metrics, include the cumulativetodelta processor in your metrics pipeline. We recommend using this processor even if you expect some of the metrics to already have delta temporality, as those will be forwarded without any extra processing.

Statefulness

The cumulativetodelta processor calculates deltas by remembering the previous value of each metric. The calculation is therefore only accurate if a given metric is continuously sent to the same Collector instance, and the processor may not work as expected in a deployment with multiple Collectors. When using this processor, it's best for the data source to send data to a single Collector. If you need to scale your Collectors while preserving processor state, use stateful scaling.
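One common pattern for stateful scaling is a two-tier layout: a first tier of stateless Collectors routes metrics through the loadbalancing exporter so that all data points for a given metric consistently reach the same second-tier Collector, which runs cumulativetodelta. A sketch of the first-tier exporter configuration (the hostnames are illustrative, and routing_key support for metrics should be verified for your Collector version):

```yaml
# First-tier Collector: no cumulativetodelta here; it only routes.
exporters:
  loadbalancing:
    # Send all data points of the same metric to the same backend,
    # so the second tier can compute deltas correctly.
    routing_key: metric
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      static:
        hostnames:
          - collector-tier2-0:4317
          - collector-tier2-1:4317
```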

Related tags
Application Observability