The following configuration example shows how to configure an OpenTelemetry Collector instance to scrape Kafka metrics via the kafkametrics receiver component and ingest them as OTLP requests into Dynatrace.
To set up this configuration, ensure you have the following:
The address of your Kafka broker, available to the Collector in the BROKER_ADDRESS environment variable.
For more information, see the Apache Kafka quickstart guide.
See Collector Deployment and Collector Configuration on how to set up your Collector with the configuration below.
```yaml
receivers:
  kafkametrics:
    brokers: ["${env:BROKER_ADDRESS}"]
    scrapers:
      - brokers
      - topics
      - consumers

processors:
  cumulativetodelta:

exporters:
  otlphttp:
    endpoint: ${env:DT_ENDPOINT}
    headers:
      Authorization: "Api-Token ${env:DT_API_TOKEN}"

service:
  pipelines:
    metrics:
      receivers: [kafkametrics]
      processors: [cumulativetodelta]
      exporters: [otlphttp]
```
In this configuration, we set up the following components.
Under receivers, we specify the kafkametrics receiver.
We configure it to scrape metrics from the Kafka broker specified in the BROKER_ADDRESS environment variable.
The receiver is set to collect metrics on brokers, topics, and consumers.
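Depending on your environment, the kafkametrics receiver accepts additional options beyond brokers and scrapers. The following is a minimal sketch with a few commonly used settings; the values are placeholders, and the availability of individual options depends on your Collector version:

```yaml
receivers:
  kafkametrics:
    brokers: ["${env:BROKER_ADDRESS}"]
    protocol_version: "2.0.0"      # Kafka protocol version used by the client (placeholder value)
    client_id: otel-collector      # client ID reported to the brokers (placeholder value)
    collection_interval: 30s       # scrape interval; the receiver's default is typically 1m
    scrapers:
      - brokers
      - topics
      - consumers
```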
The cumulativetodelta processor is required to convert the cumulative metrics reported by the kafkametrics receiver into delta aggregation temporality, for compatibility with the Dynatrace metrics ingestion API.
Under exporters, we specify the default otlphttp exporter and configure it with our Dynatrace API URL and the required authentication token.
For this purpose, we set the following two environment variables and reference them in the configuration values for endpoint and Authorization.
DT_ENDPOINT contains the base URL of the Dynatrace API endpoint (for example, https://{your-environment-id}.live.dynatrace.com/api/v2/otlp).
DT_API_TOKEN contains the API token.
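How you provide these environment variables depends on your deployment. As one hedged example, assuming the Collector runs as a Kubernetes Deployment, the API token could come from a Secret (the container name, image tag, and Secret name below are placeholders):

```yaml
# Excerpt from a hypothetical Kubernetes Deployment for the Collector.
spec:
  template:
    spec:
      containers:
        - name: otel-collector                                 # placeholder container name
          image: otel/opentelemetry-collector-contrib:latest   # kafkametrics ships in the contrib distribution
          env:
            - name: BROKER_ADDRESS
              value: "kafka-broker:9092"                       # placeholder broker address
            - name: DT_ENDPOINT
              value: "https://{your-environment-id}.live.dynatrace.com/api/v2/otlp"
            - name: DT_API_TOKEN
              valueFrom:
                secretKeyRef:
                  name: dynatrace-otlp-credentials             # placeholder Secret name
                  key: api-token                               # placeholder key
```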
Under service, we assemble our receiver, processor, and exporter components into a metrics pipeline, which scrapes the Kafka metrics with kafkametrics, converts them to delta temporality with cumulativetodelta, and sends them to Dynatrace with otlphttp.
To avoid data duplication, make sure that only one OpenTelemetry Collector scrapes a given target (for example, Kafka broker or Prometheus endpoint).
If you run multiple collector replicas, configure each one with a different target. This prevents duplicate metrics and unnecessary ingest costs.
The Target Allocator automatically distributes the Prometheus targets among a pool of Collectors.
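The Target Allocator is part of the OpenTelemetry Operator for Kubernetes and applies to targets scraped by the prometheus receiver. A minimal, hedged sketch of enabling it in an OpenTelemetryCollector resource might look like the following (the API version, resource name, and replica count are assumptions; the Collector configuration itself, including the prometheus receiver, goes under spec.config):

```yaml
apiVersion: opentelemetry.io/v1beta1   # assumption: an Operator version exposing the v1beta1 API
kind: OpenTelemetryCollector
metadata:
  name: otel-metrics                   # placeholder name
spec:
  mode: statefulset                    # statefulset mode is typically used with the Target Allocator
  replicas: 3                          # placeholder replica count
  targetAllocator:
    enabled: true                      # distributes Prometheus scrape targets across the replicas
  # config: <Collector configuration, including the prometheus receiver, goes here>
```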
cumulativetodelta processor
Many OpenTelemetry receivers, including the kafkametrics receiver, report cumulative metrics by default. Dynatrace requires delta metrics for proper visualization and analysis.
To convert cumulative metrics to delta metrics, include the cumulativetodelta processor in your metrics pipeline.
We recommend using this processor even if you expect some of the metrics to already have delta temporality, as those will be forwarded without any extra processing.
The cumulativetodelta processor calculates deltas by remembering the previous value of each metric. For this reason, the calculation is only accurate if a metric is continuously sent to the same instance of the Collector, and the processor may not work as expected in a deployment of multiple Collectors. When using this processor, it's best for the data source to send data to a single Collector. If you need to scale your Collectors while preserving processor state, use stateful scaling.
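If you only need the conversion for a subset of metrics, the processor can be scoped with include or exclude rules; metrics outside the selection pass through unchanged. A minimal sketch, assuming your Collector version supports these options (the metric names are illustrative):

```yaml
processors:
  cumulativetodelta:
    include:
      metrics:
        - kafka.partition.current_offset   # illustrative metric names; see the kafkametrics
        - kafka.consumer_group.lag         # documentation for the metrics your version emits
      match_type: strict
    max_staleness: 24h                     # drop remembered state for series not seen in this period
```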