Forward OpenTelemetry data with Kafka exporter

  • How-to guide
  • 3-min read
  • Published Nov 05, 2025

The following configuration example shows how to configure a Collector instance to export OTLP data to Kafka.

Prerequisites

Before you begin, make sure you have:

  • A running Kafka broker (or cluster) that's reachable from the Collector host.
  • A Collector distribution that includes the kafka exporter, such as OpenTelemetry Collector Contrib.

Demo configuration

Here is an example YAML file for a basic Collector configuration that can be used to export OpenTelemetry traces, metrics, and logs to Kafka.

processors:
  memory_limiter:
    check_interval: 1s
    limit_percentage: 100
  batch:
    send_batch_size: 500
    timeout: 30s
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  kafka:
    brokers: ["${env:BROKER_ADDRESS}"]
    tls:
      insecure: true # Only necessary if your Kafka server does not provide a certificate that's trusted by the Collector.
    traces:
    metrics:
    logs:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [kafka]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [kafka]
    logs:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [kafka]

For this configuration to work, you need to set the BROKER_ADDRESS environment variable. The value is specific to your Kafka server.
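
For example, on a Linux or macOS host you might export the variable and start the Collector like this (a minimal sketch; the broker address, the otelcol-contrib binary name, and the config.yaml path are illustrative assumptions, so substitute your own values):

export BROKER_ADDRESS=kafka.example.com:9092   # replace with your Kafka bootstrap server
otelcol-contrib --config ./config.yaml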

Configuration validation

Validate your configuration before starting the Collector to catch syntax or component errors early.
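
For example, recent Collector releases include a validate subcommand that checks the configuration file without starting any pipelines (a sketch assuming the otelcol-contrib binary; the variable is set first so the ${env:BROKER_ADDRESS} reference can resolve):

export BROKER_ADDRESS=kafka.example.com:9092
otelcol-contrib validate --config ./config.yaml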

Components

The configuration uses the components described in the sections below.

Receivers

Under receivers, we specify otlp as the active receiver component for our deployment. This is required so the Collector can accept OTLP data, in this case over gRPC on port 4317.
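
If you also need to accept OTLP over HTTP, the receiver supports a second protocol block; a sketch (the http block and its default port 4318 are an optional addition, not part of the demo configuration above):

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318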

Processors

Under processors, we specify:

  • memory_limiter — checks memory usage every second (check_interval: 1s) and caps it at the configured percentage of total memory, so the Collector doesn't run out of memory under load.
  • batch — groups data into batches of up to 500 items (send_batch_size) or flushes after 30 seconds (timeout), reducing the number of outgoing requests to Kafka.
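
As a variation, the memory_limiter processor can be given headroom below 100% plus a spike allowance; a sketch with assumed example values rather than the demo settings:

processors:
  memory_limiter:
    check_interval: 1s
    limit_percentage: 80        # soft limit at 80% of total memory
    spike_limit_percentage: 20  # extra headroom reserved for short bursts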

Exporters

Under exporters, we specify the kafka exporter to forward data to the Kafka server. Its brokers list is resolved from the BROKER_ADDRESS environment variable when the configuration is loaded.
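
The empty traces, metrics, and logs sections under kafka are where per-signal settings such as the destination topic go. A sketch that spells out the exporter's default topic names explicitly (otlp_spans, otlp_metrics, otlp_logs); exact option names can vary between Collector versions, so check the exporter's README for yours:

exporters:
  kafka:
    brokers: ["${env:BROKER_ADDRESS}"]
    traces:
      topic: otlp_spans
    metrics:
      topic: otlp_metrics
    logs:
      topic: otlp_logs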

Service pipeline

Under service, we assemble our receiver, processors, and exporter objects into service pipelines, which will perform these steps:

  1. Accept OTLP requests on the configured ports.
  2. Use the memory_limiter processor to make sure that the Collector doesn't run out of memory.
  3. Batch data using the batch processor.
  4. Export data to the Kafka server (see the verification sketch after this list).
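
To verify end to end, you can tail a topic with Kafka's console consumer; a sketch assuming the default otlp_spans trace topic and the standard Kafka CLI tools (the default otlp_proto encoding is binary, so the goal is to see messages arriving, not to read them):

kafka-console-consumer.sh \
  --bootstrap-server "$BROKER_ADDRESS" \
  --topic otlp_spans \
  --from-beginning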