Batch OTLP requests

The following example shows how to configure a Collector instance and its native batch processor to queue and batch OTLP requests and improve throughput.

Recommended configuration

For optimal performance of your Collector instance, we recommend that you apply this configuration in all setups.

If you use other processors, make sure the batch processor is configured last in your pipeline.
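
For example, the following sketch (reusing the otlp receiver and otlphttp exporter from the demo configuration below, with memory_limiter included purely as an illustration of an additional processor) keeps batch as the last entry in the traces pipeline:

processors:
  memory_limiter:        # illustrative example of an additional processor
    check_interval: 1s
    limit_mib: 400
  batch: {}
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]   # batch listed last
      exporters: [otlphttp]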

Prerequisites

A Dynatrace endpoint for OTLP ingest and an API token with the relevant ingest scopes, referenced in the configuration below through the DT_ENDPOINT and DT_API_TOKEN environment variables.

Demo configuration

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
processors:
  batch:
    send_batch_max_size: 1000
    timeout: 30s
    send_batch_size: 800
exporters:
  otlphttp:
    endpoint: ${env:DT_ENDPOINT}
    headers:
      Authorization: "Api-Token ${env:DT_API_TOKEN}"
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
Configuration validation

Validate your settings to avoid any configuration issues.

Components

In this configuration, we use the following components.

Receivers

Under receivers, we specify the standard otlp receiver as the active receiver component for our Collector instance.

This is for demonstration purposes. You can specify any other valid receiver here (for example, zipkin).
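
As a sketch, swapping in the zipkin receiver for the traces pipeline could look like the following (the endpoint shown is the conventional Zipkin port and may need adjusting for your environment):

receivers:
  zipkin:
    endpoint: 0.0.0.0:9411   # conventional Zipkin port; adjust as needed
service:
  pipelines:
    traces:
      receivers: [zipkin]
      processors: [batch]
      exporters: [otlphttp]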

Processors

Under processors, we specify the batch processor with the following parameters:

  • send_batch_max_size configured for a maximum of 1,000 entries per batch
  • timeout configured to always send data after 30 seconds, regardless of any other batch limits
  • send_batch_size configured to always send data after 800 entries, regardless of any other batch limits

With this configuration, the Collector queues telemetry entries in batches and sends a batch either once 30 seconds have passed or once at least 800 entries are queued, while send_batch_max_size caps any single batch at 1,000 entries.

Configuration values

We recommend these configuration values for most setups, but the optimal batch size may depend on your particular use case.

Be sure to configure the batch size to stay within the limits of the individual signal types (traces, metrics, logs) or Dynatrace may reject requests.
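
If the signal types require different batch sizes, one option is to define separately named instances of the batch processor and reference them per pipeline. The following sketch shows this for traces and logs only; the values are purely illustrative, so check the actual Dynatrace API limits before adopting them:

processors:
  batch/traces:
    send_batch_max_size: 1000   # illustrative value
    timeout: 30s
    send_batch_size: 800
  batch/logs:
    send_batch_max_size: 500    # illustrative value
    timeout: 30s
    send_batch_size: 400
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch/traces]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch/logs]
      exporters: [otlphttp]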

Exporters

Under exporters, we specify the default otlphttp exporter and configure it with our Dynatrace API URL and the required authentication token.

For this purpose, we set the two environment variables DT_ENDPOINT and DT_API_TOKEN and reference them in the configuration values for endpoint and Authorization.

Service pipelines

Under service, we assemble our receiver and exporter components into pipelines for traces, metrics, and logs, and enable the batch processor by referencing it under processors in each pipeline.

Limits and limitations

Data is ingested using the OpenTelemetry protocol (OTLP) via the Dynatrace OTLP APIs and is subject to the API's limits and restrictions. For more information, see: