Batch OTLP requests

The following configuration example shows how to configure a Collector instance and its native batch processor to queue and batch OTLP requests and improve throughput.

Recommended configuration

For optimal performance of your Collector instance, we recommend applying this configuration in all setups.

If you use other processors, make sure the batch processor is configured last in your pipeline.
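As a sketch of what that ordering looks like, the pipeline below places the batch processor last; the memory_limiter and resourcedetection processors are illustrative stand-ins for whatever processors you actually use:

```yaml
# Illustrative ordering only: memory_limiter and resourcedetection stand in
# for your own processors. The batch processor comes last, so it batches
# the fully processed data just before export.
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, resourcedetection, batch/traces]
      exporters: [otlphttp]
```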

Prerequisites

Demo configuration

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch/traces:
    send_batch_size: 5000
    send_batch_max_size: 5000
    timeout: 60s
  batch/metrics:
    send_batch_size: 3000
    send_batch_max_size: 3000
    timeout: 60s
  batch/logs:
    send_batch_size: 1800
    send_batch_max_size: 2000
    timeout: 60s

exporters:
  otlphttp:
    endpoint: ${env:DT_ENDPOINT}
    headers:
      Authorization: "Api-Token ${env:DT_API_TOKEN}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch/traces]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch/metrics]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch/logs]
      exporters: [otlphttp]

Configuration validation

Validate your settings to avoid any configuration issues. For example, you can check the file offline with the Collector's validate subcommand (otelcol validate --config=<file>).

Components

In this configuration, we use the following components.

Receivers

Under receivers, we specify the standard otlp receiver as the active receiver component for our Collector instance.

This is for demonstration purposes. You can specify any other valid receiver here (for example, zipkin).

Processors

Under processors, we specify a different batch processor for each telemetry signal, with the following parameters:

  • send_batch_size: sets the number of entries after which the queued batch is sent, regardless of the timeout.
  • send_batch_max_size: sets the maximum number of entries a batch may contain. Larger batches are split into smaller ones.
  • timeout: defines the duration after which the current batch is sent, even if the send_batch_size condition has not yet been reached.
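Taken together, the three parameters define a flush policy you can tune per pipeline. As an illustrative sketch (the values below are assumptions for a latency-sensitive pipeline, not recommendations), you could trade batch size for freshness:

```yaml
# Illustrative low-latency variant: smaller batches, faster flushes.
# The values are assumptions; tune them against your own data volume.
processors:
  batch/low-latency:
    send_batch_size: 1000      # flush as soon as 1000 entries are queued
    send_batch_max_size: 1000  # never export more than 1000 entries per request
    timeout: 5s                # otherwise flush whatever is queued every 5 s
```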

With this configuration, the Collector queues telemetry entries in batches, ensuring a good balance between the size and number of export requests to the Dynatrace API.

Batch size values

The eventual size of a batch depends not only on the number of individual telemetry entries, but also on the number and size of their associated attributes.

For example, two batches with the same number of entries can differ considerably in size, depending on how many attributes the spans, metrics, or logs carry and how large those attributes are.

Use the configuration values above as a starting point, but be sure to adapt them to fit your data volume and comply with the Dynatrace API limits for each signal type (traces, metrics, logs) to avoid request rejections.

You can use the ActiveGate self-monitoring metrics to troubleshoot rejected requests. For example, query dsfm:active_gate.rest.request_count, filter on the operation dimension (POST /otlp/v1/<...> for OTLP ingest), and split by response_code. Oversized requests are rejected with HTTP status code 413.

Alternatively, check the Collector logs for error messages such as: HTTP Status Code 413, Message=Max Payload size of.

Exporters

Under exporters, we specify the default otlphttp exporter and configure it with our Dynatrace API URL and the required authentication token.

For this purpose, we set the two environment variables DT_ENDPOINT and DT_API_TOKEN and reference them in the configuration values for endpoint and Authorization.
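As a sketch, on Linux or macOS the two variables could be exported before starting the Collector. Both values below are placeholders (the endpoint path assumes the Dynatrace OTLP API base URL); substitute your own environment URL and access token:

```shell
# Placeholder values: substitute your Dynatrace environment ID and API token.
export DT_ENDPOINT="https://<your-environment-id>.live.dynatrace.com/api/v2/otlp"
export DT_API_TOKEN="<your-api-token>"
```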

Service pipelines

Under service, we assemble our receiver and exporter objects into pipelines for traces, metrics, and logs and enable our batch processor by referencing it under processors for each respective pipeline.

Limits and limitations

Data is ingested using the OpenTelemetry protocol (OTLP) via the Dynatrace OTLP APIs and is subject to the API's limits and restrictions. For more information, see: