The following configuration example shows how to configure a Collector instance and its native batch processor to queue and batch OTLP requests and improve throughput.
For optimal performance of your Collector instance, we recommend applying this configuration in all setups.
If you use other processors, make sure the batch processor comes last in your pipeline.
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch/traces:
    send_batch_size: 5000
    send_batch_max_size: 5000
    timeout: 60s
  batch/metrics:
    send_batch_size: 3000
    send_batch_max_size: 3000
    timeout: 60s
  batch/logs:
    send_batch_size: 1800
    send_batch_max_size: 2000
    timeout: 60s

exporters:
  otlphttp:
    endpoint: ${env:DT_ENDPOINT}
    headers:
      Authorization: "Api-Token ${env:DT_API_TOKEN}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch/traces]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch/metrics]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch/logs]
      exporters: [otlphttp]
```
Validate your settings to avoid any configuration issues.
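One structural check worth automating is that every component referenced in a pipeline is actually defined in its top-level section. The following Python sketch illustrates that check; the config is mirrored here as a plain dict purely for illustration (in practice you would load the YAML file with a YAML parser), and the function name `check_pipelines` is our own, not part of any Collector tooling.

```python
# Sketch: verify pipeline references against defined components.
# The dict mirrors the YAML configuration above; values are elided
# because only the component names matter for this check.
config = {
    "receivers": {"otlp": {}},
    "processors": {"batch/traces": {}, "batch/metrics": {}, "batch/logs": {}},
    "exporters": {"otlphttp": {}},
    "service": {
        "pipelines": {
            "traces": {"receivers": ["otlp"], "processors": ["batch/traces"], "exporters": ["otlphttp"]},
            "metrics": {"receivers": ["otlp"], "processors": ["batch/metrics"], "exporters": ["otlphttp"]},
            "logs": {"receivers": ["otlp"], "processors": ["batch/logs"], "exporters": ["otlphttp"]},
        }
    },
}

def check_pipelines(cfg):
    """Return a list of dangling component references, empty if the config is consistent."""
    errors = []
    for name, pipe in cfg["service"]["pipelines"].items():
        for section in ("receivers", "processors", "exporters"):
            for ref in pipe.get(section, []):
                if ref not in cfg.get(section, {}):
                    errors.append(f"pipeline {name}: undefined {section[:-1]} {ref!r}")
    return errors

print(check_pipelines(config))  # → []
```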
In our configuration, we set up the following components.
Under `receivers`, we specify the standard `otlp` receiver as the active receiver component for our Collector instance. This is for demonstration purposes; you can specify any other valid receiver here (for example, `zipkin`).
Under `processors`, we specify a separate `batch` processor for each telemetry signal, with the following parameters:

- `send_batch_size`: sets the minimum number of entries the processor queues before sending the whole batch.
- `send_batch_max_size`: sets the maximum number of entries a batch may contain. Entries beyond this limit are split into additional batches.
- `timeout`: defines the duration after which a batch is sent even if the `send_batch_size` condition has not been reached.

With this configuration, the Collector queues telemetry entries in batches, ensuring a good balance between the size and number of export requests to the Dynatrace API.
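The interplay of these three parameters can be sketched in a few lines of Python. This is a simplified model of the flush rules described above, not the Collector's actual implementation (which is written in Go); the function names are our own.

```python
def split_into_batches(entries, send_batch_max_size):
    """A single batch never exceeds send_batch_max_size; larger queues are split."""
    return [entries[i:i + send_batch_max_size]
            for i in range(0, len(entries), send_batch_max_size)]

def should_flush(queued_count, elapsed_seconds, send_batch_size, timeout_seconds):
    """Flush when the minimum batch size is reached, or when the timeout expires."""
    return queued_count >= send_batch_size or elapsed_seconds >= timeout_seconds

# With the logs settings above (send_batch_size: 1800, timeout: 60s):
print(should_flush(1800, 5, 1800, 60))   # size condition met → True
print(should_flush(300, 61, 1800, 60))   # timeout expired → True
print(should_flush(300, 5, 1800, 60))    # neither condition met → False

# 4500 queued entries with send_batch_max_size 2000 are split into three batches:
print([len(b) for b in split_into_batches(list(range(4500)), 2000)])  # [2000, 2000, 500]
```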
Not only the number of individual telemetry entries contributes to the eventual size of a batch, but also the number and size of their associated attributes. For example, attributes on spans, metrics, or logs can make a batch with the same number of entries considerably larger, depending on how many attributes there are and how large they are.
Use the configuration values above as a starting point, but adapt them to fit your data volume and to comply with the Dynatrace API limits for each signal type (traces, metrics, logs) to avoid request rejections.
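The effect of attributes on payload size is easy to demonstrate. The sketch below uses JSON serialization as a rough stand-in for the wire format (OTLP actually uses protobuf, so absolute numbers differ, but the proportions illustrate the point); the attribute names and values are invented for illustration.

```python
import json

def payload_bytes(entries):
    """Rough payload size of a batch, using JSON as a stand-in for protobuf."""
    return len(json.dumps(entries).encode("utf-8"))

# Two batches with the same entry count: one bare, one with typical attributes.
lean = [{"name": f"span-{i}"} for i in range(100)]
heavy = [{"name": f"span-{i}",
          "attributes": {"http.url": "https://example.com/some/long/path",
                         "db.statement": "SELECT * FROM orders WHERE id = ?"}}
         for i in range(100)]

print(payload_bytes(lean) < payload_bytes(heavy))  # True: same entries, larger batch
```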
You can use the ActiveGate self-monitoring metrics to troubleshoot rejected requests. For example, query `dsfm:active_gate.rest.request_count`, filter for the `operation` dimension (`POST /otlp/v1/<...>` for OTLP ingest), and split by `response_code`. Oversized requests are rejected with HTTP status code `413`.

Alternatively, check the Collector logs for error messages such as `HTTP Status Code 413, Message=Max Payload size of`.
Under `exporters`, we specify the default `otlphttp` exporter and configure it with our Dynatrace API URL and the required authentication token. For this purpose, we set the following two environment variables and reference them in the configuration values for `endpoint` and `Authorization`:
- `DT_ENDPOINT` contains the base URL of the Dynatrace API endpoint (for example, `https://{your-environment-id}.live.dynatrace.com/api/v2/otlp`)
- `DT_API_TOKEN` contains the API token

Under `service`, we assemble our receiver and exporter objects into pipelines for traces, metrics, and logs, and enable the batch processors by referencing them under `processors` in each respective pipeline.
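The `${env:...}` references in the exporter section are resolved by the Collector at startup. A minimal sketch of that substitution is shown below; note that the real Collector supports a richer resolver syntax than this regex covers, and the environment values (`abc12345`, `dt0c01.sample`) are placeholders, not real credentials.

```python
import os
import re

def expand_env(value, env=os.environ):
    """Replace ${env:VAR} references in a config value with environment values."""
    return re.sub(r"\$\{env:([A-Za-z_][A-Za-z0-9_]*)\}",
                  lambda m: env.get(m.group(1), ""), value)

# Placeholder values standing in for the two environment variables:
env = {"DT_ENDPOINT": "https://abc12345.live.dynatrace.com/api/v2/otlp",
       "DT_API_TOKEN": "dt0c01.sample"}

print(expand_env("${env:DT_ENDPOINT}", env))            # the resolved endpoint URL
print(expand_env("Api-Token ${env:DT_API_TOKEN}", env))  # the resolved header value
```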
Data is ingested using the OpenTelemetry protocol (OTLP) via the Dynatrace OTLP APIs and is subject to the API's limits and restrictions. For more information, see: