The following configuration example shows how to configure a Collector instance and its native memory limiter processor to keep memory allocation within the specified parameters.
For optimal memory usage with your Collector instance, we recommend that you apply this configuration with most containerized setups. See the section on deployment considerations for more information.
See Collector Deployment and Collector Configuration for details on how to set up your Collector with the configuration below.
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  memory_limiter:
    check_interval: 1s
    limit_percentage: 90
    spike_limit_percentage: 20

exporters:
  otlphttp:
    endpoint: ${env:DT_ENDPOINT}
    headers:
      Authorization: "Api-Token ${env:DT_API_TOKEN}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [memory_limiter]
      exporters: [otlphttp]
```
Validate your settings to avoid any configuration issues.
Our configuration defines the following components.
Under receivers, we specify the standard otlp receiver as the active receiver component for our Collector instance.
This is mainly for demonstration purposes. You can specify any other valid receiver here (for example, zipkin).
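As a sketch of that option, a Zipkin receiver could be declared as follows (the endpoint shown is the receiver's default listen address):

```yaml
receivers:
  zipkin:
    endpoint: 0.0.0.0:9411
```

Remember to also reference any additional receiver in the relevant pipeline, for example `receivers: [otlp, zipkin]`.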
Under processors, we specify the memory_limiter processor with the following parameters:
- check_interval configured to check the memory status every second
- limit_percentage configured to allow a maximum memory allocation of 90 percent
- spike_limit_percentage configured to allow a maximum spike memory usage of 20 percent

With this configuration, the Collector checks the memory allocation every second and starts to apply pressure using separate mechanisms after the following limits are reached:

- Soft limit (limit_percentage - spike_limit_percentage): After this limit is reached, the processor rejects payloads until memory usage is under the limit. It is up to the receiver upstream of the processor to send the proper rejection messages.
- Hard limit (limit_percentage): After this limit is reached, the processor forces garbage collection until memory usage is under the limit. Data will continue to be rejected until usage is under the soft limit.

In addition to the memory limiter processor, we highly recommend that you set the GOMEMLIMIT environment variable to a value of 80% of the hard limit. Note that GOMEMLIMIT requires an absolute value in bytes to be set. For example, you
could set GOMEMLIMIT=1024MiB to start increasing the frequency of garbage
collection cycles once the Collector reaches 1024 MiB of memory used on the Go
VM heap. For more information, see the Go package
documentation describing
how the environment variable works.
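As a rough worked example, the limits implied by the configuration above can be computed as follows. The 1 GiB container allowance is an illustrative assumption, not a recommendation:

```python
# Assumed scenario: a container granted 1 GiB of memory, with the
# configuration above (limit_percentage: 90, spike_limit_percentage: 20).
container_limit_mib = 1024

# Hard limit: forced garbage collection above this threshold.
hard_limit_mib = container_limit_mib * 90 // 100

# Soft limit: payloads are rejected above this threshold.
soft_limit_mib = container_limit_mib * (90 - 20) // 100

# Suggested GOMEMLIMIT: 80% of the hard limit.
gomemlimit_mib = hard_limit_mib * 80 // 100

print(f"hard limit: {hard_limit_mib} MiB")   # 921 MiB
print(f"soft limit: {soft_limit_mib} MiB")   # 716 MiB
print(f"GOMEMLIMIT={gomemlimit_mib}MiB")     # GOMEMLIMIT=736MiB
```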
In containerized environments, or other places where the host environment sets
the Collector's maximum allowed memory, we recommend you use the
limit_percentage and spike_limit_percentage options.
For deployments on virtual machines or bare metal where the Collector is not
given an explicit memory quota, we instead recommend you use the limit_mib and
spike_limit_mib options.
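For such deployments, the processor block might instead look like the following sketch. The MiB values here are illustrative assumptions; choose them based on the memory you intend to allow the Collector:

```yaml
processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 900        # hard limit, as an absolute value in MiB
    spike_limit_mib: 200  # soft limit = limit_mib - spike_limit_mib
```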
Under exporters, we specify the default otlphttp exporter and configure it with our Dynatrace API URL and the required authentication token.
For this purpose, we set the following two environment variables and reference them in the configuration values for endpoint and Authorization.
- DT_ENDPOINT contains the base URL of the Dynatrace API endpoint (for example, https://{your-environment-id}.live.dynatrace.com/api/v2/otlp)
- DT_API_TOKEN contains the API token

Under service, we assemble our receiver and exporter objects into pipelines for traces, metrics, and logs and enable our memory limiter processor by referencing it under processors for each respective pipeline.
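For example, the two variables could be exported in the shell before starting the Collector. The values below are placeholders, and the otelcol binary name and config path are assumptions about your setup:

```shell
# Hypothetical values; replace with your environment's endpoint and token.
export DT_ENDPOINT="https://abc12345.live.dynatrace.com/api/v2/otlp"
export DT_API_TOKEN="dt0c01.sample-token"

# Then start the Collector with the configuration shown above, e.g.:
# otelcol --config ./collector.yaml
```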
Data is ingested using the OpenTelemetry protocol (OTLP) via the Dynatrace OTLP APIs and is subject to the API's limits and restrictions. For more information, see: