Collector Configuration
To configure your Collector instance, you need to set up each component (receivers, optional processors, and exporters) individually in a YAML file and enable them via pipelines.
Configuration example
Here is an example YAML file for a very basic Collector configuration that can be used to export OpenTelemetry traces, metrics, and logs to Dynatrace.
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  cumulativetodelta:

exporters:
  otlphttp:
    endpoint: "${env:DT_ENDPOINT}"
    headers:
      Authorization: "Api-Token ${env:DT_API_TOKEN}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: []
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [cumulativetodelta]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: []
      exporters: [otlphttp]
```
In this YAML file, we configure the following components:
- An OTLP receiver (`otlp`) that can receive data via gRPC and HTTP
- A processor (`cumulativetodelta`) to convert any metrics with cumulative temporality to delta temporality (see Delta metrics for more details)
- An OTLP HTTP exporter (`otlphttp`) configured with the Dynatrace endpoint and API token

In the example configuration above, the Dynatrace token needs to have the Ingest OpenTelemetry traces (`openTelemetryTrace.ingest`), Ingest metrics (`metrics.ingest`), and Ingest logs (`logs.ingest`) permissions.
The section on API tokens provides more information on how to obtain and configure your API token.
Within the service section, you enable each configured component:
- Extensions are enabled in their own section, while receivers, processors, and exporters are grouped under a pipeline section.
- Pipelines can be of type `traces`, `metrics`, or `logs`.
- Each receiver/processor/exporter can be used in more than one pipeline. For processors referenced in multiple pipelines, each pipeline gets a separate instance of the processor. This contrasts with receivers/exporters, where a single instance is shared across all pipelines. Also note that the order of processors dictates the order in which data is processed.
- You can also define the same component type more than once, for example, two different receivers or even two or more distinct pipelines (see the sketch after this list).
- Even if a component is properly configured in its section, it will not be enabled unless it is also referenced in the service section.
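For illustration, here is a minimal sketch (the names after the slash and the second port are arbitrary examples, not part of the configuration above) that defines two receivers of the same type and reuses one processor in two pipelines:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
  # Second instance of the same receiver type, distinguished by the
  # name after the slash and listening on a different port.
  otlp/internal:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14317

processors:
  # Referenced in both pipelines below; each pipeline gets its own instance.
  batch:

exporters:
  otlphttp:
    endpoint: "${env:DT_ENDPOINT}"
    headers:
      Authorization: "Api-Token ${env:DT_API_TOKEN}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    traces/internal:
      receivers: [otlp/internal]
      processors: [batch]
      exporters: [otlphttp]
```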
Validate the configuration
It is important to ensure the Collector configuration you use is syntactically and semantically correct. For example, YAML uses spaces (not tabs) for indentation to define the document hierarchy, so each section and component must be indented at the correct level. The Collector provides the built-in `validate` command to verify that the configuration and its components and services are properly configured.
```shell
dynatrace-otel-collector validate --config=[PATH_TO_YOUR_CONFIGURATION_FILE]
```
If you run a container instance of the Collector, you can also use the following Docker command to run the validation directly from your container.
```shell
docker run -v $(pwd):$(pwd) -w $(pwd) ghcr.io/dynatrace/dynatrace-otel-collector/dynatrace-otel-collector:latest validate --config=[YOUR_CONFIGURATION_FILE]
```
Delta metrics
Dynatrace requires metrics data to be sent with delta temporality and not cumulative temporality.
If your application doesn't allow you to configure delta temporality, you can use the `cumulativetodelta` processor to have your Collector instance adjust cumulative values to delta values. The configuration example above shows how to configure and reference the processor in your Collector configuration.
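By default, the processor converts all cumulative metrics. If you only want to convert specific metrics, the processor also supports include/exclude filters. A minimal sketch (the metric name is an arbitrary example):

```yaml
processors:
  cumulativetodelta:
    include:
      # Only convert the metrics listed here; all others pass through unchanged.
      metrics:
        - system.network.io
      match_type: strict
```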
Chained and load-balanced Collectors
When you use more than one Collector instance, it's important to maintain stable value propagation across all instances.
This is particularly important when you send OTLP requests across different Collector instances (for example, load balancing), as each Collector instance keeps track of its own delta offset, which may break the data reported to the Dynatrace backend.
In such scenarios, we recommend routing your OTLP requests through a single, outbound Collector instance that forwards the data to the Dynatrace backend and takes care of the delta conversion. The other Collector instances should use cumulative aggregation to ensure stable and consistent value propagation.
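As a sketch of such a setup (the hostname and port of the outbound instance are placeholder assumptions), an inner Collector instance forwards the unmodified cumulative data to the outbound instance, which then applies the `cumulativetodelta` processor and the Dynatrace exporter as shown in the configuration example at the top of this page:

```yaml
# Inner Collector: no delta conversion; forwards cumulative data as-is.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlphttp:
    # Placeholder address of the single outbound Collector instance.
    endpoint: http://outbound-collector:4318

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: []
      exporters: [otlphttp]
```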
API tokens
OTLP requests to ActiveGate require authentication information provided by Dynatrace API tokens.
The previous configuration sample shows how to configure the `Authorization` header for the exporter.
```yaml
exporters:
  otlphttp:
    headers:
      Authorization: "Api-Token ${env:DT_API_TOKEN}"
```
While you could hardcode the API token, we recommend using an external data source, such as environment variables, for better security.
In the example here, we set the API token using the environment variable `DT_API_TOKEN` from a Kubernetes secret and reference the variable with the `${env:}` notation.
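On Kubernetes, the variable can be populated from a secret in the Collector's pod spec. A minimal sketch, assuming a secret named dynatrace-otelcol with a key dt-api-token (both names are illustrative):

```yaml
# Excerpt from the Collector's Deployment manifest (names are illustrative).
containers:
  - name: otel-collector
    image: ghcr.io/dynatrace/dynatrace-otel-collector/dynatrace-otel-collector:latest
    env:
      - name: DT_API_TOKEN
        valueFrom:
          secretKeyRef:
            name: dynatrace-otelcol # hypothetical secret name
            key: dt-api-token       # key within that secret
```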