Transform and filter
The following configuration example shows how to configure a Collector instance to transform and manipulate OTLP requests before forwarding them to Dynatrace.
Using the processors shown in this example (`filter` and `transform`), you can streamline requests before sending them to Dynatrace, omit data that is irrelevant to your use case, and reduce billing costs.
Prerequisites
- One of the following Collector distributions with the `transform` and `filter` processors:
  - The Dynatrace Collector
  - OpenTelemetry Contrib
  - A custom Builder version
- The API URL of your Dynatrace environment
- An API token with the relevant access scope
Demo configuration
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
processors:
  transform:
    trace_statements:
      - context: resource
        statements:
          # Only keep a certain set of resource attributes
          - keep_matching_keys(attributes, "^(aaa|bbb|ccc).*")
      - context: span
        statements:
          # Only keep a certain set of span attributes
          - keep_matching_keys(attributes, "(^xyz.pqr$)|(^(aaa|bbb|ccc).*)")
          # Set a static key
          - set(attributes["svc.marker"], "purchasing")
          # Delete a specific key
          - delete_key(attributes, "message")
          # Rewrite a key
          - set(attributes["purchase.id"], ConvertCase(attributes["purchase.id"], "upper"))
          # Apply regex replacement
          - replace_pattern(name, "^.*(DataSubmission-\d+).*$", "$$1")
    metric_statements:
      - context: metric
        statements:
          # Rename all metrics containing '_bad' suffix in their name with `_invalid`
          - replace_pattern(name, "(.*)_bad$", "$${1}_invalid")
  filter:
    error_mode: ignore
    traces:
      span:
        # Filter spans with resource attributes matching the provided regular expression
        - IsMatch(resource.attributes["k8s.pod.name"], "^my-pod-name.*")
    metrics:
      metric:
        # Filter metrics which contain at least one data point with a "bad.metric" attribute
        - 'HasAttrKeyOnDatapoint("bad.metric")'
    logs:
      log_record:
        # Filter logs with resource attributes matching the configured names
        - resource.attributes["service.name"] == "service1"
        - resource.attributes["service.name"] == "service2"
exporters:
  otlphttp:
    endpoint: ${env:DT_ENDPOINT}
    headers:
      Authorization: "Api-Token ${env:DT_API_TOKEN}"
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [filter, transform]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [filter]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [filter]
      exporters: [otlphttp]
```
Validate your settings to avoid any configuration issues.
Components
Our configuration uses the following components.
Receiver
Under `receivers`, we specify the standard `otlp` receiver as the active receiver component for our Collector instance.
This is for demonstration purposes. You can specify any other valid receiver here (for example, `zipkin`).
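As an illustration, a Zipkin receiver could look like the following minimal sketch; the endpoint value is an assumption and not part of the demo configuration above.
```yaml
receivers:
  zipkin:
    # Accept Zipkin-formatted spans on the default Zipkin port
    endpoint: 0.0.0.0:9411
```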
Processor
Transform
Under `processors`, we specify the `transform` processor with a set of different attribute modification statements. `context` indicates the scope to which the statements apply (here, `resource` for resource attributes, `span` for span attributes, and `metric` for metrics).
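For example, the following minimal sketch shows how the context scope changes what a statement affects; the attribute names and values here are illustrative assumptions, not part of the demo configuration.
```yaml
processors:
  transform:
    trace_statements:
      - context: resource
        statements:
          # Runs once per resource and affects all spans sharing that resource
          - set(attributes["deployment.environment"], "staging")
      - context: span
        statements:
          # Runs once per individual span
          - set(attributes["example.marker"], "demo")
```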
See the OpenTelemetry documentation of the transform processor for more details on the individual configuration options.
The sample configuration above uses the following statements:
- `keep_matching_keys`: Evaluates the attribute key names and only keeps those whose names match the given regular expressions: `^(aaa|bbb|ccc).*` for resource attributes and `(^xyz.pqr$)|(^(aaa|bbb|ccc).*)` for span attributes.
- `set`: Adds or changes the following two span attributes:
  - `svc.marker`, with the static value `purchasing`
  - `purchase.id`, converting its value to uppercase, using `ConvertCase`
- `delete_key`: Deletes attributes named `message`.
- `replace_pattern`: Matches a string against a given regular expression and performs a string substitution on all matching entries. In our example, we first use it for traces to match the span name against the regular expression `^.*(DataSubmission-\d+).*$` and replace its content with the first capture group (`$$1`) of our expression. This means we search span names for `DataSubmission` followed by a number and, if found, keep only the value of the match. We also use the function for metrics with the regular expression `(.*)_bad$`, to change the `_bad` suffix to `_invalid`.
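To illustrate the trace statement, consider the following hypothetical before/after; the span name itself is an assumption, not part of the demo setup.
```yaml
# Hypothetical effect of the replace_pattern trace statement above:
#   before: name = "batch export DataSubmission-42 completed"
#   regex:  ^.*(DataSubmission-\d+).*$   (capture group $$1 = "DataSubmission-42")
#   after:  name = "DataSubmission-42"
```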
Filter
In addition, we configure an instance of the `filter` processor to filter signals based on the following criteria:
- Traces: Uses `IsMatch` to match the value of the resource attribute `k8s.pod.name` against the regular expression `^my-pod-name.*`, dropping spans from pods whose names start with `my-pod-name`.
- Metrics: Uses `HasAttrKeyOnDatapoint` to evaluate whether data points have an attribute named `bad.metric`, dropping metrics that do.
- Logs: Uses a strict string match of the resource attribute `service.name` against the strings `service1` and `service2`, dropping log records of either service.
See the OpenTelemetry documentation of the filter processor for more details on the individual configuration options.
Exporter
Under `exporters`, we specify the default `otlphttp` exporter and configure it with our Dynatrace API URL and the required authentication token.
For this purpose, we set the following two environment variables and reference them in the configuration values for `endpoint` and `Authorization`.
- `DT_ENDPOINT` contains the base URL of the Dynatrace API endpoint (for example, `https://{your-environment-id}.live.dynatrace.com/api/v2/otlp`)
- `DT_API_TOKEN` contains the API token
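How you set these variables depends on your deployment. The following is a hypothetical docker-compose sketch for passing both variables to a containerized Collector; the image, file paths, and placeholder values are assumptions and need to be adapted to your setup.
```yaml
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    # Point the Collector to the mounted configuration file
    command: ["--config=/etc/otelcol/config.yaml"]
    volumes:
      - ./collector-config.yaml:/etc/otelcol/config.yaml
    environment:
      DT_ENDPOINT: "https://{your-environment-id}.live.dynatrace.com/api/v2/otlp"
      DT_API_TOKEN: "${DT_API_TOKEN}"
```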
Service pipeline
Under `service`, we assemble our receiver, processor, and exporter objects into traces, metrics, and logs pipelines, which accept OTLP data on the configured endpoints, apply the configured filter and transform rules, and forward everything to Dynatrace using the exporter. Processors run in the order in which they are listed, so the traces pipeline first filters spans and then transforms them; the metrics and logs pipelines only apply the filter processor.
Limits and limitations
Data is ingested using the OpenTelemetry protocol (OTLP) via the Dynatrace OTLP APIs and is subject to the API's limits and restrictions. For more information, see: