Processing stage in OpenPipeline

This article describes the Processing stage in OpenPipeline and the available processors. In the Processing stage, you can prepare data for analysis, extraction, forwarding, and persistence by parsing values into fields, transforming the schema, dropping data records, editing fields, and masking sensitive data.

Get familiar with the OpenPipeline concepts of stages and processors before you begin. To learn more, see Processing in OpenPipeline.

Processors

The processors in the stage are:

  • DQL
  • Add fields
  • Remove fields
  • Rename fields
  • Drop records
  • Technology bundle (Logs)

Each processor applies to all records that match its matching condition.

DQL

The DQL processor processes a subset of DQL and formats the result as a string, number, boolean, duration, or timestamp, or an array of those types.

This processor uses DQL processing statements, providing high flexibility. By combining different DPL (Dynatrace Pattern Language) and DQL (Dynatrace Query Language) commands, you can use this processor for a variety of use cases, including:

  • Calculations
  • Data masking
  • Field editing (fieldsAdd, fieldsRename, fieldsRemove)

Use the dedicated processors where applicable (Add fields, Remove fields, and Rename fields): their role is instantly clear in the processor list, and they're easier to use when you're less familiar with DQL. Reserve the DQL processor for complex transformations, as sketched below.
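
For orientation, each dedicated field-editing processor has a direct DQL counterpart. A minimal sketch (the field names and values are illustrative, not part of any predefined configuration):

fieldsAdd environment = "production"
| fieldsRename trace.id = trace_id
| fieldsRemove temp.debug

The three statements correspond to the Add fields, Rename fields, and Remove fields processors, respectively.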

Parameters

The following table describes the parameters available in the DQL processor.

Parameter | Description | Required
Name | Name of the processor. | Required
Matching condition | DQL statement that identifies the records the processor applies to. | Required
DQL processor definition | DQL statement to apply to the records. The maximum length is 8,192 UTF-8 encoded bytes. | Required
Sample data | Sample data to test your configuration. | Recommended

Example: Perform calculations

The following DQL definition extracts the values of two temporary fields, total and failed, from the record's content to calculate the failure percentage and store it in a new failed.percentage field. Finally, it removes the temporary fields used for the calculation from the record.

parse content, "LD 'total: ' INT:total '; failed: ' INT:failed"
| fieldsAdd failed.percentage = 100.0 * failed / total
| fieldsRemove total, failed

Unprocessed

{
  "content": "Lorem ipsum total: 1000; failed: 255"
}

Processed

{
  "content": "Lorem ipsum total: 1000; failed: 255",
  "failed.percentage": 25.5
}
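
In this record, 100.0 * 255 / 1000 evaluates to 25.5. Writing the factor as the floating-point literal 100.0 rather than the integer 100 keeps the whole calculation in floating point, so the division isn't truncated to an integer result.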

Example: Mask data

The following example parses out the username of an email address and uses the replaceString function to replace it with a static value.

parse content, "LD 'email: ' LD:user '@'"
| fieldsAdd content = replaceString(content, user, "xxx")
| fieldsRemove user

Unprocessed

{
  "content": "Lorem ipsum client_ip: 192.168.1.12 email: alex.example@example.com card number: 4012888888881881 server_ip: 215.131.189.194 dolor sit amet"
}

Processed

{
  "content": "Lorem ipsum client_ip: 192.168.1.12 email: xxx@example.com card number: 4012888888881881 server_ip: 215.131.189.194 dolor sit amet"
}
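
Note that replaceString replaces every occurrence of the extracted username in content, so a record that mentions the same value several times is masked throughout.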

Example: Add fields with dynamic values

The following DQL definition adds two new top-level fields, content.length and content.words, via the fieldsAdd command. The fields store the length and number of words of the content JSON field. The values adapt to the content of the record.

The DQL definition instructs OpenPipeline to count the string length and the array size and to add the corresponding value to the dedicated fields.

fieldsAdd content.length = stringLength(content), content.words = arraySize(splitByPattern(content, "' '"))

Unprocessed

{
  "content": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis."
}

Processed

{
  "content": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis.",
  "content.length": 62,
  "content.words": 9
}
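
splitByPattern splits the string using a DPL pattern; here the quoted literal ' ' matches a single space, so the size of the resulting array equals the number of space-separated words.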

Add fields

The Add fields processor adds fields based on the specified field name and static value. This processor doesn't leverage DQL processing statements and doesn't support dynamic values.

To add fields with dynamic values, use the DQL processor instead.
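
For example, a DQL processor definition along the following lines adds a field whose value is computed from each record (the content.upper field name is illustrative; this is a sketch, not a drop-in configuration):

fieldsAdd content.upper = upper(content)

Because the value is evaluated per record at processing time, this is something the static Add fields processor can't express.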

Parameters

The following table describes the parameters available in the Add fields processor.

Parameter | Description | Required
Name | Name of the processor. | Required
Matching condition | DQL statement that identifies the records the processor applies to. | Required
Add fields | Name of the field to add and its static value. | Required
Sample data | Sample data to test your configuration. | Recommended

Example

The following example adds two new top-level fields with static values: company.team stores the team name (sales-team), and company.branch stores the branch location (New York Sales Office).

The processor filters records that match the following condition:

filter contains(audit.identity, "@sales.example.com") AND geo.city.name == "New York"

The processor configuration specifies the fields to add and the static values.

Name | Value
company.team | sales-team
company.branch | New York Sales Office
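
For comparison, a DQL processor with the same matching condition could add both fields in a single statement:

fieldsAdd company.team = "sales-team", company.branch = "New York Sales Office"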

Unprocessed

{
  "timestamp": "2026-03-18T08:30:00Z",
  "geo.city.name": "New York",
  "audit.identity": "alex.example@sales.example.com",
  "content": "Employee accessed the application from New York.",
  "logLevel": "INFO",
  "application.name": "WebApp"
}

Processed

{
  "timestamp": "2026-03-18T08:30:00Z",
  "geo.city.name": "New York",
  "audit.identity": "alex.example@sales.example.com",
  "content": "Employee accessed the application from New York.",
  "logLevel": "INFO",
  "application.name": "WebApp",
  "company.team": "sales-team",
  "company.branch": "New York Sales Office"
}

Remove fields

The Remove fields processor removes fields from the record. This processor doesn't leverage DQL processing statements.

Parameters

The following table describes the parameters available in the Remove fields processor.

Parameter | Description | Required
Name | Name of the processor. | Required
Matching condition | DQL statement that identifies the records the processor applies to. | Required
Remove fields | Name of the field to remove. | Required
Sample data | Sample data to test your configuration. | Recommended

Example

The following example removes the debugging-temp field. The processor defines the name of the field to remove from the record.

The processor filters records that match the following condition:

filter exists(`debugging-temp`)

Unprocessed

{
  "timestamp": "2023-01-01T10:00:00Z",
  "content": "Request completed",
  "debugging-temp": "Response time: 120ms"
}

Processed

{
  "timestamp": "2023-01-01T10:00:00Z",
  "content": "Request completed"
}
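
For comparison, a DQL processor could achieve the same removal with a single statement; the backquotes escape the hyphen in the field name:

fieldsRemove `debugging-temp`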

Rename fields

The Rename fields processor changes field names. This processor doesn't leverage DQL processing statements.

Parameters

The following table describes the parameters available in the Rename fields processor.

Parameter | Description | Required
Name | Name of the processor. | Required
Matching condition | DQL statement that identifies the records the processor applies to. | Required
Rename fields | Name of the field to rename (old name) and the new name. | Required
Sample data | Sample data to test your configuration. | Recommended

Example

The following example renames the OpenTelemetry fields trace_id and span_id to Dynatrace Semantic Dictionary syntax. Spans ingested via OpenTelemetry endpoints are routed to the pipeline. The processor then filters records that match the following condition:

filter status.code == "ERROR"

The processor defines the name of the field to rename and the new name.

Old name | New name
trace_id | trace.id
span_id | span.id

Unprocessed

{
  "trace_id": "a1b2c3d4e5f6",
  "span_id": "1234abcd"
}

Processed

{
  "trace.id": "a1b2c3d4e5f6",
  "span.id": "1234abcd"
}
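
For comparison, a roughly equivalent DQL processor definition renames both fields in one statement:

fieldsRename trace.id = trace_id, span.id = span_id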

Drop records

The Drop records processor drops records that match the matching condition before they're processed further. Dropped records aren't retained. This processor doesn't leverage DQL processing statements.

To drop a record after it's processed, use the No storage assignment processor in the Storage stage.
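
For example, a matching condition such as the following (the loglevel field name is an assumption for illustration) drops every record at DEBUG level:

filter matchesValue(loglevel, "DEBUG")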

Parameters

The following table describes the parameters available in the Drop records processor.

Parameter | Description | Required
Name | Name of the processor. | Required
Matching condition | DQL statement that identifies the records the processor applies to. | Required
Sample data | Sample data to test your configuration. | Recommended

Technology bundle (Logs)

The Technology bundle applies pre-built parsers to logs from supported technologies. Use it to automatically enrich logs and improve structure and metadata for downstream analysis.

  • The matching condition and sample data of the technology bundle are automatically provided and can be edited after you select the technology.
  • The default parsers use predefined processing options, including DQL processing statements. Parser-level matching conditions and processing statements are view-only and can't be edited.
  • You can have multiple Technology bundle processors for the same technology.

Technology bundles are also automatically applied to ingest sources during log pre-processing.

Availability

The Technology bundle is available only for supported technologies within the OpenPipeline log configuration scope.

Supported technologies

The following list shows, for each ingest source, the technologies with predefined processors.

Amazon Data Firehose

  • AWS App Runner
  • AWS CloudTrail
  • Amazon Relational Database Service (RDS)
  • Amazon Simple Notification Service (SNS)
  • AWS Common
  • Amazon Aurora
  • Amazon API Gateway
  • AWS Lambda
  • Amazon Virtual Private Cloud Flow Default
  • AWS Transit Gateway
  • AWS WAF
  • Amazon CloudFront

Data Acquisition - AWS Data Firehose

  • AWS Lambda
  • AWS App Runner
  • Amazon Relational Database Service (RDS)
  • Amazon Aurora
  • Amazon Simple Notification Service (SNS)
  • Amazon API Gateway

Log ingestion API

  • Amazon API Gateway
  • Amazon Aurora
  • Amazon CloudFront
  • Amazon Virtual Private Cloud Flow Default
  • AWS App Runner
  • AWS CloudTrail
  • AWS Common
  • AWS Lambda
  • Amazon Relational Database Service (RDS)
  • Amazon Simple Notification Service (SNS)
  • AWS Transit Gateway
  • AWS WAF
  • Azure Services
  • Azure Entra ID Audit Logs

OneAgent

  • Elasticsearch
  • Cassandra
  • PostgreSQL
  • Redis
  • NodeJS
  • PHP
  • Java
  • Python
  • .NET
  • Ruby
  • Go
  • RabbitMQ
  • Apache Kafka
  • Nginx
  • HAProxy
  • Apache Tomcat
  • Apache HTTP
  • JBoss
  • Microsoft IIS
  • Syslog

OpenTelemetry

None

Parameters

The following table describes the parameters available in the Technology bundle processor.

Parameter | Description | Required
Name | Name of the processor. | Default
Matching condition | DQL statement that identifies the records the technology bundle applies to. Available as predefined or custom. | Required
Processors | Predefined list of parsers. Each parser contains a matching condition and a processing statement, available as view-only. | Default
Sample data | Editable sample data to test your configuration. | Default

Use cases

  • Transform records.
  • Mask sensitive information.
  • Edit fields: add, remove, or rename fields.
  • Drop unnecessary records.