This article describes the Processing stage in OpenPipeline and the available processors. In the Processing stage, you can prepare data for analysis, extraction, forwarding, and persistence by parsing values into fields, transforming the schema, dropping data records, editing fields, and masking sensitive data.
Get familiar with the OpenPipeline concepts of stages and processors before you begin. To learn more, see Processing in OpenPipeline.
The processors in the stage are:

- DQL
- Add fields
- Remove fields
- Rename fields
- Drop records
- Technology bundle

Each processor applies to all records that match its matching condition.
The DQL processor runs a subset of DQL and formats the result as string, number, boolean, duration, or timestamp values, as well as arrays of those types.
This processor uses DQL processing statements, providing high flexibility. By combining DPL (Dynatrace Pattern Language) and DQL (Dynatrace Query Language) commands, you can use this processor for a variety of use cases, including parsing values into fields, transforming the schema, and masking sensitive data.
Use the dedicated processors where applicable (Add fields, Remove fields, and Rename fields, which correspond to the DQL commands fieldsAdd, fieldsRemove, and fieldsRename), as you can instantly understand their role in the processor list. They're also easier to use when you're less familiar with DQL. Reserve the DQL processor for complex transformations.
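For orientation, the following sketch shows the DQL commands that correspond to those dedicated processors, combined into a single processing statement. The field names and values are hypothetical:

fieldsAdd environment = "production" // equivalent of the Add fields processor
| fieldsRename trace.id = trace_id // equivalent of the Rename fields processor
| fieldsRemove temporary.flag // equivalent of the Remove fields processor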
The following table describes the parameters available in the DQL processor.
| Parameter | Description | Required |
|---|---|---|
| Name | Name of the processor. | Required |
| Matching condition | DQL statement that identifies the records the processor applies to. | Required |
| DQL processor definition | DQL statement to apply to the records. The maximum length is 8,192 UTF-8 encoded bytes. | Required |
| Sample data | Sample data to test your configuration. | Recommended |
The following DQL definition extracts the values of two temporary fields, total and failed, from the content field to calculate the failure percentage and store it in a new failed.percentage field. Finally, it removes the temporary fields used for the calculation from the record.
parse content, "LD 'total: ' INT:total '; failed: ' INT:failed"
| fieldsAdd failed.percentage = 100.0 * failed / total
| fieldsRemove total, failed
Unprocessed
{"content": "Lorem ipsum total: 1000; failed: 255"}
Processed
{"content": "Lorem ipsum total: 1000; failed: 255","failed.percentage": 25.5}
The following example parses the username out of an email address and uses the replaceString function to replace it with a static value.
parse content, "LD 'email: ' LD:user '@'"
| fieldsAdd content = replaceString(content, user, "xxx")
| fieldsRemove user
Unprocessed
{"content" : "Lorem ipsum client_ip: 192.168.1.12 email: alex.example@example.com card number: 4012888888881881 server_ip: 215.131.189.194 dolor sit amet"}
Processed
{"content": "Lorem ipsum client_ip: 192.168.1.12 email: xxx@example.com card number: 4012888888881881 server_ip: 215.131.189.194 dolor sit amet"}
The following DQL definition adds two new top-level fields, content.length and content.words, via the fieldsAdd command. The fields store the length and the word count of the content field, so their values adapt to each record.
The DQL definition instructs OpenPipeline to count the string length and the array size and to write the corresponding values to the dedicated fields.
fieldsAdd content.length = stringLength(content), content.words = arraySize(splitByPattern(content, "' '"))
Unprocessed
{"content": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis."}
Processed
{"content": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis.","content.length": "62","content.words": "9"}
The Add fields processor adds fields based on the specified field name and static value. This processor doesn't leverage DQL processing statements and doesn't support dynamic values.
To add fields with dynamic values, use the DQL processor instead.
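For example, a DQL processor definition like the following (reusing the stringLength function from the earlier example) adds a field whose value is derived from each record rather than set statically. The field name is illustrative:

fieldsAdd content.length = stringLength(content) // value computed per record from the content field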
The following table describes the parameters available in the Add fields processor.
| Parameter | Description | Required |
|---|---|---|
| Name | Name of the processor. | Required |
| Matching condition | DQL statement that identifies the records the processor applies to. | Required |
| Add fields | Name of the field to add and its static value. | Required |
| Sample data | Sample data to test your configuration. | Recommended |
The following example adds two new top-level fields: company.team stores the team name and company.branch stores the branch location, with the static values sales-team and New York Sales Office.
The processor filters records that match the following condition:
filter contains(audit.identity, "@sales.example.com") AND geo.city.name == "New York"
The processor configuration specifies the fields to add and the static values.
| Name | Value |
|---|---|
| company.team | sales-team |
| company.branch | New York Sales Office |
Unprocessed
{"timestamp": "2026-03-18T08:30:00Z","geo.city.name": "New York","audit.identity": "alex.example@sales.example.com","content": "Employee accessed the application from New York.","logLevel": "INFO","application.name": "WebApp"}
Processed
{"timestamp": "2026-03-18T08:30:00Z","geo.city.name": "New York","audit.identity": "alex.example@sales.example.com","content": "Employee accessed the application from New York.","logLevel": "INFO","application.name": "WebApp","company.team": "sales-team","company.branch": "New York Sales Office"}
The Remove fields processor removes fields from the record. This processor doesn't leverage DQL processing statements.
The following table describes the parameters available in the Remove fields processor.
| Parameter | Description | Required |
|---|---|---|
| Name | Name of the processor. | Required |
| Matching condition | DQL statement that identifies the records the processor applies to. | Required |
| Remove fields | Name of the field to remove. | Required |
| Sample data | Sample data to test your configuration. | Recommended |
The following example removes the debugging-temp field. The processor defines the name of the field to remove from the record.
The processor filters records that match the following condition:
filter exists(`debugging-temp`)
Unprocessed
{"timestamp": "2023-01-01T10:00:00Z","content": "Request completed","debugging-temp": "Response time: 120ms"}
Processed
{"timestamp": "2023-01-01T10:00:00Z","content": "Request completed"}
The Rename fields processor changes field names. This processor doesn't leverage DQL processing statements.
The following table describes the parameters available in the Rename fields processor.
| Parameter | Description | Required |
|---|---|---|
| Name | Name of the processor. | Required |
| Matching condition | DQL statement that identifies the records the processor applies to. | Required |
| Rename fields | Name of the field to rename (old name) and the new name. | Required |
| Sample data | Sample data to test your configuration. | Recommended |
The following example renames the OpenTelemetry fields trace_id and span_id to follow the Dynatrace Semantic Dictionary syntax. Spans ingested via OpenTelemetry endpoints are routed to the pipeline. The processor then filters records that match the following condition:
status.code == "ERROR"
The processor defines the name of the field to rename and the new name.
| Old name | New name |
|---|---|
| trace_id | trace.id |
| span_id | span.id |
Unprocessed
{"trace_id": "a1b2c3d4e5f6","span_id": "1234abcd"}
Processed
{"trace.id": "a1b2c3d4e5f6","span.id": "1234abcd",}
The Drop records processor drops records that match the matching condition before they're processed further. Dropped records aren't retained. This processor doesn't leverage DQL processing statements.
To drop a record after it's processed, use the No storage assignment processor in the Storage stage.
The following table describes the parameters available in the Drop records processor.
| Parameter | Description | Required |
|---|---|---|
| Name | Name of the processor. | Required |
| Matching condition | DQL statement that identifies the records the processor applies to. | Required |
| Sample data | Sample data to test your configuration. | Recommended |
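For example, a matching condition like the following drops all debug-level records before further processing. The logLevel field is illustrative, following the sample records shown earlier:

filter logLevel == "DEBUG"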
The Technology bundle applies pre-built parsers to logs from supported technologies. Use it to automatically enrich logs and improve structure and metadata for downstream analysis.
Technology bundles are also automatically applied to ingest sources during log pre-processing.
The Technology bundle is available only for supported technologies within the OpenPipeline log configuration scope.
| Ingest sources | Processor |
|---|---|
| Amazon Data Firehose | Technology bundle |
| Data Acquisition - AWS Data Firehose | Technology bundle |
| Log ingestion API | Technology bundle |
| OneAgent | Technology bundle |
| OpenTelemetry | None |
The following table describes the parameters available in the Technology bundle processor.
| Parameter | Description | Required |
|---|---|---|
| Name | Name of the processor. | Default |
| Matching condition | DQL statement that identifies the records the technology bundle applies to. Available as predefined or custom. | Required |
| Processors | Predefined list of parsers. Each parser is predefined and contains a matching condition and a processing statement available as view-only. | Default |
| Sample data | Editable sample data to test your configuration. | Default |
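For example, a custom matching condition could narrow the bundle to logs from a single technology. A minimal sketch, assuming records carry a log.source field that identifies the source technology:

filter contains(log.source, "nginx")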