This article explains how you can manually migrate existing classic processing rules for logs and business events to OpenPipeline. It considers permission management and routing so that teams can get started with processing in OpenPipeline independently.
The following table summarizes the key technical differences between processing logs via the log classic pipeline and via OpenPipeline.
| Technical point | Log classic pipeline | OpenPipeline |
|---|---|---|
| Data type support | | |
| Content field limit | 512 kB | 10 MB |
| Field name case sensitivity | Case-insensitive | Case-sensitive¹ |
| Connect log data to traces | Built-in rules | Automatic² |
| Technology parsers | Built-in rules | Preset bundles with broader technology support |
| Query language | LQL, DQL | DQL³ |
| Metric dimension naming | No | Yes |
| Metric key | Mandatory | Optional |
¹ When you ingest logs via Log Monitoring API v2 - POST ingest logs, field names are automatically converted to lowercase after data is routed to the classic pipeline.
² The enrichment is done automatically, without requiring any user interaction.
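The practical consequence is that field-name casing can differ between the two pipelines. The following is an illustrative Python sketch (not a Dynatrace API) of the classic pipeline's normalization behavior:

```python
# Illustrative only: the classic pipeline lowercases field names on ingest,
# so "TraceId" and "traceid" end up as the same field.
def normalize_classic(fields: dict) -> dict:
    return {key.lower(): value for key, value in fields.items()}

print(normalize_classic({"Content": "error", "TraceId": "abc123"}))
# {'content': 'error', 'traceid': 'abc123'}
```

In OpenPipeline, by contrast, `Content` and `content` remain distinct fields, so matchers and processors written against lowercased names may need to be adjusted during migration.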
You'll identify the data sets processed by classic processing rules and the users responsible for this data. Based on this information, you'll create dedicated pipelines and routes in OpenPipeline.
Finally, you can disable classic processing rules.
- Dynatrace version 1.295+
- Dynatrace SaaS environment powered by Grail and AppEngine
- DPS license with log or business event capabilities
- Permissions: openpipeline:configurations:write, settings:objects:admin

Pipeline: Collection of processors executed in an ordered sequence of stages to structure, separate, and store data.

Processor: Pre-formatted processing instruction that focuses either on modifying or extracting data. It contains a configurable matcher and processing definition.

Pipeline group: Set of team-managed pipelines to which a shared configuration applies. The shared configuration can restrict or mandate processing, enabling centralized processing across multiple pipelines.

Routing: Directing data to a pipeline, either based on matching conditions (dynamic) or by explicit pipeline selection (static).
To migrate classic processing rules to OpenPipeline

1. In the OpenPipeline configuration scope, go to Pipelines > Pipeline to create a new custom pipeline.
2. Convert the classic processing rules to OpenPipeline processors and stages:
   1. Choose which processors to adopt. Each processor has its own configuration. While the DQL processor can replace most classic use cases, prefer specialized processors where available, as they provide a more efficient approach to processing.
   2. Define the processor.
   3. Test your configuration with sample data.
3. Once you're satisfied with the result, select Save.

Your new pipeline is added to the table. The pipeline takes effect only once data is routed to it.
When the route is set to active, data that matches the condition is routed to the pipeline you created and processed accordingly, instead of going through the classic pipeline. You can verify the processing results in Notebooks.
Keep classic rules enabled until all data is reliably routed and processed by OpenPipeline.
settings:objects:read and settings:objects:write permissions scoped to OpenPipeline schemas for log and business event pipelines. You can start simple and create one policy per configuration scope (logs or business events).
Example:

```text
ALLOW settings:objects:write WHERE settings:schemaId IN ("builtin:openpipeline.user.logs.pipelines", "builtin:openpipeline.business.events.pipelines")
```
Platform administrators typically retain settings:objects:admin, which grants access to all schemas, including routing.
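A matching read-only policy can follow the same pattern. The following is an illustrative sketch; the schema IDs mirror the write example above and should be adjusted to your configuration scopes:

```text
ALLOW settings:objects:read WHERE settings:schemaId IN ("builtin:openpipeline.user.logs.pipelines", "builtin:openpipeline.business.events.pipelines")
```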
For more information on OpenPipeline schemas, see the Settings API for each configuration scope:

- Routing (builtin:openpipeline.<configuration.scope>.routing)
- Pipelines (builtin:openpipeline.<configuration.scope>.pipelines)
- Ingest sources (builtin:openpipeline.<configuration.scope>.ingest-sources)

After validating results, you can disable the classic processing rules that you migrated to OpenPipeline.
Data flow in OpenPipeline: The end‑to‑end path data follows from ingest through storage.
Processing in OpenPipeline: Pipelines, stages, and processors used to transform data.
Owner-based access control in OpenPipeline: Policies and scopes that manage pipeline access level and ownership.
OpenPipeline pipeline groups: Group setup for shared and enforced pipeline stages.
Configure a processing pipeline: Step‑by‑step pipeline configuration guidance.
OpenPipeline processing examples: Examples of OpenPipeline processor configuration that can be compared with the log processing examples to clarify conceptual differences.
Example — Rename attributes
Classic pipeline
```text
USING(INOUT to_be_renamed, content) | FIELDS_RENAME(better_name: to_be_renamed)
```
OpenPipeline
Rename fields processor: Enter the field name that you want to rename and the new name.
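The same rename can alternatively be expressed with the DQL processor. The following is a hedged sketch assuming the DQL fieldsRename command; verify the exact syntax against your environment:

```dql
fieldsRename better_name = to_be_renamed
```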

Yes.
Data is processed according to the first matching route. As long as classic processing rules are in place in your environment, the classic pipeline is represented in OpenPipeline as the default route and remains the default processing mechanism. When you create new pipelines and associated routes, position the new route above the default route so that OpenPipeline processes the data accordingly. If some data doesn't match the new route condition, it's still sent by the default route to the classic pipeline.
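The first-match behavior can be sketched as follows (illustrative Python, not a Dynatrace API; the route condition and pipeline names are made up):

```python
# Illustrative only: OpenPipeline evaluates routes top to bottom and uses the
# first route whose condition matches; unmatched data falls through to the
# default route (the classic pipeline, while classic rules are still in place).
def route(record: dict, routes: list, default: str = "classic-pipeline") -> str:
    for condition, pipeline in routes:
        if condition(record):
            return pipeline
    return default

routes = [
    # Custom route, positioned above the default route.
    (lambda r: r.get("log.source") == "my-app", "my-custom-pipeline"),
]

print(route({"log.source": "my-app"}, routes))     # matches the custom route
print(route({"log.source": "other-app"}, routes))  # falls through to the default
```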
Yes.