Migrate classic processing rules to OpenPipeline

  • Latest Dynatrace
  • Upgrade guide
  • 8-min read
  • Published Mar 31, 2026

This article explains how to manually migrate existing classic processing rules for logs and business events to OpenPipeline. It also covers permission management and routing, so that teams can start processing in OpenPipeline independently.

Why migrate?

  • Unified processing model: Manage processing for multiple signal types in OpenPipeline, including logs, business events, security events, spans, and more.
  • Scalable data handling: OpenPipeline handles high throughput at scale, supporting additional ingest sources and increased data volume.
  • Grail data flow: Processing is Grail-based only, providing a consistent approach from ingest through DQL processing to storage.
  • Pipeline groups: Enforce global processing while enabling teams to process their data independently and safely. For example, use pipeline groups for masking and permissions.
  • Granular ownership: With OpenPipeline scoped ownership and access to pipelines, teams can manage day‑to‑day processing, and administrators can retain control.

What is new?

  • Smartscape integration

  • Data forwarding

  • Cost and product allocation

  • Data extraction

  • Higher limits and flexibility

    The following table summarizes the key technical differences between processing logs via the log classic pipeline and via OpenPipeline.

    | Technical point | Log classic pipeline | OpenPipeline |
    | --- | --- | --- |
    | Data type support | String | String, Number, and Boolean |
    | Content field limit | 512 kB | 10 MB |
    | Field name case sensitivity | Case-insensitive | Case-sensitive¹ |
    | Connect log data to traces | Built-in rules | Automatic² |
    | Technology parsers | Built-in rules | Preset bundles with broader technology support |
    | Query language | LQL, DQL | DQL³ |
    | Metric dimension naming | No | Yes |
    | Metric-key log prefix | Mandatory | Optional |

    ¹ When you ingest logs via Log Monitoring API v2 - POST ingest logs, field names are automatically converted to lowercase after data is routed to the Classic pipeline.

    ² The enrichment is done automatically, without requiring any user interaction.
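Because OpenPipeline supports Number and Boolean data types, processed fields can be filtered natively in DQL without string conversion. A hypothetical sketch; the `amount` and `is_retry` fields are assumptions for illustration, not built-in attributes:

```dql
fetch bizevents
| filter amount > 100 and is_retry == false
| limit 20
```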

What will you do?

You'll identify the data sets processed by classic processing rules and the users responsible for this data. Based on this information, you'll create

  • New policies to grant users access to pipelines
  • Pipelines and routes for data processing in OpenPipeline

Finally, you can disable classic processing rules.

Before you begin

Prerequisites

  • Dynatrace version 1.295+

  • Dynatrace SaaS environment powered by Grail and AppEngine

  • DPS license with log or business event capabilities

  • Permissions:

    • openpipeline:configurations:write
    • settings:objects:admin

Prior knowledge

New concepts

Pipeline

Collection of processors executed in an ordered sequence of stages to structure, separate, and store data.

Processor

Pre-formatted processing instruction that focuses either on modifying or extracting data. It contains a configurable matcher and processing definition.

Pipeline group

Set of team-managed pipelines to which a shared configuration applies. The shared configuration can restrict or mandate processing, enabling centralized processing across multiple pipelines.

Routing

Directing data to a pipeline, either based on matching conditions (dynamic) or by explicit pipeline selection (static).
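For example, a dynamic route for one team's logs could use a matching condition like the following; the `log.source` value is a placeholder for your own data:

```dql
matchesValue(log.source, "/var/log/payment-service/*")
```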

How to migrate

To migrate classic processing rules to OpenPipeline

1. Identify the data sets currently processed by classic pipelines
  1. Go to Settings > Process and contextualize > OpenPipeline and select your configuration scope (Logs or Business events).
  2. Go to Pipelines > Classic pipeline to view your classic processing rules.
  3. Work on one data stream at a time to reduce risk and simplify validation. For each rule, perform the following actions:
    1. Review the matcher expressions and processor definitions. If the rule uses sample data, export it for reuse during testing.
    2. Identify downstream consumers such as dashboards, alerts, metrics, and automations to understand potential impact.
    3. Identify the users or teams responsible for that data.
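To get an overview of the data streams and their volume, you can run a DQL query in Notebooks. This is a sketch that groups by `log.source` as one possible stream identifier; adapt the grouping field to your environment:

```dql
fetch logs
| summarize record_count = count(), by:{log.source}
| sort record_count desc
```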
2. Create one or more pipelines per team
  1. In the OpenPipeline configuration scope, go to Pipelines > Pipeline to create a new custom pipeline.

  2. To convert the classic processing rules to OpenPipeline processors and stages,

    1. Choose which processors to adopt. Each processor has its own configuration.

      While the DQL processor can replace most classic use cases, prefer specialized processors where available, as they provide a more efficient approach to processing.

    2. Define the processor.

      • You can reuse the Matcher and the Sample data from the classic processing rule.
      • Convert the classic processing rule statement.
  3. Make sure to test configurations using sample data.

  4. Once you're satisfied with the result, select Save.

Your new pipeline is added to the table. The pipeline will take effect only once data is routed to it.
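As a hypothetical illustration of converting a classic rule statement (step 2 above): a classic processing rule that extracts a user name from the content field, such as `PARSE(content, "LD 'user=' WORD:user")`, translates to an equivalent statement in a DQL processor. The pattern and field names are examples only:

```dql
parse content, "LD 'user=' WORD:user"
```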

3. Configure routing to forward matching data to the new pipeline
  1. Go to Dynamic routing > Dynamic route to create a new route.
  2. Enter a matching condition for the route and choose the target pipeline.
  3. Select Add. The new route is added to the table and set to active by default.
  4. Position the route according to its priority. Route order matters: the first route whose condition matches is applied.
  5. Optional: You can deactivate the route before saving your changes to the table. For example, you can add a new deactivated route and activate it only after the new pipeline configuration is complete and validated.
  6. Select Save.

When the route is set to active, data that matches the condition is routed to the pipeline you created, instead of to the classic pipeline, and is processed accordingly. You can verify processing results in Notebooks.

Keep classic rules enabled until all data is reliably routed and processed by OpenPipeline.
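One way to verify the new processing in Notebooks is to query for a field that only the new processor adds. For example, if your processor extracts a hypothetical `user` field:

```dql
fetch logs
| filter isNotNull(user)
| sort timestamp desc
| limit 20
```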

4. Create policies that grant scoped access to pipelines
  1. Go to Account Management > Identity and Access Management.
  2. To grant users access to pipelines, create new policies with settings:objects:read and settings:objects:write permissions scoped to OpenPipeline schemas for log and business event pipelines.

You can start simple and create one policy per configuration scope (logs or business events).

Example:

ALLOW settings:objects:read, settings:objects:write WHERE settings:schemaId IN ("builtin:openpipeline.user.logs.pipelines", "builtin:openpipeline.business.events.pipelines")

Platform administrators typically retain settings:objects:admin, which grants access to all schemas, including routing.

For more information on OpenPipeline schemas, see the Settings API for each configuration scope:

  • Routing (builtin:openpipeline.<configuration.scope>.routing)
  • Pipelines (builtin:openpipeline.<configuration.scope>.pipelines)
  • Ingest sources (builtin:openpipeline.<configuration.scope>.ingest-sources)
5. Disable the corresponding classic processing rules

After validating results, you can disable the classic processing rules that you migrated to OpenPipeline.

  1. Go to Settings > Process and contextualize > OpenPipeline and select your configuration scope (Logs or Business events).
  2. Go to Pipelines > Classic pipeline to view your classic processing rules.
  3. Turn off the classic processing rule.

Learn more

FAQ

Can classic pipelines and OpenPipeline run side by side?

Yes.

Data is processed according to the first matching route. As long as classic processing rules are in place in your environment, the classic pipeline is represented in OpenPipeline and serves as the default processing mechanism. When you create new pipelines and associated routes, position each new route above the default route so that OpenPipeline processes the data accordingly. Data that doesn't match any new route condition is still sent by the default route to the classic pipeline.

Can I enforce consistent processing?

Yes.

  • You can create a route that matches all data and position it above the other routes. Because the first matching route wins, OpenPipeline then processes all data through the specified pipeline.
  • You can set up pipeline groups to enforce and restrict processing, for example, for bucket assignment and permissions, while still allowing teams to manage their own processing logic.
Related tags
Dynatrace Platform