Configure a processing pipeline

Manage the data-type configuration of ingestion endpoints, dynamic routes, and pipelines via the OpenPipeline app (new).

Who this is for

This article is intended for administrators controlling ingestion configuration, data storage and enrichment, and transformation policies.

What you will learn

In this article, you'll learn how to create a new configuration for records of a data type via the OpenPipeline app (new), from ingestion to Grail storage.

Before you begin

  • Dynatrace SaaS environment powered by Grail and AppEngine.
  • You have both openpipeline:configurations:write and openpipeline:configurations:read permissions. To learn how to set up the permissions, see Permissions in Grail.
  • If you already use the log processing pipeline, ensure that the matching conditions are converted to DQL (see the example after this list).
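
For reference, OpenPipeline matching conditions are written as DQL predicates. A minimal sketch of a converted condition, assuming illustrative field names such as loglevel and content:

  matchesValue(loglevel, "ERROR") and matchesPhrase(content, "connection timeout")
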
Concepts

Ingest sources

Source of ingestion for a data type, collecting data from the provider into the Dynatrace Platform, for example, API endpoints or OneAgent.

Routing

Assignment of data to a pipeline, either based on matching conditions (dynamic routing) or configured directly on the ingest source (static routing).

Pipeline

Collection of processing instructions to structure, separate, and store data.

Stage

Phase in a pipeline sequence, focused on a task and defined by processors.

Processor

Pre-formatted processing instruction.

Steps

Step 1 Configure a pipeline

OpenPipeline stores data in Grail buckets. If you need a bucket with specific permissions or custom data retention, create a custom bucket.
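
To verify where records land after ingestion, you can query the target bucket directly, for example in a Notebook. A minimal DQL sketch, assuming the data type is logs and a hypothetical custom bucket named custom_app_logs (the bucket assignment is carried in the dt.system.bucket field):

  fetch logs
  | filter dt.system.bucket == "custom_app_logs"
  | limit 10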

Pipelines lists the built-in and custom pipelines for a data type in your environment. To configure your own pipelines and group processing and extraction by technology or team:

  1. Go to the OpenPipeline app (new) and select a data type.
  2. Select Pipelines > Add Pipeline to add a new pipeline.
  3. Enter the pipeline name (required).
  4. For each stage, configure the processors.
    1. Select Add Processor and choose a processor.
    2. For each processor, specify the name and matching condition (see the example after these steps). Additional required fields vary based on the processor and are specified in the user interface.
  5. Select Save.

Successfully configured pipelines are displayed in the pipelines list. Select More actions > Edit to modify a custom pipeline.
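
The matching condition of a processor is itself a DQL predicate that limits which records the processor touches. A minimal sketch for a processor that should apply only to records from a hypothetical Kubernetes namespace (the field names are illustrative):

  matchesValue(k8s.namespace.name, "payments") and isNotNull(content)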

Step 2 (optional) Configure custom ingest sources

Ingest sources lists the built-in and custom ingest sources for a data type in your environment. To configure your own ingest sources, such as API endpoints for a service, technology, or team:

  1. Go to the OpenPipeline app (new) and select a data type.
  2. Select Ingest sources > Add Source.
  3. Enter a source name and the path URI, and choose a routing option.
    • To route data statically to a pipeline, specify the pipeline name, then proceed with the Ingest sources setup.
    • To route data dynamically, complete the Ingest sources setup and then specify the matching conditions.
  4. (optional) To configure storage for an ingest source, expand Advanced options and choose an existing Grail bucket.
  5. (optional) To pre-process data before it's routed to a pipeline, go to Pre-processing and set the processors. For each processor, specify a name and a matching condition (see the example after these steps). Additional required fields vary based on the processor and are specified in the user interface.
  6. Select Save.

Successfully configured custom ingest sources are displayed in the ingest sources list. Select More actions > Edit to modify a custom ingest source.
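
Pre-processing matching conditions (step 5) use the same DQL predicate form. A minimal sketch that scopes a pre-processing processor to records that have content and did not originate from a hypothetical debug source:

  isNotNull(content) and not matchesValue(log.source, "*debug*")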

Step 3 Dynamically route data to a pipeline

You can route incoming data to pipelines based on any field value or the ingest source. To route data dynamically:

  1. Go to the OpenPipeline app (new) and select a data type.
  2. Select Dynamic routing > Add Dynamic route.
  3. Enter a matching condition (see the example below).
  4. Select Save.

Successfully configured routes are displayed in the dynamic routing list. Select More actions > Edit to modify a routing configuration.
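
A dynamic route's matching condition is a DQL predicate evaluated against each incoming record. A minimal sketch that routes production records to a dedicated pipeline, assuming hypothetical k8s.namespace.name and host.name values:

  matchesValue(k8s.namespace.name, "prod-*") or matchesValue(host.name, "prod-*")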

Conclusion

You have configured ingest sources, routing, and processing for records of a data type via the OpenPipeline app (new). Once you start ingesting, your data is processed as configured, stored in a Grail bucket, and available for analysis via Grail capabilities.
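
As a quick check that records are routed and processed as intended, you can summarize recent data by source and pipeline in a Notebook. A minimal DQL sketch, assuming the data type is logs and that ingested records carry the dt.openpipeline.source and dt.openpipeline.pipelines attributes (treat these field names as assumptions and adjust them to your environment):

  fetch logs, from: now() - 30m
  | summarize count(), by: {dt.openpipeline.source, dt.openpipeline.pipelines}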