Configure a processing pipeline
Manage data-type configuration of ingestion endpoints, dynamic routes, and pipelines via OpenPipeline.
Who this is for
This article is intended for administrators controlling ingestion configuration, data storage and enrichment, and transformation policies.
What you will learn
In this article, you'll learn how to create a new configuration for data records via OpenPipeline, from ingestion configuration to Grail storage.
Before you begin
- Dynatrace SaaS environment powered by Grail and AppEngine.
- You have both the openpipeline:configurations:write and openpipeline:configurations:read permissions. To learn how to set up the permissions, see Permissions in Grail.
- If you already use the log processing pipeline, ensure the matching conditions are converted to DQL, as shown in the example below.
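If you're coming from the log processing pipeline, matching conditions in OpenPipeline are written as DQL conditions. The following is a minimal sketch of such a condition; the log.source path and the loglevel value are illustrative assumptions, not values from this article.
```
// Hypothetical DQL matching condition: match only error records from one log file
matchesValue(log.source, "/var/log/nginx/access.log") and loglevel == "ERROR"
```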
Concepts
- Ingest sources
Source of ingestion for a data type, collecting data from the provider into the Dynatrace Platform, for example, API endpoints or OneAgent.
- Routing
Assignment of data to a pipeline, based either on matching conditions (dynamic routing) or on a direct configuration (static routing).
- Pipeline
Collection of processing instructions to structure, separate, and store data.
- Stage
Phase in a pipeline sequence, focused on a task and defined by processors.
- Processor
Pre-formatted processing instruction.
Steps
Configure a pipeline
OpenPipeline stores data in Grail buckets. If you need a bucket with specific permissions or custom data retention, create a custom bucket.
Pipelines lists the built-in and custom pipelines for a data type in your environment. To configure your own pipelines, for example to group processing and extraction by technology or team:
- Go to OpenPipeline and select a data type.
- Select Pipelines > Pipeline to add a new pipeline.
- required Enter the pipeline name.
- For each stage, configure the processors.
- Select Processor and choose a processor.
- For each processor, specify the name and matching condition. Additional required fields vary based on the processor and are specified in the user interface.
- Select Save.
Successfully configured pipelines are displayed in the pipelines list. Select > Edit to modify a custom pipeline.
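To give a sense of what a processor can do, here is a minimal sketch of the kind of DQL snippet a DQL processor might contain, assuming the ingested records have a content field that starts with a client IP address; the parse pattern, field names, and the added attribute are hypothetical.
```
// Hypothetical DQL processor snippet: extract a field and enrich the record
parse content, "IPADDR:client_ip"
| fieldsAdd processed.by = "custom-pipeline"
```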
Configure custom ingest sources
Ingest sources lists the built-in and custom ingest sources for a data type in your environment. To configure your own ingest sources, such as API endpoints for a service, technology, or team:
- Go to OpenPipeline and select a data type.
- Select Ingest sources > Source.
- Enter a source name and the path URI, and choose a routing option.
- To route data statically to a pipeline, specify the pipeline name, then proceed with the Ingest sources setup.
- To route data dynamically, complete the Ingest sources setup and then specify the matching conditions.
- optional To configure storage for an ingest source, expand Advanced options and choose an existing Grail bucket.
- optional To pre-process data before it's routed to a pipeline, go to Pre-processing and set the processors. For each processor, specify a name and a matching condition. Additional required fields vary based on the processor and are specified in the user interface.
- Select Save.
Successfully configured custom ingest sources are displayed in the ingest sources list. Select > Edit to modify a custom ingest source.
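After records start arriving through the new endpoint, you can check them with a DQL query, for example in a notebook. This is a sketch assuming a logs data type; the dt.openpipeline.source field (set by OpenPipeline on processed records) and the endpoint path are assumptions to replace with your own values.
```
// Verify that records from the custom endpoint are arriving (path is a placeholder)
fetch logs, from: now() - 30m
| filter dt.openpipeline.source == "/platform/ingest/custom/logs/my-team"
| limit 20
```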
Dynamically route data to a pipeline
You can route incoming data to pipelines based on any field value or on the ingest source. To route data dynamically:
- Go to OpenPipeline and select a data type.
- Select Dynamic routing > Dynamic route.
- Enter a matching condition.
- Select Save.
Successfully configured routes are displayed in the dynamic routing list. Select > Edit to modify a routing configuration.
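A dynamic route's matching condition is a DQL condition evaluated against each incoming record. For example, a route that sends records from a hypothetical payments Kubernetes namespace to a dedicated pipeline could use a condition like this; the field value is an assumption for illustration.
```
// Hypothetical dynamic route condition: send the payments namespace to its own pipeline
matchesValue(k8s.namespace.name, "payments")
```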
Conclusion
You have configured ingest sources, routing, and processing for records of a data type via OpenPipeline. Once you start ingesting, your data is processed as configured, stored in a Grail bucket, and available for analysis via Grail capabilities.
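As a final check, you can query Grail for the bucket you configured. This is a minimal sketch assuming a logs data type and a custom bucket named custom_logs_ops; dt.system.bucket holds the bucket a record was stored in.
```
// Count recent records stored in the custom bucket (bucket name is a placeholder)
fetch logs, from: now() - 1h
| filter dt.system.bucket == "custom_logs_ops"
| summarize record_count = count()
```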