With OpenPipeline, you can ingest data into the Dynatrace platform from a wide variety of formats and providers through ingest sources. Data is then routed to pipelines for processing and stored in Grail buckets.

A pipeline is a collection of processing instructions to structure, separate, and store data.
Configuration scopes, such as logs and events, provide observability insights into the health, performance, and behavior of your system, enabling teams to detect, diagnose, and resolve problems. Each configuration scope offers a different perspective because of its unique characteristics.
OpenPipeline provides a unified solution for configuring ingestion and processing while preserving flexibility for the specifics of each configuration scope.
The following table lists configuration scopes, summarizing availability in OpenPipeline.
| Configuration scope | Availability |
|---|---|
| Business events | Supported |
| Events (generic events) | Supported |
| Events - Davis events | Supported |
| Events - Davis problems | Supported |
| Events - SDLC events | Supported |
| Events - Security events (legacy) | Will be deprecated |
| Logs | Supported |
| Metrics | Supported |
| Security events (new) | Supported |
| Spans | Supported |
| System events | Limited support¹ |
| Topology | Planned |
| User events | Supported |
| User sessions | Supported |
¹ System events supported by OpenPipeline are limited to App Lifecycle Notifications (`event.kind == "AUDIT_EVENT" AND event.provider == "APP_REGISTRY"`), Workflow Execution events (`event.kind == "WORKFLOW_EVENT" AND event.provider == "AUTOMATION_ENGINE"`), and ECC self-monitoring events (`event.kind == "EXTENSIONS_EVENT"`).
Data reaches the Dynatrace platform via different ingest sources, such as API endpoints, OneAgent, and extensions, which collect data from data providers. In OpenPipeline, ingest sources are defined by a name and a path (`dt.openpipeline.source`).
Once records reach your Dynatrace SaaS environment via ingest sources, you can route them to a pipeline.
To learn the ingest sources available in OpenPipeline, see Ingest sources in OpenPipeline.
You can leverage:
- Built-in ingest sources
- Custom ingest sources
Custom ingest sources are available for events, excluding Davis events and Davis problems. They support pre-processing and static routing.
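To check which ingest source a record arrived through, you can query the `dt.openpipeline.source` path described above. The DQL sketch below assumes this field is stamped on stored log records; the table and time range are only illustrative.

```
// Count log records ingested in the last 24 hours, grouped by ingest source path.
// Assumes dt.openpipeline.source is present on stored records (illustrative query).
fetch logs, from: now() - 24h
| summarize records = count(), by: { dt.openpipeline.source }
| sort records desc
```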
Pre-processing is optional data processing that occurs after ingestion and before routing. By setting up pre-processing, you can transform raw data into structured formats as soon as it reaches your Dynatrace SaaS environment. Pre-processed data is then routed to a pipeline and is available for further processing before storage. Note that pre-processing is available only for custom ingest sources.
Set up pre-processing to avoid creating complex matching conditions based on provider-specific data formats. This will help you streamline maintenance for routing and processing, for example, when you start ingesting data from a new provider.
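For example, a pre-processing rule could parse a provider-specific JSON payload into structured fields before routing. The matcher and DQL snippet below are a minimal sketch: the source path, field names, and payload structure are assumptions, not a built-in configuration.

```
// Matcher: apply the rule only to records from a hypothetical custom ingest source
matchesValue(dt.openpipeline.source, "/platform/ingest/custom/events/webshop")

// Processor (DQL): parse the raw JSON payload into a structured field and flatten it
parse content, "JSON:payload"
| fieldsFlatten payload
```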
After data is ingested (and optionally pre-processed), it's routed to pipelines. The route order is relevant: the position in the list establishes the order of execution. If no route matches the record, the record is routed via the Default route.
Routing is defined according to the following:
- Configuration scope: Pipelines are specific to a configuration scope. Different configuration scopes are routed to different pipelines.
- Ingest source: You can configure routing for each ingest source. Multiple ingest sources of the same configuration scope can be routed to the same pipeline.
- Routing option:
  - Dynamic routing: Data is routed based on a matching condition. The matching condition is a DQL query that defines the data set you want to route (see the example after this list).
  - Static routing: Data is routed to a specific pipeline, which remains fixed unless manually updated. Static routing is available only for custom ingest sources.

If a record matches a dynamic route's condition but you've already configured static routing for its custom ingest source, the match is skipped and the record is routed directly to the pipeline you specified.
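For instance, a dynamic route could capture all records coming from a particular Kubernetes namespace. The matching condition below is a sketch; the fields and value are illustrative rather than a recommended configuration.

```
// Dynamic routing matching condition (DQL): select records from an assumed namespace
matchesValue(k8s.namespace.name, "payments") and isNotNull(dt.entity.kubernetes_cluster)
```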
OpenPipeline processing occurs in pipelines containing instructions on how to structure, separate, and store your data. To learn more, see Processing in OpenPipeline.
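As an illustration of what such processing instructions can look like, the DQL snippet below enriches matched records and drops a field before storage; all field names and values are hypothetical.

```
// Example processing steps (DQL), with hypothetical field names:
// add an enrichment field, then drop a payload field before storage.
fieldsAdd app.team = "platform-observability"
| fieldsRemove internal.debug_payload
```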
The Dynatrace Grail database provides a single, unified storage solution for all your configuration scopes. OpenPipeline's target storage is Grail buckets. You can leverage built-in buckets and, if available for the configuration scope, create new buckets with custom retention periods. Each bucket is assigned to a DQL database table. Assign permissions to user groups or individual users to give them access to specific buckets and tables.
By default, OpenPipeline routes data into a built-in pipeline whose target storage is the built-in Grail bucket of the configuration scope. You can configure the storage assignment to store records in a different bucket.
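To verify where records ended up after storage assignment, you can filter on the bucket field in Grail. The query below is a sketch; the bucket name is a placeholder for one of your own buckets.

```
// Count log records stored in an assumed custom bucket over the last 24 hours
fetch logs, from: now() - 24h
| filter dt.system.bucket == "custom_app_logs"
| summarize records = count()
```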
Storage and retention for system events are not configurable.