Dynatrace OpenPipeline can reshape incoming data for better understanding, processing, and analysis. OpenPipeline processing is based on rules that you create and offers a flexible way of extracting value from raw records.
- Ingestion source: the source of ingestion for a configuration scope, collecting data from the provider into the Dynatrace Platform, for example, API endpoints or OneAgent.
- Routing: the assignment of data to a pipeline, based either on matching conditions (dynamic) or on direct assignment (static).
Once data is ingested and routed, OpenPipeline processing occurs in pipelines. Each pipeline contains a set of processing instructions (processors) that are executed in an ordered sequence of stages and define how you want Dynatrace to structure, separate, and store your data. After a record is processed, it's sent to storage and is available for further analysis.
By default, configuration scopes are processed in dedicated built-in pipelines. You can create new custom pipelines to group processing and extraction by technology or team.
Log and business event processing pipeline conditions are included in the built-in OpenPipeline pipelines.
Processing is based on available records, and doesn't take into account record enrichment from external services.
OpenPipeline processing for logs supports DQL only. If you already use the log processing pipeline, make sure your matchers are converted to DQL.
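For example, a matcher written for the classic log processing pipeline needs to be expressed as a DQL condition in OpenPipeline. A minimal sketch, assuming an illustrative log source attribute (the values are hypothetical, not taken from the documentation):

```
// Classic log processing matcher (illustrative):
//   log.source="/var/log/app.log"
// Equivalent DQL matcher in OpenPipeline:
matchesValue(log.source, "/var/log/app.log")
```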
A stage is a phase in a pipeline sequence that focuses on a task, such as masking, filtering, processing, or extraction. Stages contain a predefined list of configurable processors, which define the task of the stage.
The following table lists all stages in their pipeline order of execution and specifies which processors are available and executed in each stage for the supported configuration scopes.
Specific fields are excluded from matching and processing, or are restricted. To learn more, see Limits specific to fields.
Stage | Description | Processors in the stage | Executed processors | Supported data types |
---|---|---|---|---|
Processing | Prepare data for analysis and storage by parsing values into fields, transforming the schema, and filtering the data records. Fields are edited, and sensitive data is masked (see the sketch below the table). | | All matches | Logs, Events—Generic, Events—Davis events, Events—Davis, Events—SDLC events, Events—Security events (legacy), Security events (new)1, Business events, Spans1, Metrics |
Metric extraction | Extract metrics from the records that match the query. | | All matches | Logs, Events—Generic, Events—SDLC events, Events—Security events (legacy), Security events (new)1, Business events, System events, User events, User sessions |
 | | | All matches | Spans |
Data extraction | Extract a new record from a pipeline and re-ingest it as a different data type into another pipeline. | | All matches | Logs, Events—Generic, Events—SDLC events, Events—Security events (legacy), Security events (new)1, Business events, System events, Spans1 |
Davis | Extract a new record from a pipeline and re-ingest it as a Davis event into another pipeline. | | All matches | Logs, Events—Generic, Events—SDLC events, Events—Security events (legacy), Security events (new)1, Business events, System events, Spans1 |
Cost allocation | Advanced option to assign cost center usage to specific records that match a query. Review the Cost Allocation documentation when choosing the best approach for your environment. | | First match only | Logs, Events—Generic, Events—SDLC events, Events—Security events (legacy), Security events (new)1, Business events, System events, Spans1 |
Product allocation | Advanced option to assign product or application usage to specific records that match a query. Review the Cost Allocation documentation when choosing the best approach for your environment. | | First match only | Logs, Events—Generic, Events—SDLC events, Events—Security events (legacy), Security events (new)1, Business events, System events, Spans1 |
Permissions | Apply security context to the records that match the query. | | First match only | Logs, Events—Generic, Events—Davis events, Events—Davis, Events—SDLC events, Events—Security events (legacy), Security events (new)1, Business events, Spans1, Metrics, User events, User sessions |
Storage | Assign records to the best-fit bucket. | | First match only | Logs, Events—Generic, Events—Davis events, Events—Davis, Events—SDLC events, Events—Security events (legacy), Security events (new)1, Business events, Spans1 |
The data remains in its original, structured form. This is important for detailed analysis and troubleshooting, as it ensures that no information is lost or altered.
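To make the Processing stage more concrete, here is a hedged sketch of a DQL processing statement that parses a client IP address out of the log content and then masks it. The DPL pattern, the field names, and the use of replacePattern for masking are illustrative assumptions, not a prescribed recipe:

```
// Parse the client IP from the raw log line into a dedicated field (hypothetical log format)
parse content, "LD 'client=' IPADDR:client.ip"
// Mask any IP addresses that remain in the content field
| fieldsAdd content = replacePattern(content, "ipaddr", "x.x.x.x")
```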
A processor is a pre-formatted processing instruction that focuses either on modifying data (for example, renaming a field or adding a new one) or on extracting data (for example, creating an event from a log line or extracting metrics).
While the processor format is predefined, it contains a configurable matcher and processing definition.
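For example, an Add fields processor pairs a DQL matcher with a processing definition. Expressed as the equivalent DQL, a sketch could look like this (the service and field names are hypothetical):

```
// Matcher: limit the processor to records from one service
matchesValue(service.name, "checkout")

// Processing definition: add an ownership field to each matched record
fieldsAdd owner.team = "payments"
```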
The following table lists all available processors in a pipeline, in alphabetical order.

Processor | Description |
---|---|
Add fields | Adds fields with a name and value. |
Bucket assignment | Assigns a Grail bucket. |
Business event | Extracts fields into a new record and sends it to the business event table. |
Counter metric | Returns the number of occurrences of a metric from the records that match the query. |
Davis event | Extracts fields into a new record and sends it to an event table. |
DPS Cost Allocation - Cost Center | Assigns cost center usage to a record via dt.cost.costcenter, by either copying the value from a field or setting it as a static string. |
DPS Cost Allocation - Product | Assigns product or application usage to a record via dt.cost.product, by either copying the value from a field or setting it as a static string. |
DQL | Processes a subset of DQL (see the sketch after this table). The output is formatted to string, number, bool, duration, timestamp, and the respective arrays of those types. |
Drop record | Drops a record. The record is not retained. |
No storage assignment | Skips storage assignment. The record is not retained. |
Remove fields | Removes fields from the record. |
Rename fields | Changes the name of fields. |
Sampling aware counter metric | OneAgent might apply sampling to trace data before it's processed, according to Adaptive Traffic Management for distributed tracing. This span-specific processor supports sampling awareness when returning the number of metric occurrences from the span records that match the query. Span aggregation and sampling awareness are configurable for all fields available in field extraction, except duration; the aggregation of duration is automatically detected and handled. |
Sampling aware value metric | OneAgent might apply sampling to trace data before it's processed, according to Adaptive Traffic Management for distributed tracing. This span-specific processor supports sampling awareness when returning the aggregated values of a metric from the span records that match the query. Span aggregation and sampling awareness are configurable for all fields available in field extraction, except duration; the aggregation of duration is automatically detected and handled. |
Set dt.security_context | Sets the proper record-level access via dt.security_context, by either copying it from a field, setting it as a static string, or setting it as a static array that allows multiple values. |
Technology bundle | Matches records for the selected technology and processes them according to predefined context-sensitive DQL processing statements. |
Value metric | Returns the aggregated values of a metric from the records that match the query. |