Processing

Dynatrace OpenPipeline can reshape incoming data for better understanding, processing, and analysis. OpenPipeline processing is based on rules that you create and offers a flexible way of extracting value from raw records.

Key terms

Ingest sources

The source of ingestion for a data type, such as an API endpoint or OneAgent, that collects data from the provider into the Dynatrace Platform.

Routing

Assignment of data to a pipeline, based either on matching conditions (dynamic routing) or direct assignment (static routing).
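
For example, a dynamic route can select records for a custom pipeline with a DQL matching condition. A minimal sketch, assuming Kubernetes log records (the namespace value is hypothetical):

    // Route all records from the "payments" namespace to a team pipeline
    matchesValue(k8s.namespace.name, "payments")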

Pipeline

Once data is ingested and routed, OpenPipeline processing occurs in pipelines. Each pipeline contains a set of processing instructions (processors) that are executed in an ordered sequence of stages and define how you want Dynatrace to structure, separate, and store your data. After a record is processed, it's sent to storage and is available for further analysis.

By default, data types are processed in dedicated built-in pipelines. You can create custom pipelines to group processing and extraction by technology or team.

Log and business event processing pipelines

Log and business event processing pipeline conditions are included in the built-in OpenPipeline pipelines.

Record enrichment

Processing is based on the available records and doesn't take into account record enrichment from external services.

Use cases

  • Prepare, transform, and store data in Grail.
  • Grant access to specific records.

Best practices

OpenPipeline processing for logs supports DQL only. If you already use the log processing pipeline, ensure that your matchers are converted to DQL.
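
For example, a matcher written for the classic log processing pipeline and its DQL counterpart could look as follows. This is a sketch with hypothetical attribute values; verify the exact conversion for your own rules:

    // Classic log processing matcher
    log.source="/var/log/app.log" AND loglevel="ERROR"

    // Equivalent DQL matcher
    matchesValue(log.source, "/var/log/app.log") and loglevel == "ERROR"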

Stage

A stage is a phase in a pipeline sequence that focuses on a task, such as masking, filtering, processing, or extraction. Stages contain a predefined list of configurable processors, which define the task of the stage. All data types undergo the same stages, which are executed in a specific order.

Stages in a pipeline

The following table lists all stages available in a pipeline, in order of execution, specifying the processors available in each stage and how matching processors are executed.

| Stage | Description | Processors in the stage | Executed processors |
|-------|-------------|-------------------------|---------------------|
| Processing | Prepare data for analysis and storage by parsing values into fields, transforming the schema, and filtering the data records. Fields are edited and sensitive data is masked. | DQL, Add fields, Remove fields, Rename fields | All matches |
| Metric extraction | Extract metrics from the records that match the query. | Counter metric, Value metric | All matches |
| Data extraction (logs only) | Extract and resend data into another pipeline. | Davis event, Business event | All matches |
| Permissions | Apply a security context to the records that match the query. | Set dt.security_context | First match only |
| Storage | Assign records to the best-fit bucket. | Bucket assignment, No storage assignment | First match only |
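
The Executed processors column controls how matching processors run within a stage. The following sketch contrasts the two modes with hypothetical matchers in the Permissions stage, where only the first match executes:

    // Processor 1 matcher: sets dt.security_context to "payments-team"
    matchesValue(k8s.namespace.name, "payments")

    // Processor 2 matcher: sets dt.security_context to "platform-team"
    isNotNull(k8s.namespace.name)

A record with k8s.namespace.name set to "payments" matches both matchers, but only the first processor executes. In a stage marked All matches, such as Processing, every matching processor runs in sequence.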

Processor

A processor is a pre-formatted processing instruction that focuses either on modifying data (for example, renaming a field or adding a new one) or on extracting data (for example, creating an event from a log line or extracting metrics).

While the processor format is predefined, it contains a configurable matcher and processing definition.

  • The matcher defines the target of a processor via a DQL query. It narrows down the available data to the specific set you want to process.
  • The processing definition instructs Dynatrace on how to transform or modify the data filtered by the matcher.
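
For example, a processor in the Processing stage could pair a matcher with a processing definition as follows. This is a minimal sketch; the log source and field names are hypothetical:

    // Matcher: narrow down to nginx access logs
    matchesValue(log.source, "/var/log/nginx/access.log")

    // Processing definition: parse the client IP and tag the record
    parse content, "IPADDR:client_ip LD"
    | fieldsAdd app.technology = "nginx"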

The following table lists all available processors in a pipeline, in alphabetical order.

| Processor | Description |
|-----------|-------------|
| Add fields | Adds fields with a name and value. |
| Bucket assignment | Assigns records to a Grail bucket. |
| Business event | Extracts fields into a new record and sends it to the business event table. |
| Counter metric | Creates a metric that counts the records matching the query. |
| Davis event | Extracts fields into a new record and sends it to the events table. |
| DQL | Processes records with a subset of DQL. The output is limited to string, number, boolean, duration, and timestamp values, and arrays of those types. |
| No storage assignment | Skips storage assignment. |
| Remove fields | Removes fields from the record. |
| Rename fields | Changes the names of fields. |
| Set dt.security_context | Sets record-level access via dt.security_context, either by copying the value from a field or by setting it as a static string. |
| Value metric | Creates a metric that aggregates values from the records matching the query. |
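
As an end-to-end illustration, the following sketch expresses in DQL the kind of record transformations the field-manipulation processors perform. Field names and values are hypothetical, and each step would be configured as a separate processor in practice:

    // DQL processor: parse a user ID out of the log content
    parse content, "LD 'userId=' INT:user_id"

    // Add, rename, and remove fields
    | fieldsAdd dt.security_context = "checkout-team"
    | fieldsRename user.id = user_id
    | fieldsRemove raw_payload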