Log processing with OpenPipeline

Dynatrace version 1.295+

OpenPipeline is the Dynatrace solution for processing log data from various sources. It enables data handling at any scale and in any format on the Dynatrace platform. Processing logs with OpenPipeline gives you a powerful way to manage, process, and analyze your logs: it combines traditional log processing capabilities with the advanced data handling features of OpenPipeline to deliver deeper insights into your log data.

Who this is for

This article is intended for administrators and app users.

What you will learn

In this article, you will learn to process logs for enhanced observability, including filtering, enrichment, and routing.

Before you begin

OpenPipeline provides the following advantages:

  • Contextual data transformation: OpenPipeline extracts data with context and transforms it into more efficient formats, for example, converting logs to business events.
  • Unified processing language: DQL (Dynatrace Query Language) is used as a processing language, offering one syntax for all Dynatrace features and more advanced options for processing.
  • Pipeline concepts: Log ingest traffic can be split into different pipelines with dedicated processing, data and metric extraction, permissions, and storage.
  • Additional processors: You can use additional processors such as fieldsAdd, fieldsRemove, and more; a short DQL sketch follows this list. For a complete list, see the OpenPipeline processors.
  • Enhanced data extraction: Extract business events from logs with more data extraction options.
  • Increased limits: Benefit from increased default limits, including content size up to 524,288 bytes, attribute size up to 2,500 bytes, and up to 250 log attributes.
  • Improved performance and higher throughput.
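
For example, the following is a minimal sketch of a DQL processor definition that parses, enriches, and prunes incoming log records. The field names (client_ip, raw_payload, loglevel), the attribute value, and the DPL pattern are illustrative assumptions, not part of any built-in rule:

```
// Hypothetical DQL processor definition for the Processing stage of a custom pipeline.
// Assumes records with a textual content field, a bulky raw_payload field, and a loglevel field.
parse content, "IPADDR:client_ip LD"           // extract a leading client IP into client_ip (DPL pattern)
| fieldsAdd processed.by = "custom-pipeline"   // tag every matching record with a custom attribute
| fieldsRemove raw_payload                     // drop a field that should not be stored
| fieldsRename level = loglevel                // rename loglevel to level
```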

[Figure: Log processing with OpenPipeline]

The stages of log processing with OpenPipeline are the following:

Processing
  • Description: Prepare data for analysis and storage by parsing values into fields, transforming the schema, and filtering the data records. Fields are edited, and sensitive data is masked.
  • Processors in the stage: DQL, Add fields, Remove fields, Rename fields, Drop record
  • Executed processors: All matches
  • Supported data types: Logs, Events—Generic, Events—Davis events, Events—Davis, Events—SDLC events, Events—Security events (legacy), Security events (new)¹, Business events, Spans¹ ², Metrics³

Metric extraction
  • Description: Extract metrics from the records that match the query.
  • Processors in the stage: Counter metric, Value metric
  • Executed processors: All matches
  • Supported data types: Logs, Events—Generic, Events—SDLC events, Events—Security events (legacy), Security events (new)¹, Business events, System events, Spans¹ ⁴, User events, User sessions

Data extraction
  • Description: Extract a new record from a pipeline and re-ingest it as a different data type into another pipeline.
  • Processors in the stage: Davis event, Business event
  • Executed processors: All matches
  • Supported data types: Logs, Events—Generic, Events—SDLC events, Events—Security events (legacy), Security events (new)¹, Business events, System events, Spans¹

Permissions
  • Description: Apply security context to the records that match the query.
  • Processors in the stage: Set dt.security_context
  • Executed processors: First match only
  • Supported data types: Logs, Events—Generic, Events—Davis events, Events—Davis, Events—SDLC events, Events—Security events (legacy), Security events (new)¹, Business events, Spans¹, Metrics³, User events, User sessions

Storage
  • Description: Assign records to the best-fit bucket.
  • Processors in the stage: Bucket assignment, No storage assignment
  • Executed processors: First match only
  • Supported data types: Logs, Events—Generic, Events—Davis events, Events—Davis, Events—SDLC events, Events—Security events (legacy), Security events (new)¹, Business events, Spans¹

¹ The data remains in its original, structured form. This is important for detailed analysis and troubleshooting, as it ensures that no information is lost or altered.

² Processing for spans is restricted to the Remove fields and Drop record processors, according to the field limits for spans.

³ Specific metric fields are excluded from matching and processing. To learn more, see OpenPipeline limits.

⁴ Spans may be sampled by OneAgent and OpenTelemetry; however, sampling and aggregation are not automatically included in the Metric extraction stage.
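
In every stage, a processor applies only to records that satisfy its DQL matching condition, subject to the All matches or First match only behavior listed above. As a purely hypothetical illustration, a matcher for a value metric or business event extraction processor could look like the following (the attribute names and values are assumptions):

```
// Hypothetical matching condition: only checkout-related records that mention a declined payment.
matchesValue(k8s.namespace.name, "checkout*") and matchesPhrase(content, "payment declined")
```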

Log and business event processing pipeline conditions are included in the built-in OpenPipeline pipelines. Processing is based on the available records and doesn't take into account record enrichment from external services.

If you have defined new pipelines and your logs are routed to them by a dynamic route definition, those logs are not processed by the classic pipeline. Logs that aren't routed to any of the newly defined pipelines are processed by the classic pipeline.

Steps

OpenPipeline provides built-in rules for common technologies and log formats that you can enable manually.

To process logs, you need to enable dynamic routing. To learn how to enable it, see Route data.
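
A dynamic route selects records with a DQL matching condition. As a hedged sketch (the attribute values are assumptions, not defaults), a route that sends NGINX access logs to a dedicated pipeline could use a condition like this:

```
// Hypothetical dynamic route condition: send NGINX access logs to a custom pipeline.
matchesValue(log.source, "/var/log/nginx/access.log*") or matchesValue(host.name, "web-*")
```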

Follow the steps below to enable a built-in rule:

  1. Go to Settings > Process and contextualize > OpenPipeline > Logs.
  2. Select the Pipelines tab, and select Add Pipeline to add a new record.
  3. Input a title for the pipeline.
  4. Select Add Processor in the Processing tab, and choose Technology bundle.
  5. Choose the technology for which you want to enable an OpenPipeline built-in rule.
  6. Select Run sample data to test it, and view the result.
  7. Select Save.

Follow the steps below to create a new rule:

  1. Go to Settings > Process and contextualize > OpenPipeline > Logs.
  2. Select the Pipelines tab, and select Add Pipeline to add a new record.
  3. Input a title for the pipeline.
  4. Select one of the tabs representing the stages of log processing: Processing, Metric extraction, Data extraction, Permission, or Storage.
  5. Select Add Processor and choose from the available processors.
  6. Choose the technology for which you want to enable an OpenPipeline rule and provide the processing rule definition. The processing rule definition is an instruction that tells Dynatrace how to transform or modify your log data.
  7. Test the rule definition by manually providing a fragment of a sample log in the Paste a log / JSON sample text box. Make sure it's in JSON format; any textual log data should be inserted into the content field of the JSON (see the example after these steps).
  8. Select Run sample data to test the JSON sample, and view the result.
  9. Select Save.
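
For example, a minimal JSON sample for testing a rule against a plain-text log line could look like the following; all values are purely illustrative:

```
{
  "content": "2024-05-14T09:30:12Z ERROR PaymentService - payment declined for order 4711",
  "log.source": "/var/log/payment/service.log",
  "loglevel": "ERROR"
}
```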

You can review or edit any pipeline by selecting the record and making the necessary changes.

If you haven't upgraded to OpenPipeline yet, if Grail is not yet supported in your cloud or region, or if you use Dynatrace version 1.293 or earlier, see Log processing.

Related tags
Log Analytics