OpenPipeline is the Dynatrace solution for processing log data from various sources. It enables effortless data handling at any scale and in any format on the Dynatrace platform. Processing logs with OpenPipeline combines traditional log processing capabilities with OpenPipeline's advanced data handling features, giving you deeper insights into your log data.
Who this is for
This article is intended for administrators and app users.
What you will learn
In this article, you will learn to process logs for enhanced observability, including filtering, enrichment, and routing.
Before you begin
OpenPipeline provides the following advantages:
Contextual data transformation: OpenPipeline extracts data with context and transforms it into more efficient formats, for example, converting logs to business events.
Unified processing language: DQL (Dynatrace Query Language) is used as a processing language, offering one syntax for all Dynatrace features and more advanced options for processing.
Pipeline concepts: Log ingest traffic can be split into different pipelines with dedicated processing, data and metric extraction, permissions, and storage.
Additional processors: You can use additional processors such as fieldsAdd, fieldsRemove, and more. For a complete list, see the OpenPipeline processors.
Enhanced data extraction: Extract business events from logs with more data extraction options.
Increased limits: Benefit from increased default limits, including content size up to 524,288 bytes, attribute size up to 2,500 bytes, and up to 250 log attributes.
Improved performance and higher throughput.
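For example, a DQL processor in a pipeline's Processing stage chains processing commands. The following is a minimal sketch only; the field names (`status_code`, `severity`, `raw_payload`) and the log format are illustrative, not taken from this article, and exact syntax should be checked against the DQL reference:

```dql
// Parse the numeric status code out of the log line into a field (illustrative pattern).
parse content, "LD 'status=' INT:status_code"
// Derive a severity field from the parsed value.
| fieldsAdd severity = if(status_code >= 500, "ERROR", else: "INFO")
// Drop a field that is no longer needed (hypothetical field name).
| fieldsRemove raw_payload
```

Because the same DQL syntax is used for querying and for processing, a rule like this can be prototyped in a notebook query before being added to a pipeline.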
The stages of log processing with OpenPipeline are the following:
| Stage | Description | Processors in the stage | Executed processors | Supported data types |
|---|---|---|---|---|
| Processing | Prepare data for analysis and storage by parsing values into fields, transforming the schema, and filtering the data records. Fields are edited, and sensitive data is masked. | — | — | — |
| Metric extraction | Extract metrics from the records that match the query. | Counter metric, Value metric | All matches | Logs, Events—Generic, Events—SDLC events, Events—Security events (legacy), Security events (new)¹, Business events, System events, Spans¹ ⁴, User events, User sessions |
| Data extraction | Extract a new record from a pipeline and re-ingest it as a different data type into another pipeline. | Davis event, Business event | All matches | Logs, Events—Generic, Events—SDLC events, Events—Security events (legacy), Security events (new)¹, Business events, System events, Spans¹ |
| Permissions | Apply security context to the records that match the query. | Set dt.security_context | First match only | Logs, Events—Generic, Events—Davis events, Events—SDLC events, Events—Security events (legacy), Security events (new)¹, Business events, Spans¹, Metrics³, User events, User sessions |
¹ The data remains in its original, structured form. This is important for detailed analysis and troubleshooting, as it ensures that no information is lost or altered.

² Processing for spans is restricted to the Remove fields and the Drop record processors, according to field limits for spans.

³ Specific metric fields are excluded from matching and processing. To learn more, see OpenPipeline limits.

⁴ Spans may be sampled by OneAgent and OpenTelemetry; however, sampling and aggregation are not automatically included in the Metric extraction stage.
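As an illustration of the Permissions stage, a processor applies a security context only to records that match a DQL condition, and only the first matching processor is executed. The sketch below is hedged: the namespace attribute value and the context string are hypothetical, and the effect of the Set dt.security_context processor is merely expressed here as equivalent DQL:

```dql
// Matching condition for the processor (hypothetical namespace value):
matchesValue(k8s.namespace.name, "payments")
// Effect of the Set dt.security_context processor, expressed as DQL:
fieldsAdd dt.security_context = "payments-team"
```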
Log and business event processing pipeline conditions are included in the built-in OpenPipeline pipelines. Processing is based on available records, and doesn't take into account record enrichment from external services.
If you have defined any new pipelines and your logs are routed to them by a dynamic route definition, those logs are not processed by the classic pipeline. Logs that aren't routed to any newly defined pipeline are processed by the classic pipeline.
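A dynamic route selects records for a pipeline with a DQL matching condition. As a hedged sketch (the attribute value below is illustrative, not from this article), a route that sends only logs from a particular source to a custom pipeline might look like this:

```dql
// Route only nginx access logs to the custom pipeline (illustrative path):
matchesValue(log.source, "/var/log/nginx/access.log")
```

Records that don't satisfy any route condition fall through to the classic pipeline.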
Steps
OpenPipeline provides built-in rules for common technologies and log formats that you can enable manually. To process logs, you first need to enable dynamic routing; to learn how, see Route data.

To enable a built-in rule, follow the steps below:
Go to Settings > Process and contextualize > OpenPipeline > Logs.
Select the Pipelines tab, and select Pipeline to add a new record.
Input a title for the pipeline.
Select Processor in the Processing tab, and choose Technology bundle.
Choose the technology for which you want to enable an OpenPipeline built-in rule.
Select Run sample data to test it, and view the result.
Select Save.
Follow the steps below to create a new rule:
Go to Settings > Process and contextualize > OpenPipeline > Logs.
Select the Pipelines tab, and select Pipeline to add a new record.
Input a title for the pipeline.
Select one of the tabs representing the stages of log processing: Processing, Metric extraction, Data extraction, Permissions, or Storage.
Select Processor in the selected tab and choose from the available processors.
Choose the technology for which you want to enable an OpenPipeline rule and provide the processing rule definition, which is a log processing instruction that tells Dynatrace how to transform or modify your log data.
Test the rule definition by pasting a fragment of a sample log into the Paste a log / JSON sample text box. Make sure it's in JSON format; any textual log data should be inserted into the content field of the JSON.
Select Run sample data to test the JSON sample, and view the result.
Select Save.
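For example, to test a parsing rule you might paste a JSON sample like the following, with the textual log line in the content field (all values are made up for illustration):

```json
{
  "content": "192.0.2.10 GET /api/orders 503",
  "log.source": "/var/log/nginx/access.log"
}
```

A matching DQL processor could then extract fields from that line. This is a sketch only; the matcher names should be checked against the Dynatrace Pattern Language reference:

```dql
// Extract client IP, HTTP method, path, and status code from the sample line.
parse content, "IPADDR:client_ip SPACE WORD:http_method SPACE LD:http_path SPACE INT:status_code"
```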
You can review or edit any pipeline by selecting the record and making the necessary changes.
If you haven't upgraded to OpenPipeline yet, if Grail is not yet supported in your cloud or region, or if you use Dynatrace version 1.293 or earlier, see Log processing.