Log forwarding via OpenPipeline

  • Latest Dynatrace
  • How-to guide
  • 10-min read
  • Preview

You can forward logs from Dynatrace to your cloud object storage via OpenPipeline. You can choose to forward logs

  • Unprocessed—immediately after they're ingested in Dynatrace.
  • Processed according to your pipeline configuration.

This article explains how to create forwarding configurations, choose to forward unprocessed or processed logs, and apply additional filtering to control which data is forwarded.

This article is for application owners integrating Dynatrace OpenPipeline processing and storage with company compliance standards and an established cloud‑storage strategy.

Overview

Log forwarding is an OpenPipeline data-flow step, exclusive to the log configuration scope. Use it to forward unprocessed or processed logs from OpenPipeline to supported cloud object storage for compliance, auditing, or external system integration.

Forwarding configuration

Forwarding configurations define how Dynatrace should forward a set of logs to your cloud object storage. You can create multiple forwarding configurations and switch on/off, modify, or delete existing ones.

Each forwarding configuration specifies whether to send processed or unprocessed logs, the source, and the destination. It consists of the following:

  • Name: A unique custom identifier

  • Source: The existing ingest sources or pipelines whose logs you want to forward. Depending on the source, the forwarding configuration forwards unprocessed or processed logs. If you choose to send from

    • Ingest sources, unprocessed logs are forwarded right after ingest, before any processing is applied.
    • Pipelines, processed logs are forwarded after processing is applied and before storage.
  • Matching condition: The DQL query that defines the data set you want to forward.

  • Destination: The cloud object storage to which you want to forward your logs. It's defined by

    • A connection with a Dynatrace AWS or Azure Connector pointing to a cloud object storage in the same region as your Dynatrace platform environment.

    • A valid cloud vendor storage identifier (the bucket name in AWS or the container URL in Azure)

    • Segmentation and bulk size

      Logs to forward are grouped into bulks of 100–10,000 records and assigned a bulk pattern. Each bulk is an NDJSON file compressed with GZIP. Forwarded logs contain the pattern placeholders in their path in the following order:

      • Optional Date and time placeholders:

        • <DDMMYYYY>
        • <YYYYMMDD>
        • <HH>
        • <HHmmss.SSSS>
      • Required Bulk identifier: <bulk-id>

      • Required Bulk format: .json.gz

        If .gz is omitted, GZIP compression is still applied, and the file has a non-matching file extension.

  • Additional optional processing: Apply further processing to log records and forward only what the destination system requires. Available processors include DQL, Add fields, Remove fields, Drop, and Rename fields.

Use cases

  • Forward logs to external cloud storage for compliance, auditing, and long‑term retention.
  • Maintain flexible storage options across diverse environments.
  • Support staged onboarding into Dynatrace by forwarding logs before or after Dynatrace processing.
  • Choose to send unprocessed logs after ingest or processed logs before storage.
  • Retain logs in Grail and apply filtering to forward only what external systems require.
  • Secure and control log forwarding through hyperscaler credentials and fine‑grained access permissions.

Prerequisites

  • Dynatrace SaaS environment powered by Grail and AppEngine.

  • Dynatrace license with Log Analytics overview (DPS) capabilities

  • Environment-level settings:objects:read and settings:objects:write permissions for the connections.aws or connections.azure schema

    Users with sufficient permissions can:

    • View existing configurations.
    • View, create, and edit forwarding configurations.
    • View hints about pipeline or ingest source data being forwarded.
  • A valid AWS S3 bucket or Azure Blob Storage container in the same region as your Dynatrace platform environment

  • Depending on your cloud vendor, check the following prerequisites.

    • Amazon Web Services (AWS)

      To connect to and write to AWS storage, you need the following permissions in the AWS Console.

      • GetBucketLocation
      • PutObject
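
      For example, a minimal identity-based policy granting these permissions could look like the following JSON; the bucket name my-log-bucket is a placeholder.

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "s3:GetBucketLocation",
              "s3:PutObject"
            ],
            "Resource": [
              "arn:aws:s3:::my-log-bucket",
              "arn:aws:s3:::my-log-bucket/*"
            ]
          }
        ]
      }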
    • Microsoft Azure

      The user account that operates on the Azure Blob Storage has been assigned the Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write permission.

      You can either assign a predefined role, such as Storage Blob Data Contributor, or create a custom one with minimal permissions, as in the following JSON example.

      {
        "id": "/subscriptions/e1412bf7-xxxx-xxxx-xxxx-f33ea37e3427/providers/Microsoft.Authorization/roleDefinitions/d93a93fd-xxxx-xxxx-xxxx-3830043e186a",
        "properties": {
          "roleName": "Data Forwarding Role",
          "description": "",
          "assignableScopes": [
            "/subscriptions/e1412bf7-xxxx-xxxx-xxxx-f33ea37e3427"
          ],
          "permissions": [
            {
              "actions": [],
              "notActions": [],
              "dataActions": [
                "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write"
              ],
              "notDataActions": []
            }
          ]
        }
      }
  • You're familiar with the terms Dynatrace Connector and object storage.

  • You know that cost is associated with object storage; costs are incurred once you start sending data, depending on the retention time and the number of requests.

How-to

Create a connection ID

  1. Access the Set up connection modal.

    • Via Settings

      Go to Settings > Connections > your cloud vendor (AWS or Microsoft Azure) > Connection.

    • Via OpenPipeline

      When you define the forwarding configuration, select Create a new connection.

  2. In Set up connection, enter a new connection name and select Save.

    This action generates a connection ID.

  3. Copy the connection ID.

  4. Keep the modal open.

Connect your cloud vendor

For AWS, do the following.

  1. In the AWS Console, add an IAM role using the AWS account as a trusted entity and the connection ID as the external ID.

    This is the role that is assumed when using the AWS connection in Dynatrace.

    The resulting trust policy looks as follows:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::314146291599:root"
          },
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringEquals": {
              "sts:ExternalId": "<the Connection ID in Dynatrace>"
            }
          }
        }
      ]
    }
  2. Use identity-based or resource-based permissions for resource access.

  3. Once your IAM role is created and its trust policy is configured, copy your AWS Role ARN.

Connect Dynatrace

  1. In Dynatrace, paste the value from your cloud vendor into the corresponding field of the Set up connection modal.
  2. Select Save.

Dynatrace immediately verifies that the correct role is being assumed when you save the connection.

Define a forwarding configuration

  1. Go to Settings > Process and contextualize > OpenPipeline > Logs > Forwarding > Forward.
  2. Define the source.
    1. Enter the forwarding configuration name.
    2. Choose what you want to forward:
      • The source type: From an ingest source (unprocessed records) or From a pipeline (processed records)
      • One or multiple sources from the available ingest sources or pipelines.
    3. Enter the matching condition.
    4. Select Next.
  3. Define the destination.
    1. Select the cloud vendor.
    2. Select a connection, or select Create a new connection to create one.
    3. Enter the cloud vendor storage identifier (the bucket name in AWS or the container URL in Azure).
    4. Select Next.
  4. Define the segmentation and bulk size.
    1. Enter a bulk pattern.
    2. Enter a bulk size.
    3. Select Next.
  5. If you want to further filter logs to forward, select Add processor and configure a processor.
  6. Select Finish.
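
The matching condition in step 2 is a DQL filter over the log records. As a hypothetical example (the field and value are assumptions for illustration), a condition forwarding only logs from a specific Kubernetes namespace could look like this:

    matchesValue(k8s.namespace.name, "production")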

Your forwarding configuration is active by default. Logs entering a selected ingest source or passing through a selected pipeline are forwarded to your cloud object storage.
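Forwarded bulks are NDJSON files compressed with GZIP. As a minimal local sketch (the file name and record fields below are made up, not actual Dynatrace output), such a file can be produced and parsed like this:

```python
import gzip
import json

def read_bulk(path: str) -> list[dict]:
    """Parse a bulk file: one JSON log record per line, GZIP-compressed (NDJSON)."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Write a sample bulk locally to demonstrate the round trip.
records = [
    {"timestamp": "2025-01-15T10:00:00Z", "content": "request handled"},
    {"timestamp": "2025-01-15T10:00:01Z", "content": "request failed"},
]
with gzip.open("sample-bulk.json.gz", "wt", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

parsed = read_bulk("sample-bulk.json.gz")
```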

Next steps

You learned how to set up a connection with your cloud vendor and how to create a new forwarding configuration in OpenPipeline. You can now start to forward unprocessed or processed logs from Dynatrace to your cloud object storage.

Use the following self-monitoring metrics to observe your forwarding configuration performance.

| Self-monitoring metric | Description | Dimensions |
|---|---|---|
| dt.sfm.openpipeline.forwarding.successful_records | The number of records successfully forwarded. | forwarding.id, forwarding.destination, forwarding.name |
| dt.sfm.openpipeline.forwarding.failed_records | The number of records that failed to be forwarded. | forwarding.id, forwarding.destination, forwarding.name, reason¹ |

¹ The reason dimension holds predefined values indicating possible errors, such as unauthorized, bucket_not_found, resource_unavailable, target_configuration_missing, and other.

You can query them in Notebooks and Dashboards.

// success
timeseries { sum=sum(dt.sfm.openpipeline.forwarding.successful_records) },
by: { forwarding.destination, forwarding.name, forwarding.id }
// failure
timeseries { sum=sum(dt.sfm.openpipeline.forwarding.failed_records) },
by: { forwarding.destination, forwarding.name, reason, forwarding.id }

Limits

  • Supported cloud vendors are limited to AWS S3 and Azure Blob Storage.

  • Preview Logs can be forwarded only to cloud vendors in the same region as your Dynatrace platform environment.

  • The maximum number of configurations is 1,000 per data type.

  • If the cloud storage is not reachable, Dynatrace automatically drops the logs.

  • Logs are forwarded in bulks.
    • The bulk size is a minimum of 100 and a maximum of 10,000 unique records.
    • The bulk pattern must contain the bulk identifier (<bulk-id>) and end with the bulk format (.json.gz).
  • Dynatrace forwards unencrypted logs in NDJSON-GZIP file format.
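
The bulk constraints above can be summarized in a short sketch; the helper names are illustrative only, not a Dynatrace API.

```python
def is_valid_bulk_pattern(pattern: str) -> bool:
    """A pattern must contain the bulk identifier and end with the bulk format."""
    return "<bulk-id>" in pattern and pattern.endswith(".json.gz")

def is_valid_bulk_size(size: int) -> bool:
    """A bulk holds a minimum of 100 and a maximum of 10,000 unique records."""
    return 100 <= size <= 10_000
```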