You can forward logs from Dynatrace to your cloud object storage via OpenPipeline. This article explains how to create forwarding configurations, choose whether to forward unprocessed or processed logs, and apply additional filtering to control which data is forwarded.
This article is for application owners integrating Dynatrace OpenPipeline processing and storage with company compliance standards and an established cloud‑storage strategy.
Log forwarding is an OpenPipeline data-flow step, exclusive to the log configuration scope. Use it to forward unprocessed or processed logs from OpenPipeline to supported cloud object storage for compliance, auditing, or external system integration.
Forwarding configurations define how Dynatrace should forward a set of logs to your cloud object storage. You can create multiple forwarding configurations and enable, disable, modify, or delete existing ones.
Each forwarding configuration specifies whether to send processed or unprocessed logs, the source, and the destination. It consists of the following:

- Name: A unique custom identifier.
- Source: The existing ingest sources or pipelines whose logs you want to forward. Depending on the source, the forwarding configuration forwards unprocessed or processed logs: if you choose to send from an ingest source, logs are forwarded before processing; if you choose a pipeline, logs are forwarded after processing.
- Matching condition: The DQL query that defines the data set you want to forward.
- Destination: The cloud object storage to which you want to forward your logs. It's defined by
  - A connection with a Dynatrace AWS or Azure Connector pointing to a cloud object storage in the same region as your Dynatrace platform environment.
  - A valid cloud vendor storage identifier (the bucket name in AWS or the container URL in Azure).
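As an illustration, a matching condition is a DQL filter expression. The following sketch would forward only non-debug logs from a production namespace; the field values are hypothetical and not from this article:

```
matchesValue(k8s.namespace.name, "production") and loglevel != "DEBUG"
```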
Segmentation and bulk size
Logs to forward are grouped in batches of 100–10,000 records and assigned a bulk pattern. The bulk is compressed to NDJSON format with GZIP. Forwarded logs contain the pattern placeholders in their path in the following order:

- Optional date and time placeholders: `<DDMMYYYY>`, `<YYYYMMDD>`, `<HH>`, `<HHmmss.SSSS>`
- Required bulk identifier: `<bulk-id>`
- Required bulk format: `.json.gz`

If `.gz` is omitted, GZIP compression is still applied, and the file has a non-matching file extension.
Additional optional processing: Apply further processing to log records and forward only what the destination system requires. Available processors include DQL, Add fields, Remove fields, Drop, and Rename fields.
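As a sketch of such additional processing, the following DQL statement removes one field and adds a static one before forwarding; the field names are illustrative assumptions, not part of this article:

```
fieldsRemove content.debug
| fieldsAdd forwarded.source = "openpipeline"
```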
Dynatrace SaaS environment powered by Grail and AppEngine.
Dynatrace license with Log Analytics (DPS) capabilities
Environment-level settings:objects:read and settings:objects:write permissions for the connections.aws or connections.azure schema
A valid AWS S3 bucket or Azure Blob Storage container in the same region as your Dynatrace platform environment
Depending on your cloud vendor, check the following prerequisites.
Amazon Web Services (AWS)
To connect to and write to AWS storage, you need the following permissions in the AWS Console:

- `GetBucketLocation`
- `PutObject`

Microsoft Azure
The user account that operates on the Azure Blob Storage has been assigned the Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write permission.
You can either assign a predefined role, such as Storage Blob Data Contributor, or create a custom one with minimal permissions, as in the following JSON example.
```json
{
  "id": "/subscriptions/e1412bf7-xxxx-xxxx-xxxx-f33ea37e3427/providers/Microsoft.Authorization/roleDefinitions/d93a93fd-xxxx-xxxx-xxxx-3830043e186a",
  "properties": {
    "roleName": "Data Forwarding Role",
    "description": "",
    "assignableScopes": [
      "/subscriptions/e1412bf7-xxxx-xxxx-xxxx-f33ea37e3427"
    ],
    "permissions": [
      {
        "actions": [],
        "notActions": [],
        "dataActions": [
          "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write"
        ],
        "notDataActions": []
      }
    ]
  }
}
```
You're familiar with the terms Dynatrace Connector and object storage.
You know that object storage incurs costs once you start sending data, depending on the retention time and the number of requests.
Access the Set up connection modal.
Via Settings

Go to Settings > Connections > your cloud vendor (AWS or Microsoft Azure) > Connection.
Via OpenPipeline
When you define the forwarding configuration, select Create a new connection.
In Set up connection, enter a new connection name and select Save.
This action generates a connection ID.
Copy the connection ID.
Keep the modal open.
Depending on your cloud vendor, do the following.
In the AWS Console, add an IAM role using the AWS account as a trusted entity and the connection ID as the external ID.
This is the role that is assumed when using the AWS connection in Dynatrace.
The resulting trust policy looks as follows:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::314146291599:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "<the Connection ID in Dynatrace>"
        }
      }
    }
  ]
}
```
Use identity or resource-based permissions for resource access.
Once your IAM role is created and its trust policy is configured, copy your AWS Role ARN.
Dynatrace immediately verifies that the correct role is being assumed when you save the connection.
Your forwarding configuration is active by default. Logs passing through or entering an ingest source or a pipeline are forwarded to cloud object storage.
You learned how to set up a connection with your cloud vendor and how to create a new forwarding configuration in OpenPipeline. You can now start to forward unprocessed or processed logs from Dynatrace to your cloud object storage.
Use the following self-monitoring metrics to observe your forwarding configuration performance.
| Self-monitoring metric | Description | Dimensions |
|---|---|---|
| `dt.sfm.openpipeline.forwarding.successful_records` | The number of records successfully forwarded. | `forwarding.id`, `forwarding.destination`, `forwarding.name` |
| `dt.sfm.openpipeline.forwarding.failed_records` | The number of records that failed to be forwarded. | `forwarding.id`, `forwarding.destination`, `forwarding.name`, `reason` |
The `reason` dimension holds predefined values indicating possible errors, such as `unauthorized`, `bucket_not_found`, `resource_unavailable`, `target_configuration_missing`, and `other`.
You can query them in Notebooks and Dashboards.
```
// success
timeseries { sum = sum(dt.sfm.openpipeline.forwarding.successful_records) },
  by: { forwarding.destination, forwarding.name, forwarding.id }
```

```
// failure
timeseries { sum = sum(dt.sfm.openpipeline.forwarding.failed_records) },
  by: { forwarding.destination, forwarding.name, reason, forwarding.id }
```
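To see which error dominates, you can also group failures by the `reason` dimension alone and rank them. The following query is a sketch along the same lines as the examples above:

```
// failures ranked by reason
timeseries failed = sum(dt.sfm.openpipeline.forwarding.failed_records),
  by: { reason }
| sort arraySum(failed) desc
```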
Supported cloud vendors are limited to AWS S3 and Azure Blob Storage.
Preview: Logs can be forwarded only to cloud object storage in the same region as your Dynatrace platform environment.
The maximum number of configurations is 1,000 per data type.
If the cloud storage is not reachable, Dynatrace automatically drops the logs.
The bulk pattern must contain the bulk identifier (`<bulk-id>`) and end with the bulk format (`.json.gz`).