Stream logs via Amazon Data Firehose (Logs Classic)
Log Monitoring Classic
Dynatrace integration with Amazon Data Firehose provides a simple and safe way to ingest AWS logs. To enable AWS log forwarding, create an Amazon Data Firehose instance and configure it with your Dynatrace environment as a destination. Then connect your CloudWatch log groups by creating a subscription filter, or send logs directly to Data Firehose from services that support it (for example, Amazon Managed Streaming for Apache Kafka). Data Firehose and the other created cloud resources incur AWS costs according to standard AWS billing policy.
See the Cloud log forwarding page to learn about all the options for AWS log ingestion.
Prerequisites
Create an API token in your Dynatrace environment and enable the Ingest logs permission.
Determine the API URL for your environment:
For Dynatrace SaaS (recommended)
https://<your_environment_ID>.live.dynatrace.com
For ActiveGate (additional setup required)
https://<your_active_gate_IP_or_hostname>/e/<your_environment_ID>
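Before continuing, you can confirm that the token and URL work by sending a test log line to the standard Dynatrace Logs API v2 ingest endpoint. This is a minimal sketch; replace the placeholders with your values.
# Send one test log event; a 204 response means the event was accepted.
curl -X POST "<your_API_URL>/api/v2/logs/ingest" \
  -H "Authorization: Api-Token <your_API_token>" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d '[{"content": "Firehose setup connectivity test"}]'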
You can set up an Amazon Data Firehose delivery stream with a CloudFormation template or in the AWS console. Check the instructions below.
If you choose another deployment method (for example, Terraform or custom script), use the full URL: https://<your_environment_ID>.live.dynatrace.com/api/v2/logs/ingest/aws_firehose in the Firehose HTTP endpoint destination configuration.
CloudFormation allows you to deploy an Amazon Data Firehose delivery stream using a single deployment command to create a stack that groups multiple AWS resources. This approach is faster and makes AWS resource management easier.
Deploy the Amazon Data Firehose delivery stream
To fetch the CloudFormation template and deploy it to your AWS account, run the command below.
Make sure to replace <your_API_URL> and <your_API_token> with your values.
Consult the parameters table that follows for more details.
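The exact template location and parameter names are defined by the Dynatrace-provided template; the sketch below is illustrative, with <template_URL> as a placeholder and the DynatraceApiUrl/DynatraceApiKey parameter names assumed to match the table that follows.
# Fetch the CloudFormation template (placeholder URL; use the one from the Dynatrace distribution).
wget -O dynatrace-log-delivery-stream.yaml <template_URL>

# Deploy the stack; --capabilities is needed because the template creates IAM resources.
aws cloudformation deploy \
  --template-file dynatrace-log-delivery-stream.yaml \
  --stack-name dynatrace-log-delivery-stream \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides "DynatraceApiUrl=<your_API_URL>" "DynatraceApiKey=<your_API_token>"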
DYNATRACE_API_URL (required) - Your API URL. See Prerequisites for instructions.
DYNATRACE_API_KEY (required) - Your API token. See Prerequisites for instructions.
STACK_NAME (required) - The name of your stack. Default value: dynatrace-log-delivery-stream.
If you have the AWS CLI configured, you can use any Bash-compliant shell. Otherwise, you can use CloudShell, which is available in the AWS console.
Confirm that the Amazon Data Firehose delivery stream was deployed correctly
To ensure that the Amazon Data Firehose delivery stream was deployed correctly, follow the steps below:
In the AWS console, go to CloudFormation.
Select the stack you created in the CloudFormation deployment.
On the Events tab, make sure that all events have completed successfully and there are no failed events.
On the Parameters tab, make sure that all parameters you provided have correct values.
On the Outputs tab, take note of the outputs:
CloudWatchSubscriptionFilterRoleArn - the ARN of the IAM role to use when creating the CloudWatch subscription filter.
FirehoseArn - the ARN of the newly created Firehose delivery stream.
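You can also read the stack status and outputs from the command line, as in this sketch using the standard AWS CLI (replace the stack name if you overrode the default):
# Print the stack status and its outputs, including the two ARNs noted above.
aws cloudformation describe-stacks \
  --stack-name dynatrace-log-delivery-stream \
  --query "Stacks[0].[StackStatus, Outputs]"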
Create Amazon Data Firehose delivery stream
In the AWS console, open the Amazon Data Firehose service.
Select Create Firehose stream.
For Source, select Direct PUT.
For Destination, select Dynatrace.
Enter a Firehose stream name.
Make sure data transformation is disabled.
In Ingestion type, make sure Logs is selected.
In API token, enter your API token. See Prerequisites for instructions.
In API URL, enter your API URL. See Prerequisites for instructions.
In Content encoding, make sure GZIP is selected.
In Retry duration, enter 900 seconds.
In Buffer hints, set the Buffer size to 1 MiB and Buffer interval to 60 seconds.
In Backup settings, make sure Failed data only is selected.
In S3 backup bucket, select Create.
In Create bucket section, enter the S3 backup bucket name, optionally choose a region, and then select Create bucket.
Browse and choose the S3 backup bucket you created.
Select Create Firehose Stream.
Create IAM role for streaming CloudWatch logs to Firehose
The Data Firehose stream requires a trust relationship with CloudWatch, established through an IAM role. You need to create this role before creating the CloudWatch Logs subscription filter.
In the AWS console, go to IAM > Policies.
Select Create policy.
Switch to the JSON editor and paste the JSON below as the policy content.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "*"
    }
  ]
}
You can replace * with your Firehose ARN if you want a more restrictive policy.
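For reference, a Firehose delivery stream ARN has the following shape (region, account ID, and stream name are placeholders for your values):
arn:aws:firehose:<region>:<account_id>:deliverystream/<stream_name>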
Select Next, enter the Policy name, optionally add tags, and select Create policy.
In the AWS console, go to IAM > Roles.
Select Create role.
Select Custom trust policy and paste the JSON below as policy content.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Select Next and choose the Firehose write policy you created earlier.
Select Next, enter a Role name, optionally add tags, and select Create role.
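If you prefer the command line, the same policy and role can be created with the AWS CLI. This is a sketch: the policy and role names are illustrative, and the two JSON documents above are assumed to be saved locally as firehose-write-policy.json and cloudwatch-trust-policy.json.
# Create the Firehose write policy from the JSON above.
aws iam create-policy \
  --policy-name dynatrace-firehose-write \
  --policy-document file://firehose-write-policy.json

# Create the role with the CloudWatch Logs trust policy.
aws iam create-role \
  --role-name dynatrace-cloudwatch-to-firehose \
  --assume-role-policy-document file://cloudwatch-trust-policy.json

# Attach the write policy to the role (replace <account_id> with your AWS account ID).
aws iam attach-role-policy \
  --role-name dynatrace-cloudwatch-to-firehose \
  --policy-arn arn:aws:iam::<account_id>:policy/dynatrace-firehose-write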
Stream logs from CloudWatch
After creating a Firehose delivery stream and IAM role, you need to subscribe to the CloudWatch log groups whose logs you want to forward to Dynatrace.
You can subscribe to log groups using a shell script or in the AWS console. See the instructions below.
To fetch the shell script, run the command below.
If you have the AWS CLI configured, you can use any Bash-compliant shell. Otherwise, you can use CloudShell, which is available in the AWS console.
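A sketch of the download step; the script URL is a placeholder, and the script name is assumed from the stack defaults used throughout this page.
# Download the subscription helper script and make it executable.
wget -O dynatrace-aws-logs.sh <script_URL>
chmod +x dynatrace-aws-logs.sh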
Subscribe by listing log group names
Usage recommendation: Use this option if the number of log groups you'd like to subscribe to is small.
To subscribe: Run the command below, making sure to replace <your_log_group_list> with a space-separated list of the log group names you want to subscribe to.
Example list: /aws/lambda/my-lambda /aws/apigateway/my-api
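A sketch of the call, assuming the script exposes a subscribe command with a --log-groups option:
./dynatrace-aws-logs.sh subscribe --log-groups <your_log_group_list>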
Subscribe by listing log group names in a file
To simplify file creation, you can use the auto-discovery command below to list the names of all log groups in your account. You can adjust the list manually before subscribing.
Make sure to replace <your_log_groups_file> with the name of the file to which you want to redirect the output.
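A sketch of both steps; the discover-log-groups subcommand is an assumption, while --log-groups-from-file is documented in the parameters list below. The plain AWS CLI variant is an alternative that avoids the helper script.
# Option A (assumed script subcommand): write all log group names to a file.
./dynatrace-aws-logs.sh discover-log-groups > <your_log_groups_file>

# Option B (plain AWS CLI): list log group names, one per line.
aws logs describe-log-groups --query "logGroups[].logGroupName" --output text | tr '\t' '\n' > <your_log_groups_file>

# Subscribe using the file.
./dynatrace-aws-logs.sh subscribe --log-groups-from-file <your_log_groups_file>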
Subscribe with a filter pattern
Usage recommendation: By default, you subscribe to all the logs in the log group. Use this option if you want to restrict the logs you subscribe to. See Filter and Pattern Syntax for details on the pattern syntax.
Limitation: You can use only two subscription filters per log group, so the possibility of creating multiple filters with different patterns is limited. If you create a subscription filter that exceeds the limit, an AWS LimitExceededException occurs.
To subscribe: Run the command below, making sure to replace <your_log_group_list> and <your_filter_pattern> with your values.
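A sketch, again assuming the subscribe command; quote the pattern so the shell passes it through intact:
./dynatrace-aws-logs.sh subscribe --log-groups <your_log_group_list> --filter-pattern "<your_filter_pattern>"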
The script accepts the following parameters:
--log-groups LOG_GROUP_LIST - A space-separated list of log group names you want to subscribe to. For example: /aws/lambda/my-lambda /aws/apigateway/my-api.
--log-groups-from-file LOG_GROUPS_FILE - A file listing the log groups you want to subscribe to. The file should contain each log group name on a separate line.
--filter-pattern FILTER_PATTERN - If set, subscribes to a filtered stream of logs. Default: you subscribe to all logs in the log group.
--stack-name STACK_NAME - The name of the CloudFormation stack where you deployed the resources. Default value: dynatrace-aws-logs.
--firehose-arn FIREHOSE_ARN - The ARN (Amazon Resource Name) of the Amazon Data Firehose instance to which the logs should be streamed. Usage recommendation: set this option if you haven't used the CloudFormation template to create the delivery stream. Default: extracted from the output of the CloudFormation stack used in the deployment step, either the default stack name or the one specified with the --stack-name <your_stack_name> option.
--role-arn ROLE_ARN - The ARN of an IAM role that grants CloudWatch Logs permission to deliver ingested log events to the destination stream. Usage recommendation: set this option if you haven't used the CloudFormation template to create the delivery stream. Default: extracted from the output of the CloudFormation stack used in the deployment step, either the default stack name or the one specified with the --stack-name <your_stack_name> option.
Unsubscribe from log groups
If you don't want to forward logs to Dynatrace anymore, use one of the two options below to unsubscribe from log groups.
Unsubscribe by listing the log group names
Run the command below, making sure to replace <your_log_group_list> with a space-separated list of the log group names you want to unsubscribe from.
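A sketch, assuming the script's unsubscribe command mirrors subscribe:
./dynatrace-aws-logs.sh unsubscribe --log-groups <your_log_group_list>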
For logs from AWS services that are sent to S3 (not Firehose or CloudWatch), see the GitHub documentation.
View AWS logs
After configuring Data Firehose streaming, you can view and analyze AWS logs in Dynatrace: go to Logs & Events or Notebooks and filter for AWS logs. Logs ingested via Amazon Data Firehose have the aws.data_firehose.arn attribute set to the ARN of the Firehose stream that streamed the data into Dynatrace. Logs from AWS services with entity linking support are automatically displayed in the Cloud application for in-context analysis.
If you see logs coming in, you have configured AWS log streaming successfully.
If there are no logs within 10 minutes, see the Troubleshooting section of this page.
All services that send logs to Data Firehose directly
1. You can modify the AWS Lambda log group name. For log enrichment, use the default log group name /aws/lambda/<function name>.
2. You can modify the AWS CloudTrail log group name. For log enrichment, start the log group name with aws-cloudtrail-logs.
Environment ActiveGate support
ActiveGate version 1.287+
By default, Environment ActiveGate listens for API requests on port 9999. However, Amazon Data Firehose currently supports only port 443 for HTTP endpoint data delivery.
Your ActiveGate needs to be configured with a valid CA-signed SSL certificate to receive logs from Amazon Data Firehose.
To successfully deliver data from Amazon Data Firehose to the Environment ActiveGate API endpoint, we recommend setting up port forwarding from port 443 to port 9999 on the ActiveGate host.
Below are a few examples of such configurations. Consult the documentation for your operating system and networking solution for details.
Example configurations
Amazon Linux, RedHat Linux
firewalld provides a dynamically managed firewall. See the documentation for details.
To add port forwarding with firewalld (note: these actions must be performed as root):
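A minimal sketch using firewall-cmd, the standard firewalld front end; adjust the zone options to your setup.
# Forward incoming TCP 443 to local port 9999, open both ports, and apply the rules.
firewall-cmd --permanent --add-forward-port=port=443:proto=tcp:toport=9999
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=9999/tcp
firewall-cmd --reload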
The Uncomplicated Firewall (ufw) is a frontend for iptables. See the documentation for details.
To add port forwarding with ufw (note: these actions must be performed as root):
In the /etc/ufw/before.rules file, add a NAT table after the filter table (the table that starts with *filter and ends with COMMIT):
*nat
:PREROUTING ACCEPT [0:0]
-A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 9999
COMMIT
Allow traffic through ports 443 and 9999:
ufw allow 443/tcp
ufw allow 9999/tcp
Restart ufw.
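One way to do this, assuming a reload is sufficient to re-read before.rules:
# Reload ufw so the new NAT rules take effect.
ufw reload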
Windows Server 2022
Network shell (netsh) is a command-line utility that allows you to configure and display the status of various network communications server roles and components. See the documentation for details.
To add port forwarding with netsh interface portproxy:
netsh interface portproxy add v4tov4 listenport=443 connectport=9999 connectaddress=<the current IP address of your computer>
Using the netsh interface portproxy add v4tov6/v6tov4/v6tov6 options, you can create port forwarding rules between IPv4 and IPv6 addresses.
Troubleshooting
If the logs forwarded from Data Firehose are not available in your environment, follow the steps below:
Verify that logs from CloudWatch are sent to Firehose. Check the Firehose delivery stream metrics (Incoming put requests, Incoming bytes). If no data reaches Firehose, verify that the CloudWatch log group is producing current logs and that the IAM role selected when creating the subscription filter has permission to write to Firehose.
Verify that logs are successfully sent from Firehose to Dynatrace. Check the Firehose delivery stream metrics (HTTP endpoint delivery success, Records delivered to HTTP endpoint). In case of errors, check the Firehose CloudWatch logs and verify the Dynatrace API token and API URL.
Verify that logs are accepted by Dynatrace. Check the Dynatrace self-monitoring metric in the Data Explorer: there should be metric data, and response_code should only have the value 200.
Limitations
The ingest throughput is limited by Amazon Data Firehose. For more details, see Amazon Data Firehose Quota. Amazon can increase Firehose limits on request.
Amazon Data Firehose does not support connections through a VPC for HTTP endpoints.