Stream logs via Amazon Data Firehose (Logs Classic)

Log Monitoring Classic

Dynatrace integration with Amazon Data Firehose provides a simple and safe way to ingest AWS logs. To enable AWS log forwarding, create an Amazon Data Firehose instance and configure it with your Dynatrace environment as a destination. Then connect your CloudWatch log groups by creating a subscription filter, or send logs directly to Data Firehose from services that support it (for example, Amazon Managed Streaming for Apache Kafka). Data Firehose and the other created cloud resources incur AWS costs according to the standard AWS billing policy. See the Cloud log forwarding page to learn about all the options for AWS log ingestion.

Prerequisites

  1. Create an API token in your Dynatrace environment and enable the Ingest logs permission.
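If you prefer to create the token through the Dynatrace API instead of the web UI, a minimal sketch with curl follows. It assumes you already have a token with the apiTokens.write scope exported as ADMIN_TOKEN; the token name firehose-log-ingest is an arbitrary example.

```shell
# Create a Dynatrace API token with the "Ingest logs" (logs.ingest) scope.
# Assumes ADMIN_TOKEN holds an existing token with the apiTokens.write scope.
curl -X POST "https://<your_environment_ID>.live.dynatrace.com/api/v2/apiTokens" \
  -H "Authorization: Api-Token ${ADMIN_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"name": "firehose-log-ingest", "scopes": ["logs.ingest"]}'
```

The response contains the generated token value; store it securely, as it is shown only once.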

  2. Determine the API URL for your environment:

    • For Dynatrace SaaS (recommended):

      https://<your_environment_ID>.live.dynatrace.com

Set up Firehose logs streaming

You can set up an Amazon Data Firehose delivery stream with a CloudFormation template or in the AWS console. Check the instructions below.

If you choose another deployment method (for example, Terraform or custom script), use the full URL: https://<your_environment_ID>.live.dynatrace.com/api/v2/logs/ingest/aws_firehose in the Firehose HTTP endpoint destination configuration.
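As an illustration of that destination configuration, here is a hedged AWS CLI sketch. The stream name, IAM role, and backup bucket are placeholder assumptions you would replace with your own values; the endpoint URL and access key are the Dynatrace API URL and API token from the prerequisites.

```shell
# Create a Firehose delivery stream with Dynatrace as the HTTP endpoint destination.
# firehose-to-dynatrace, FirehoseDeliveryRole, and the bucket name are example values.
aws firehose create-delivery-stream \
  --delivery-stream-name firehose-to-dynatrace \
  --delivery-stream-type DirectPut \
  --http-endpoint-destination-configuration '{
    "EndpointConfiguration": {
      "Url": "https://<your_environment_ID>.live.dynatrace.com/api/v2/logs/ingest/aws_firehose",
      "Name": "Dynatrace",
      "AccessKey": "<your_Dynatrace_API_token>"
    },
    "RequestConfiguration": {"ContentEncoding": "GZIP"},
    "S3BackupMode": "FailedDataOnly",
    "S3Configuration": {
      "RoleARN": "arn:aws:iam::<account_id>:role/FirehoseDeliveryRole",
      "BucketARN": "arn:aws:s3:::<backup_bucket_name>"
    }
  }'
```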

Stream logs from CloudWatch

After creating a Firehose delivery stream and IAM role, you need to subscribe to the CloudWatch log groups whose logs you want to forward to Dynatrace. You can subscribe to log groups using a shell script or in the AWS console. See the instructions below.
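For a single log group, the subscription can be sketched with the AWS CLI. The log group name, filter name, stream ARN, and role ARN below are example values; the role must allow CloudWatch Logs to write to your Firehose stream.

```shell
# Subscribe a CloudWatch log group to the Firehose delivery stream.
# An empty filter pattern forwards all log events from the group.
aws logs put-subscription-filter \
  --log-group-name "/aws/lambda/my-function" \
  --filter-name "stream-to-dynatrace" \
  --filter-pattern "" \
  --destination-arn "arn:aws:firehose:<region>:<account_id>:deliverystream/firehose-to-dynatrace" \
  --role-arn "arn:aws:iam::<account_id>:role/CloudWatchToFirehoseRole"
```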

Send logs from other services directly to Firehose

Some services can send logs directly to Firehose without storing them in CloudWatch. To configure these, refer to the documentation of the specific service (for example, Amazon Managed Streaming for Apache Kafka).

For logs from AWS services that are sent to S3—not Firehose or CloudWatch—see GitHub documentation.

View AWS logs

After configuring Data Firehose streaming, you can view and analyze AWS logs in Dynatrace: Go to Logs & Events or Notebooks, and filter for AWS logs. Logs ingested via Amazon Data Firehose have the aws.data_firehose.arn attribute set to the ARN of the Firehose stream that delivered the data to Dynatrace. Logs from AWS services with entity linking support are automatically displayed in the Cloud application for in-context analysis.

If you see logs coming in, AWS logs streaming is configured successfully.

If there are no logs within 10 minutes, see the Troubleshooting section below.

Amazon Data Firehose includes optional parameters (key-value pairs) in each HTTP call. These instance parameters can help you identify and manage your destinations since they're processed and added automatically to ingested log records as attributes.
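These parameters are configured on the Firehose destination as common attributes. A hedged AWS CLI sketch follows; the version ID, destination ID, and the env attribute are example values (read the current version and destination IDs first with aws firehose describe-delivery-stream).

```shell
# Attach a key-value parameter that Firehose sends with every HTTP call;
# Dynatrace adds it to each ingested log record as an attribute.
aws firehose update-destination \
  --delivery-stream-name firehose-to-dynatrace \
  --current-delivery-stream-version-id 1 \
  --destination-id destinationId-000000000001 \
  --http-endpoint-destination-update '{
    "RequestConfiguration": {
      "CommonAttributes": [
        {"AttributeName": "env", "AttributeValue": "production"}
      ]
    }
  }'
```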

Supported services

| Service name | Log enrichment | Entity linking |
| --- | --- | --- |
| AWS Lambda 1 | Applicable | Applicable |
| AWS App Runner | Applicable | Applicable |
| AWS CloudTrail 2 | Applicable | - |
| Amazon API Gateway | Applicable | - |
| Amazon SNS | Applicable | Applicable |
| Amazon RDS | Applicable | Applicable |
| All services that write to CloudWatch | Applicable | - |
| All services that send logs to Data Firehose directly | - | - |
1 You can modify the AWS Lambda log group name. For log enrichment, use the default log group name /aws/lambda/<function name>.

2 You can modify the AWS CloudTrail log group name. For log enrichment, start the log group name with aws-cloudtrail-logs.

Environment ActiveGate support

ActiveGate version 1.287+

By default, Environment ActiveGate listens for API requests on port 9999. However, currently, only port 443 is supported for HTTP endpoint data delivery for Amazon Data Firehose.

Your ActiveGate needs to be configured with a valid CA-signed SSL certificate to be able to receive logs from Amazon Data Firehose.

To successfully deliver data from Amazon Data Firehose to the Environment ActiveGate API endpoint, we recommend setting up port forwarding from port 443 to 9999 on the ActiveGate host.

Below we have included a few examples of such configurations. Consult the documentation specific to your operating system and networking solutions for details.

Example configurations

Amazon Linux, RedHat Linux

firewalld provides a dynamically managed firewall. See the documentation for details.

To add port forwarding with firewalld (these commands must be run as root; the final reload applies the permanent rules to the running firewall):

firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=9999 --permanent
firewall-cmd --zone=public --add-port=9999/tcp --permanent
firewall-cmd --reload

Ubuntu Linux

The Uncomplicated Firewall (ufw) is a frontend for iptables. See the documentation for details.

To add port forwarding with ufw (these commands must be run as root):

  1. In the /etc/ufw/before.rules file, add a NAT table after the filter table (the table that starts with *filter and ends with COMMIT):
*nat
:PREROUTING ACCEPT [0:0]
-A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 9999
COMMIT
  2. Allow traffic through ports 443 and 9999:
ufw allow 443/tcp
ufw allow 9999/tcp
  3. Restart ufw.

Windows Server 2022

Network shell (netsh) is a command-line utility that allows you to configure and display the status of various network communications server roles and components. See the documentation for details.

To add port forwarding with netsh interface portproxy:

netsh interface portproxy add v4tov4 listenport=443 connectport=9999 connectaddress=<the current IP address of your computer>

Using the netsh interface portproxy add v4tov6/v6tov4/v6tov6 options, you can create port forwarding rules between IPv4 and IPv6 addresses.
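To verify the rule, or to remove it when it is no longer needed, the following netsh commands can be used (run from an elevated command prompt):

```shell
:: List the active IPv4-to-IPv4 port-forwarding rules
netsh interface portproxy show v4tov4

:: Remove the forwarding rule for port 443
netsh interface portproxy delete v4tov4 listenport=443
```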

Troubleshooting

If the logs forwarded from Data Firehose are not available in your environment, follow the steps below:

  1. Verify that logs from CloudWatch are sent to Firehose. Check the Firehose delivery stream metrics (Incoming put requests, Incoming bytes). If no data is sent to Firehose, verify that the CloudWatch log group is producing current logs and that the IAM role selected when creating the subscription filter has permission to write logs to Firehose.
  2. Verify that logs are successfully sent from Firehose to Dynatrace. Check the Firehose delivery stream metrics (HTTP endpoint delivery success, Records delivered to HTTP endpoint). If there are errors, check the Firehose CloudWatch logs and verify the Dynatrace API token and API URL.
  3. Verify that logs are accepted by Dynatrace. Check the Dynatrace self-monitoring metric in the Data Explorer:
dsfm:active_gate.rest.request_count:filter(and(or(eq(operation,"POST /logs/ingest/aws_firehose")))):splitBy(response_code):sort(value(auto,descending)):limit(20)

There should be metric data, and response_code should only have the value 200.
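The same self-monitoring metric can also be queried through the Dynatrace API. A sketch with curl, assuming API_TOKEN holds a token with the metrics.read scope:

```shell
# Query the ActiveGate request-count metric for the Firehose ingest endpoint,
# split by HTTP response code (200 indicates accepted logs).
curl -G "https://<your_environment_ID>.live.dynatrace.com/api/v2/metrics/query" \
  -H "Authorization: Api-Token ${API_TOKEN}" \
  --data-urlencode 'metricSelector=dsfm:active_gate.rest.request_count:filter(eq(operation,"POST /logs/ingest/aws_firehose")):splitBy(response_code)'
```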

Limitations

The ingest throughput is limited by Amazon Data Firehose. For more details, see Amazon Data Firehose Quota. Amazon can increase Firehose limits on request.

AWS Firehose does not support connections through VPC for HTTP endpoints.