At present, the Amazon Data Firehose log stream is the primary (push-based) log ingest method.
Amazon Data Firehose lets you create a fully managed, serverless log pipeline whose destination is the Dynatrace log ingest API.
When you use the recommended flow to create an AWS connection, the CloudFormation stack creates a Firehose stream per enabled monitored AWS region. To change the AWS Regions that logs can be pushed from, update the AWS connection's CloudFormation main stack.
If you chose not to deploy AWS logs during onboarding, update the AWS connection's CloudFormation main stack: set AWS logs to TRUE, provide the AWS Regions to forward logs from (separated by commas), and update the stack.
While it's possible to forward logs into your Dynatrace environment without creating an AWS connection, we recommend onboarding logs (and any other signal) by first creating an AWS connection.
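As a sketch, the stack update can also be scripted with boto3. The stack name and parameter keys below are placeholders, not the stack's actual identifiers; confirm them in the CloudFormation console before running:

```python
def stack_update_parameters(regions):
    # Parameter keys here are illustrative placeholders -- check the
    # real parameter names on your AWS connection's main stack.
    return [
        {"ParameterKey": "DeployAwsLogs", "ParameterValue": "TRUE"},
        {"ParameterKey": "AwsLogsRegions",
         "ParameterValue": ",".join(regions)},  # Regions, comma-separated
    ]

params = stack_update_parameters(["us-east-1", "eu-west-1"])
# Apply with boto3 (requires AWS credentials and the real stack name):
# boto3.client("cloudformation").update_stack(
#     StackName="<your-main-stack>", UsePreviousTemplate=True,
#     Capabilities=["CAPABILITY_NAMED_IAM"], Parameters=params)
```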
Forwarding logs from an AWS account without an existing AWS connection does not support log record entity linking or signal/entity enrichment.
CloudWatch Logs (available in Preview): This is a common and recommended log ingest type; many AWS services ship with CloudWatch Logs support. The AWS service sends logs to a designated log group, which you can then subscribe to the Dynatrace-created Firehose log stream, enabling end-to-end log ingest in minutes.
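The subscription step can be sketched with boto3's CloudWatch Logs client (`logs:PutSubscriptionFilter`); the log group, filter name, Firehose stream ARN, and IAM role ARN below are all placeholders:

```python
def subscription_filter_args(log_group, firehose_arn, role_arn):
    # Arguments for logs:PutSubscriptionFilter -- pass to
    # boto3.client("logs").put_subscription_filter(**args).
    return {
        "logGroupName": log_group,
        "filterName": "dynatrace-firehose",  # any unique filter name
        "filterPattern": "",                 # empty pattern forwards every record
        "destinationArn": firehose_arn,      # the Dynatrace-created Firehose stream
        "roleArn": role_arn,                 # role CloudWatch Logs assumes to write to Firehose
    }

args = subscription_filter_args(
    "/aws/lambda/my-function",
    "arn:aws:firehose:us-east-1:123456789012:deliverystream/dynatrace-logs",
    "arn:aws:iam::123456789012:role/cwl-to-firehose",
)
# boto3.client("logs").put_subscription_filter(**args)
```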
In this mode, Dynatrace SaaS attempts to map the log record producer (the AWS resource) to the corresponding Dynatrace Smartscape entity (log record entity linking). Successful linking enriches the log records with Dynatrace common attributes, including AWS tags. Linking and enrichment unlock multiple upstream platform use cases.
Direct push to Amazon Data Firehose stream (available in Preview): For certain AWS services, you can forward the logs directly into the Dynatrace-created Firehose stream.
In this mode, Dynatrace can't perform log record entity linking or log record enrichment.
AWS does not add the required resource metadata to the log records it generates in this mode, so Dynatrace can't map a log record to its Dynatrace entity (the AWS resource), and therefore can't link or enrich those log records. We highly recommend choosing CloudWatch log groups as the source whenever possible.
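If you do use direct push, records go through `firehose:PutRecord`. A minimal sketch of building one record payload follows (newline-delimited JSON is a common convention, and the stream name in the comment is a placeholder):

```python
import json

def firehose_log_record(message, **attributes):
    # One record for firehose:PutRecord; the Data blob must be bytes.
    line = json.dumps({"message": message, **attributes}) + "\n"
    return {"Data": line.encode("utf-8")}

record = firehose_log_record("user login", service="auth", level="INFO")
# Send with boto3 (stream name is a placeholder):
# boto3.client("firehose").put_record(
#     DeliveryStreamName="<dynatrace-created-stream>", Record=record)
```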
You can work around this limitation and allow linking and enrichment of those log records by using a dedicated Firehose stream per log.
The following table lists the AWS service resources that support log record linking and enrichment when CloudWatch is the logs source.
| Service name | Service resource | Log group name requirement |
|---|---|---|
| AWS Lambda | AWS::Lambda::Function | Use the default log group name: /aws/lambda/<function name>. |
| Amazon RDS | AWS::RDS::DBInstance | None |
| Amazon SNS | AWS::SNS::Topic | None |
| AWS App Runner | AWS::AppRunner::Service | None |
| Amazon EKS | AWS::EKS::Cluster | None |
| Amazon Route 53 | AWS::Route53::HostedZone | Must start with: /aws/route53/. |
| Amazon API Gateway | AWS::ApiGateway::RestApi, AWS::ApiGatewayV2::Api | ApiGateway access logs name must match the API-Gateway-Access-Logs_<rest-api-id>/<stage-name> pattern. ApiGatewayV2 access logs name must start with: API-GatewayV2-Access-Logs. |
| Amazon ElastiCache | AWS::ElastiCache::CacheCluster | Must start with: /aws/elasticache/. |
| Amazon OpenSearch Service | AWS::OpenSearchService::Domain | None |
| Amazon Managed Streaming for Apache Kafka | AWS::MSK::Cluster, AWS::KafkaConnect::Connector | MSK Broker logs name must start with: /aws/msk/broker. MSK Connector logs name must start with: /aws/msk/connector. |
We are continually working to add additional linkable sources. Your feedback is important to us.