At present, Amazon Data Firehose log streams are the primary log ingest method (push-based).
Amazon Data Firehose lets you create a fully managed, serverless log pipeline whose destination is the Dynatrace log ingest API. This architecture allows you to register AWS log producers quickly and with little effort. It also auto-scales to meet log throughput demand and efficiently handles back-pressure with configurable buffers and a smart retry policy.
When you use the recommended flow to create an AWS connection, the CloudFormation stack creates one Firehose stream per enabled monitored AWS Region. You can change the AWS Regions from which logs can be pushed by updating the AWS connection's CloudFormation main stack.
If you chose not to deploy AWS logs during onboarding, update the AWS connection's CloudFormation main stack: keep the connection's friendly name, set AWS logs to TRUE, provide the desired AWS Regions (separated by commas), and update the stack.
While it's possible to forward logs to your Dynatrace environment without creating an AWS connection, we recommend onboarding logs (and any other signal) by first creating an AWS connection.
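The stack update above can also be performed with the AWS CLI. This is a minimal sketch: the stack name and the parameter keys (`ConnectionFriendlyName`, `DeployAwsLogs`, `LoggingRegions`) are hypothetical placeholders, so check your deployed stack's actual parameter names before running it.

```shell
# Sketch: enable AWS logs on the existing connection stack without
# re-uploading the template. Parameter keys below are illustrative;
# inspect your stack's parameters first:
aws cloudformation describe-stacks \
  --stack-name dynatrace-aws-connection \
  --query 'Stacks[0].Parameters'

# Update only the log-related parameters, keeping the rest unchanged.
aws cloudformation update-stack \
  --stack-name dynatrace-aws-connection \
  --use-previous-template \
  --parameters \
    ParameterKey=ConnectionFriendlyName,UsePreviousValue=true \
    ParameterKey=DeployAwsLogs,ParameterValue=TRUE \
    ParameterKey=LoggingRegions,ParameterValue="us-east-1\,eu-west-1" \
  --capabilities CAPABILITY_NAMED_IAM
```

Note the escaped comma (`\,`) inside the Region list, which prevents the CLI from splitting the value into separate parameters.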
Forwarding logs from an AWS account without an existing AWS connection supports neither log record entity linking nor signal/entity enrichment.
CloudWatch Logs (available in Preview): This is a common and recommended log ingest type, as many AWS services ship with CloudWatch Logs support. In this type, the AWS service sends logs to a designated log group, which you can then subscribe to the Dynatrace-created Firehose stream, enabling end-to-end log ingest in minutes.
In this mode, Dynatrace SaaS attempts to map the log record producer (the AWS resource) to its corresponding Dynatrace Smartscape node (log record entity linking). Successful linking enriches the log records with the Dynatrace common attributes, including AWS tags. Linking and enrichment unlock multiple upstream platform use cases.
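Subscribing a log group to the Firehose stream can be done from the AWS CLI. The sketch below assumes hypothetical names: replace the log group, delivery stream ARN, and IAM role ARN (which must allow CloudWatch Logs to write to Firehose) with the values from your deployment.

```shell
# Subscribe a CloudWatch log group to the Dynatrace-created Firehose stream.
# An empty filter pattern forwards every log record in the group.
aws logs put-subscription-filter \
  --log-group-name "/aws/lambda/my-function" \
  --filter-name "dynatrace-firehose" \
  --filter-pattern "" \
  --destination-arn "arn:aws:firehose:us-east-1:123456789012:deliverystream/dynatrace-logs" \
  --role-arn "arn:aws:iam::123456789012:role/CWLtoFirehoseRole"
```

Once the subscription filter is in place, new records written to the log group flow through the Firehose stream to the Dynatrace log ingest API without further configuration.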
Direct push to an Amazon Data Firehose stream (available in Preview): For certain AWS services, you can forward logs directly to the Dynatrace-created Firehose stream.
In this mode, Dynatrace can perform neither log record entity linking nor log record enrichment.
AWS does not add the required resource metadata to the log records it generates, so Dynatrace cannot map the log record to the Dynatrace entity representing the AWS resource, and therefore cannot link or enrich those log records. For this reason, we highly recommend choosing CloudWatch log groups as the source whenever possible.
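In the direct-push mode, you can verify that the stream accepts records by sending a test record with the AWS CLI. The delivery stream name below is a hypothetical placeholder; Firehose expects the `Data` field to be base64-encoded (here, the payload `{"message":"test"}`).

```shell
# Send a single base64-encoded test record to the Firehose stream.
aws firehose put-record \
  --delivery-stream-name "dynatrace-logs" \
  --record '{"Data":"eyJtZXNzYWdlIjoidGVzdCJ9"}'
```

A successful call returns a `RecordId`, confirming the stream is reachable; the record then appears in Dynatrace as an unlinked, unenriched log line.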
You can work around this limitation and enable linking and enrichment of those log records by using a dedicated Firehose stream per log source.
See the list of supported AWS service resources for log record linking and enrichment, with CloudWatch as the logs source.
| Service name | Service resource | Log group name requirement |
|---|---|---|
| AWS Lambda | AWS::Lambda::Function | Use the default log group name: /aws/lambda/<function name>. |
| Amazon RDS | AWS::RDS::DBInstance | None |
| Amazon SNS | AWS::SNS::Topic | None |
| AWS App Runner | AWS::AppRunner::Service | None |
| Amazon EKS | AWS::EKS::Cluster | None |
| Amazon Route 53 | AWS::Route53::HostedZone | Must start with: /aws/route53/. |
| Amazon API Gateway | AWS::ApiGateway::RestApi, AWS::ApiGatewayV2::Api | ApiGateway access log group name must match the API-Gateway-Access-Logs_<rest-api-id>/<stage-name> pattern. ApiGatewayV2 access log group name must start with: API-GatewayV2-Access-Logs. |
| Amazon ElastiCache | AWS::ElastiCache::CacheCluster | Must start with: /aws/elasticache/. |
| Amazon OpenSearch Service | AWS::OpenSearchService::Domain | None |
| Amazon Managed Streaming for Apache Kafka | AWS::MSK::Cluster, AWS::KafkaConnect::Connector | MSK Broker log group name must start with: /aws/msk/broker. MSK Connector log group name must start with: /aws/msk/connector. |
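The naming requirements in the table can be checked before you wire up a subscription. The helper below is an illustrative sketch (the function name and service keys are our own, not part of any AWS or Dynatrace tooling) that tests a log group name against the prefix rules listed above.

```shell
#!/bin/sh
# Sketch: check whether a CloudWatch log group name meets the naming
# requirement for log record linking, per the table above.
# Prints "linkable" when the name matches, "check-name" otherwise.
check_log_group() {
  service="$1"; name="$2"
  case "$service" in
    lambda)      pattern='^/aws/lambda/.+' ;;       # default Lambda log group
    route53)     pattern='^/aws/route53/' ;;
    elasticache) pattern='^/aws/elasticache/' ;;
    msk-broker)  pattern='^/aws/msk/broker' ;;
    msk-connector) pattern='^/aws/msk/connector' ;;
    *)           pattern='.' ;;                     # no naming requirement
  esac
  if printf '%s' "$name" | grep -Eq "$pattern"; then
    echo "linkable"
  else
    echo "check-name"
  fi
}

check_log_group lambda /aws/lambda/my-function   # prints: linkable
check_log_group route53 my-zone-logs             # prints: check-name
```

Services listed with no naming requirement (for example Amazon RDS or Amazon SNS) always pass, since linking for them does not depend on the log group name.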
We are constantly working to add more linkable sources; your feedback is important to us.