Dynatrace provides you with a dedicated AWS Lambda layer that contains the Dynatrace extension for AWS Lambda. You need to add the publicly available layer for your runtime and region to your function. Then, based on your configuration method, Dynatrace provides a template or configuration for your AWS Lambda function.
Activate AWS Lambda
Choose a configuration method
Specify a Dynatrace API endpoint
Enable Real User Monitoring
Define an AWS layer name
Deployment
Configuration options
Dynatrace AWS integration
The configured memory size affects the amount of virtual CPU available to the function; to learn more, see Monitoring overhead below.
To get started
The Dynatrace Lambda agent is distributed as a layer that can be enabled and configured manually or using well known Infrastructure as Code (IaC) solutions.
On the Enable Monitoring for AWS Lambda Functions page, use the How will you configure your AWS Lambda functions? list to select your preferred method, and then make sure you set all properties for the selected method before copying the generated configuration snippets.
If you select this method, Dynatrace provides you with a dtconfig.json file to place in the root folder of your Lambda deployment. When using this method, make sure that you add the Dynatrace Lambda layer to your function. You can do this through the AWS console (Add layer > Specify an ARN and paste the ARN displayed on the deployment page) or by using an automated solution of your choice.
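As a hedged sketch of what the generated file may contain (the property names Connection.BaseUrl and Connection.AuthToken follow the Connection.* naming used elsewhere on this page; the values are placeholders, so copy the actual snippet from the deployment page):

```json
{
  "Connection": {
    "BaseUrl": "https://your-environment.live.dynatrace.com",
    "AuthToken": "dt0a01.your-token"
  }
}
```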
Enter environment variables via the AWS Console
Enter the Lambda layer ARN via the AWS Console
When using this method, make sure that you add the Dynatrace Lambda layer to your function. The layer, as well as the environment variables, can be set either manually through the AWS console (Add layer > Specify an ARN and paste the ARN displayed on the deployment page) or by using an automated solution of your choice.
Client-side decryption of environment variables (Security in Transit) is not supported.
If you select this method, Dynatrace provides you with:
Values to define environment variables for the AWS Lambda functions that you want to monitor
Lambda layer ARN
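As a sketch of what you would enter in the AWS console (the variable names DT_CONNECTION_BASE_URL and DT_CONNECTION_AUTH_TOKEN appear elsewhere on this page; the values and the layer ARN shown here are placeholders, so copy the generated values from the deployment page):

```shell
# Environment variables (AWS console: Configuration > Environment variables)
DT_CONNECTION_BASE_URL=https://your-environment.live.dynatrace.com
DT_CONNECTION_AUTH_TOKEN=dt0a01.your-token

# Layer to add (AWS console: Add layer > Specify an ARN); placeholder ARN
# arn:aws:lambda:<region>:<account>:layer:Dynatrace_OneAgent:<version>
```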
Terraform is a popular Infrastructure as Code (IaC) solution. If you select this method, Dynatrace provides you with:
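A minimal sketch of the Terraform shape, assuming placeholder resource names, runtime, layer ARN, and connection values (use the snippet Dynatrace generates for your environment):

```hcl
resource "aws_lambda_function" "monitored" {
  function_name = "my-function"
  runtime       = "nodejs20.x"
  handler       = "index.handler"
  filename      = "function.zip"
  role          = aws_iam_role.lambda_exec.arn

  # Dynatrace layer ARN as displayed on the deployment page (placeholder here)
  layers = ["arn:aws:lambda:<region>:<account>:layer:Dynatrace_OneAgent:<version>"]

  environment {
    variables = {
      DT_CONNECTION_BASE_URL   = "https://your-environment.live.dynatrace.com"
      DT_CONNECTION_AUTH_TOKEN = "dt0a01.your-token"
    }
  }
}
```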
The AWS Serverless Application Model (SAM) is an open-source framework for building serverless applications.
If you select this method, Dynatrace provides you with a template to define the AWS Lambda function. This includes all the configuration that you need to integrate the Dynatrace AWS Lambda extension.
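A hedged sketch of the SAM template shape, assuming placeholder names and values (the generated template may differ):

```yaml
Resources:
  MonitoredFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs20.x
      CodeUri: ./src
      Layers:
        # Dynatrace layer ARN from the deployment page (placeholder here)
        - arn:aws:lambda:<region>:<account>:layer:Dynatrace_OneAgent:<version>
      Environment:
        Variables:
          DT_CONNECTION_BASE_URL: https://your-environment.live.dynatrace.com
          DT_CONNECTION_AUTH_TOKEN: dt0a01.your-token
```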
Serverless Framework is a framework for deploying serverless stacks.
If you select this method, Dynatrace provides you with a template to define the AWS Lambda function. This includes all the configuration that you need to integrate the Dynatrace AWS Lambda extension.
AWS CloudFormation is an IaC solution that enables provisioning of a wide range of AWS services.
If you select this method, Dynatrace provides you with a template to define the AWS Lambda function. This includes all the configuration that you need to integrate the Dynatrace AWS Lambda extension.
This is an optional step that enables you to specify a Dynatrace API endpoint to which monitoring data will be sent.
The typical scenario is to deploy a Dynatrace ActiveGate in close proximity (same region) to the Lambda functions that you want to monitor. This reduces network latency, which can impact the execution and cold start time of your Lambda functions; the agent usually sends one network request per Lambda invocation, at the end of the invocation. See the Monitoring overhead section below for typical overhead numbers.
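For example, pointing the connection at an ActiveGate instead of the Dynatrace cluster could look like the following sketch (hypothetical hostname, port, and environment ID; use the endpoint shown for your deployment):

```shell
DT_CONNECTION_BASE_URL=https://your-activegate.example.com:9999/e/<environment-id>
```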
This is an optional step to use Real User Monitoring (RUM), which provides you with deep insights into user actions and performance via the browser or in mobile apps.
Make sure the x-dtc header is allowed in the CORS settings of your monitored Lambda functions.
RUM for Lambda functions requires a specific header (x-dtc) to be sent with XHR calls to AWS. To enable it, the CORS settings of your AWS deployment must allow the x-dtc header during preflight (OPTIONS) requests. To configure CORS and allow the x-dtc header for your specific setup, see Enable CORS on a resource using the API Gateway console in AWS documentation.
To configure the x-dtc header for XHR calls to your Lambda functions, allow it for the API Gateway domain that receives the calls, for example TheAwsUniqueId.execute-api.us-east-1.amazonaws.com.
If requests start failing after enabling this option, review your CORS settings. To learn how to configure CORS, see Enable CORS on a resource using the API Gateway console in AWS documentation.
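For an API Gateway HTTP API, the resulting CORS configuration could look like this sketch (the origin is a placeholder; REST APIs are instead configured per resource, as described in the linked AWS documentation):

```json
{
  "AllowOrigins": ["https://your-app.example.com"],
  "AllowMethods": ["GET", "POST", "OPTIONS"],
  "AllowHeaders": ["content-type", "x-dtc"]
}
```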
Select the AWS region and the runtime of the Lambda function to be monitored. These settings are required to provide the correct layer ARN.
Copy the configuration snippets into your deployment and use your deployment method of choice to enable the layer and set the configuration for your Lambda functions.
If inbound (non-XHR) requests to your Lambda functions are not connected to the calling application, configure the API Gateway to pass through the Dynatrace tag. To do this, enable Use Lambda Proxy Integration on the Integration Request configuration page of the API Gateway.
If the API Gateway is configured from the Lambda configuration page, this setting will be enabled by default. For more information, see Enable CORS on a resource using the API Gateway console.
AWS Lambda also supports non-proxy integration, which, without some additional configuration, prevents Dynatrace from connecting incoming requests to the calling application.
To make tracing of calls from other monitored applications and RUM detection work in this scenario, create a custom mapping template in the integration requests configuration.
In the AWS API Gateway Console, go to Resources and select a request method (for example, GET).
Select Mapping Templates and then select Add mapping template.
Add the following content to the template:
{
    "path": "$context.path",
    "httpMethod": "$context.httpMethod",
    "headers": {
        #foreach($param in ["x-dynatrace", "traceparent", "tracestate", "x-dtc", "referer", "host", "x-forwarded-proto", "x-forwarded-for", "x-forwarded-port"])
        "$param": "$util.escapeJavaScript($input.params().header.get($param))"#if($foreach.hasNext),#end
        #end
    },
    "requestContext": {
        "stage": "$context.stage"
    }
}
The x-dtc header is specific to tracing RUM scenarios, whereas the remaining headers are generally needed to link traces together and extract relevant information, such as web request metadata.
Select Save to save your configuration.
Redeploy your API.
OneAgent version 1.295+
Instead of specifying the authentication token explicitly in the configuration, you can configure OneAgent to fetch a token stored in AWS Secrets Manager.
To use this option, grant the secretsmanager:GetSecretValue permission for the authentication token secret ARN to the Lambda function monitored by OneAgent. For details, see Authentication and access control for AWS Secrets Manager in the AWS Secrets Manager documentation.
OneAgent reads the current version of the secret (the version with the AWSCURRENT label). For details, see What's in a Secrets Manager secret? in the AWS Secrets Manager documentation.
To fetch the token for a tracing connection, set the token secret ARN either in the environment variable DT_CONNECTION_AUTH_TOKEN_SECRETS_MANAGER_ARN or in the JSON property Connection.AuthTokenSecretsManagerArn.
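The permission grant can be sketched as an IAM policy statement attached to the function's execution role (the secret ARN is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:<region>:<account>:secret:<secret-name>"
    }
  ]
}
```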
This option always overrides DT_CONNECTION_AUTH_TOKEN (Connection.AuthToken). If the fetch fails, OneAgent won't be able to export trace data.
A fetch accesses AWS Secrets Manager only once, during the Lambda function's initialization phase; this increases the Lambda function's cold start duration.
The Node.js and Python layers use the AWS SDK version provided by the AWS Lambda runtime to access the secret.
To fetch the token for log collection, configure a separate fetch in the same way.
One of the important metrics for Lambda is the frequency of cold starts. A cold start happens when a new instance of a Lambda function is invoked. Such cold starts take longer and add latency to your requests.
A high cold-start frequency can indicate errors or an uneven load pattern that can be mitigated using provisioned concurrency. Dynatrace reports such cold starts as a property on the distributed trace.
To analyze cold starts, select View all requests on the Lambda service details page.
In the request filter, select Function cold start in the Request property section.
This displays a page that you can filter by invocations containing Only cold start or No cold start.
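Unrelated to Dynatrace's own detection, the mechanics of a cold start can be illustrated with a minimal Python handler sketch (hypothetical code, not part of the extension): module-level code runs once per function instance, so a module-level flag distinguishes the first invocation from warm ones.

```python
# Module-level code executes once per function instance, i.e. on a cold start.
_is_cold_start = True

def handler(event, context):
    """Hypothetical handler that reports whether this invocation was a cold start."""
    global _is_cold_start
    cold = _is_cold_start   # True only for the first invocation on this instance
    _is_cold_start = False  # subsequent (warm) invocations on this instance see False
    return {"cold_start": cold}
```

A new instance (and thus a new cold start) is created whenever AWS scales out or recycles the execution environment, which is why uneven load patterns drive the cold start frequency up.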
Enabling monitoring unavoidably induces overhead to the monitored function execution. Overhead depends on several factors, such as function runtime technology, configuration, and concrete function characteristics such as code size or execution duration and complexity.
The amount of memory configured for a function directly impacts the compute resources assigned to the function instance. For more details, see Memory and computing power.
The worst case for measured overhead is a function with an empty function handler and the minimum memory configuration.
For the minimum memory configuration requirement, see Requirement for Java Lambda functions.
Latency depends on the function implementation but is typically below 10%. That is, the time until the caller of a Lambda function receives a response might increase by up to 10% when the agent layer is added, compared to when the agent is not present.
The following table contains uncompressed layer sizes.
While not mandatory, we recommend that you set up Dynatrace Amazon CloudWatch integration. This allows data ingested via AWS integration to be seamlessly combined with the data collected by the Dynatrace AWS Lambda extension.
The Dynatrace AWS Lambda extension does not support the capture of method-level request attributes.
Most Dynatrace AWS Lambda extensions don't capture IP addresses of outgoing HTTP requests. This results in unmonitored hosts if the called service isn't monitored with Dynatrace.
Fetching the authentication token from AWS Secrets Manager is not supported if Lambda SnapStart is enabled.
Incoming calls: Dynatrace can monitor incoming calls only if they are invoked via:
For other invocation types, OneAgent cannot capture any specific information, nor can it connect the trace to any parent. For invocations via the AWS SDK, the client must be instrumented with Dynatrace in order to connect the trace.
Outgoing requests to another AWS Lambda function: In a monitored AWS Lambda function, the following libraries are supported for outgoing requests to another AWS Lambda function:
Outgoing HTTP requests: In a monitored AWS Lambda function, the following libraries/HTTP clients are supported for outgoing HTTP requests:
Node.js: http.request, fetch API (OneAgent version 1.285+)
Python: requests, aiohttp-client, urllib3, redis-py (OneAgent version 1.289+)
Additional requirements for incoming calls for Java only: To correctly monitor the configured handler method, one of the following must apply:
The handler calls the base implementation (super.handleRequest(...)).
The handler method has a Context (com.amazonaws.services.lambda.runtime.Context) parameter.
The handler method is named handleRequest and is configured as the handler method.
However, as long as the previous requirements are fulfilled, the agent supports any valid handler function, even if not derived from that base interface. The event classes from the com.amazonaws.services.lambda.runtime.events package are used by OneAgent to match the corresponding invocation types for incoming calls.
Node.js sensors and instrumentations for ES modules:
The Node.js AWS Lambda extension sensors (instrumentations) don't support ECMAScript modules. This means that the extension won't properly monitor outgoing calls (for example, HTTP or AWS SDK requests).
OpenTelemetry instrumentations don't support ECMAScript modules by default.
There is a way to make OpenTelemetry instrumentations work with ECMAScript modules, but it's experimental and has some limitations. For details, see Instrumentation for ES Modules In NodeJS (experimental).