Dynatrace can enrich your ingested log data with additional information that helps Dynatrace to recognize, correlate, and evaluate the data. Log enrichment results in a more refined analysis of your logs.
OneAgent version 1.239+
Automatically connecting log data to traces works for all log data, no matter how the log data was ingested by Dynatrace.
You can also manually enrich ingested log data by defining a log pattern that includes the dt.span_id, dt.trace_id, dt.trace_sampled, and dt.entity.process_group_instance fields.
Log enrichment enables you to:
Supported frameworks for trace/span log context enrichment:
Automatic log enrichment is supported for error.logs and access.logs.
Automatic log enrichment is supported for error.logs, but manual log enrichment is required for access.logs.
Supported frameworks for trace/span unstructured log context enrichment:
There are two ways to enrich the log data that you send to Dynatrace:
You can enable log enrichment for a particular technology used to create log data and let Dynatrace automatically inject additional attributes into every log record received. This method is recommended for structured log data of known technologies.
Use Process group override to limit log enrichment to a specific process group or a process within a process group.
To enable log enrichment for a specific technology:
Log enrichment modifies your ingested log data and adds the following information to each detected log record:
dt.trace_id
dt.span_id
dt.entity.process_group_instance
For structured log data such as JSON, XML, and well-defined text log formats, Dynatrace adds an attribute field to the log record entry.
Log data in JSON format is enriched with additional dt.trace_id, dt.span_id, and dt.entity.process_group_instance properties.
{
  "severity": "error",
  "time": 1638957438023,
  "pid": 1,
  "hostname": "paymentservice-788946fdcd-42lgq",
  "name": "paymentservice-charge",
  "dt.trace_id": "d04b42bc9f4b6ecdbf6bc9f4b6ecdbc",
  "dt.span_id": "9adc716eb808d428",
  "dt.entity.process_group_instance": "PROCESS_GROUP_INSTANCE-27204EFED3D8466E",
  "message": "Unsupported card type for cardNumber=************0454"
}
Log data in XML format is enriched with additional <dt.trace_id>, <dt.span_id>, and <dt.entity.process_group_instance> nodes.
<?xml version="1.0" encoding="windows-1252" standalone="no"?>
<record>
  <date>2021-08-24T14:41:36.565218700Z</date>
  <millis>1629816096565</millis>
  <nanos>218700</nanos>
  <sequence>0</sequence>
  <logger>com.apm.testapp.logging.jul.XMLLoggingSample</logger>
  <level>INFO</level>
  <class>com.apm.testapp.logging.jul.BaseLoggingSample</class>
  <method>info</method>
  <thread>1</thread>
  <message>Update completed successfully.</message>
  <dt.trace_id>513fcd4e9b08792fcd4e9b08792</dt.trace_id>
  <dt.span_id>125840e3125840e3</dt.span_id>
  <dt.entity.process_group_instance>PROCESS_GROUP_INSTANCE-27204EFED3D8466E</dt.entity.process_group_instance>
</record>
Check if Dynatrace log enrichment has an impact on your existing log data pipeline before using automatic log enrichment on unstructured log data.
Unstructured log data typically consists of raw plain text that is sequentially ordered and designed to be read by people. Dynatrace does not enrich unstructured log data automatically. It can enrich unstructured log data, but appending additional information to log records may have an impact on third-party tools that consume the same log data.
Log data in raw text is enriched with an additional [!dt dt.trace_id=$trace_id, dt.span_id=$span_id, dt.entity.process_group_instance=$dt.entity.process_group_instance] string (attributes and their values).
127.0.0.1 - [21/Oct/2021:10:33:28 +0200] GET /index.htm HTTP/1.1 404 597 [!dt dt.trace_id=aa764ee37ebaa764ee37eaa764ee37e,dt.span_id=b93ede8b93ede8, dt.entity.process_group_instance=PROCESS_GROUP_INSTANCE-27204EFED3D8466E]
OneAgent version 1.239+
You can manually enrich your Dynatrace-ingested log data by defining a log pattern that includes the dt.span_id, dt.trace_id, dt.trace_sampled, and dt.entity.process_group_instance fields. You can enable manual log enrichment for a specific technology by following the Log enrichment steps.
Be sure to follow these rules for the format of the enriched fields in an unstructured log:
The enriched fields must be placed in square brackets ([]) with a !dt prefix, for example: [!dt dt.trace_id=$dt_trace_id,dt.span_id=$dt_span_id, dt.entity.process_group_instance=$dt.entity.process_group_instance]
Newline characters (\n) must be excluded from the enrichment definition.
Suppose you want to manually enrich your NGINX log data with dt.trace_id, dt.span_id, and dt.trace_sampled. The NGINX configuration file contains numerous standard NGINX variables; your log format definition must be in the log_format directive. For example:
log_format custom '$remote_addr - [$time_local] $request $status $body_bytes_sent [!dt dt.trace_id=$dt_trace_id,dt.span_id=$dt_span_id,dt.trace_sampled=$dt_trace_sampled]';
access_log logs/access.log custom;
The result will be an access.log file containing the enriched log records:
127.0.0.1 - [22/Mar/2022:08:50:45 +0100] GET /index.htm HTTP/1.1 200 30 [!dt dt.trace_id=b9e5c9ec08be5fab5071d76f427be7da,dt.span_id=43c5bb9432593963,dt.trace_sampled=true]
127.0.0.1 - [22/Mar/2022:08:50:45 +0100] GET /index.htm HTTP/1.1 200 30 [!dt dt.trace_id=01e52950b145d97bf22345e68c5e6c58,dt.span_id=de819d856eecb236,dt.trace_sampled=true]
For OneAgent version 1.237 and earlier, the NGINX variables used are different. For example:
log_format custom '$remote_addr - [$time_local] $request $status $body_bytes_sent [!dt dt.trace_id=$trace_id,dt.span_id=$span_id]';
access_log logs/access.log custom;
The result will be an access.log file containing the enriched log records:
127.0.0.1 - [21/Oct/2021:10:33:28 +0200] GET /index.htm HTTP/1.1 404 597 [!dt dt.trace_id=e1c0afeb0b8a91d7748139aa764ee37e,dt.span_id=e5e6748fab93ede8]
127.0.0.1 - [21/Oct/2021:10:33:31 +0200] GET /index.html HTTP/1.1 200 1056 [!dt dt.trace_id=81fe7816ba6c38f7aa09aef3684cd941,dt.span_id=3bdacc466ae073cd]
If you use a logging framework and log formatter that allows custom log patterns, you can adapt the pattern in the log formatter and directly access the Dynatrace enrichment attributes.
In the Log4j PatternLayout, you can specify a pattern like this to include Dynatrace enrichment information:
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} dt.trace_id=%X{dt.trace_id} dt.span_id=%X{dt.span_id} dt.entity.process_group_instance=%X{dt.entity.process_group_instance} - %msg%n"/>
Logback is a successor to the log4j project. The Logstash Logback encoder is an extension that provides Logback encoders, layouts, and appenders to log in JSON and other formats supported by Jackson.
The following is an example of manual enrichment using the Logstash encoder. Note the additional mdc
property in the configuration file, where you can include MDC variables.
<appender name="COMPOSITEJSONENCODER" class="ch.qos.logback.core.FileAppender">
  <file>compositejsonencoder.log</file>
  <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    <providers>
      <timestamp>
        <fieldName>timestamp</fieldName>
        <timeZone>UTC</timeZone>
      </timestamp>
      <loggerName>
        <fieldName>logger</fieldName>
      </loggerName>
      <logLevel>
        <fieldName>level</fieldName>
      </logLevel>
      <threadName>
        <fieldName>thread</fieldName>
      </threadName>
      <mdc>
        <includeMdcKeyName>dt.span_id</includeMdcKeyName>
        <includeMdcKeyName>dt.trace_id</includeMdcKeyName>
        <includeMdcKeyName>dt.entity.host</includeMdcKeyName>
      </mdc>
      <stackTrace>
        <fieldName>stackTrace</fieldName>
        <!-- maxLength - limit the length of the stack trace -->
        <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
          <maxDepthPerThrowable>200</maxDepthPerThrowable>
          <maxLength>14000</maxLength>
          <rootCauseFirst>true</rootCauseFirst>
        </throwableConverter>
      </stackTrace>
      <message />
      <throwableClassName>
        <fieldName>exceptionClass</fieldName>
      </throwableClassName>
    </providers>
  </encoder>
</appender>
In .NET Serilog, you can customize the output templates for text-based sinks, like console or file sinks, to include Dynatrace enrichment information.
{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} [{Level:u3}] [!dt dt.trace_id={trace_id}, dt.span_id={span_id}, dt.trace_sampled={trace_sampled}] {Message:lj}{NewLine}{Exception}
You can enrich your logs using NGINX ingress with Kubernetes in two steps:
Follow the ingress-nginx on Kubernetes instrumentation instructions.
Add the configuration below to the configmap.yaml file for NGINX ingress.
Adding the main-snippet line enables OneAgent ingestion and is optional if you have already followed the manual instrumentation instructions.
main-snippet: load_module /opt/dynatrace/oneagent/agent/bin/current/linux-musl-x86-64/liboneagentnginx.so;
log-format-upstream: '$remote_addr - $remote_user [$time_local] "$request" [!dt dt.trace_id=$dt_trace_id,dt.span_id=$dt_span_id,dt.trace_sampled=$dt_trace_sampled] $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length'
apiVersion: v1
kind: Namespace
metadata:
  name: prod-ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.6
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.4
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: prod-ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.6
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.4
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: prod-ingress-nginx
data:
  allow-snippet-annotations: 'true'
  main-snippet: load_module /opt/dynatrace/oneagent/agent/bin/current/linux-musl-x86-64/liboneagentnginx.so;
  log-format-upstream: '$remote_addr - $remote_user [$time_local] "$request" [!dt dt.trace_id=$dt_trace_id,dt.span_id=$dt_span_id,dt.trace_sampled=$dt_trace_sampled] $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length'
...
To have Dynatrace match logs to corresponding traces, you can include the span and trace IDs in your log messages, using the [!dt] notation.
The following examples show how to obtain the span and trace IDs with OpenTelemetry or the OneAgent SDK:
For details on configuration, see AWS Lambda logs in context of traces.
For instructions on how to source these attributes via the OneAgent SDK, see the OneAgent SDK documentation.
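If you instrument with OpenTelemetry, you can also read the trace and span IDs directly from the active span context and append them in the [!dt] notation described above. The following is a minimal sketch, assuming the opentelemetry Python API; the helper name dt_log_prefix is purely illustrative:
from opentelemetry import trace

def dt_log_prefix():
    # Read the span context of the currently active span.
    ctx = trace.get_current_span().get_span_context()
    if not ctx.is_valid:
        # No active span - return an empty prefix instead of invalid IDs.
        return ""
    # Format the IDs as lowercase hex strings, as shown in the log examples above.
    return "[!dt dt.trace_id={},dt.span_id={}]".format(
        format(ctx.trace_id, "032x"), format(ctx.span_id, "016x")
    )

# Example: prepend the enrichment string to a plain-text log line.
print(f"{dt_log_prefix()} Update completed successfully.")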
You can get the dt.entity.process_group_instance field using the OpenTelemetry Python snippet below, which collects Dynatrace metadata into the merged dictionary. The process group instance is retrieved as one of the attributes delivered in merged, as shown in the example below:
With OneAgent, you can simply point to a local endpoint without an authentication token to enable trace ingestion.
import json

from opentelemetry import trace as OpenTelemetry
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
    OTLPSpanExporter,
)
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider, sampling
from opentelemetry.sdk.trace.export import (
    BatchSpanProcessor,
)

merged = dict()
for name in ["dt_metadata_e617c525669e072eebe3d0f08212e8f2.json", "/var/lib/dynatrace/enrichment/dt_metadata.json"]:
    try:
        data = ''
        with open(name) as f:
            # The virtual magic file holds the name of the real metadata file, so read that file instead.
            data = json.load(f if name.startswith("/var") else open(f.read()))
        merged.update(data)
    except:
        pass

merged.update({
    "service.name": "python-quickstart", #TODO Replace with the name of your application
    "service.version": "1.0.1", #TODO Replace with the version of your application
})
resource = Resource.create(merged)

tracer_provider = TracerProvider(sampler=sampling.ALWAYS_ON, resource=resource)
OpenTelemetry.set_tracer_provider(tracer_provider)
tracer_provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:14499/otlp/v1/traces"))
)
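After the metadata files are read, dt.entity.process_group_instance is available as an ordinary key in merged (typically only when the process is monitored by OneAgent). A minimal sketch of retrieving it for your own log lines, purely for illustration:
# dt.entity.process_group_instance is one of the attributes delivered in merged;
# it is typically only present when the process runs under OneAgent monitoring.
pgi = merged.get("dt.entity.process_group_instance")
if pgi:
    print(f"[!dt dt.entity.process_group_instance={pgi}] Tracer provider configured")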
When using OneAgent, make sure to enable the public Extension Execution Controller in your Dynatrace Settings, otherwise no data will be sent.
Go to Settings > Preferences > Extension Execution Controller. The toggles Enable Extension Execution Controller and Enable local PIPE/HTTP metric and Log Ingest API should be active.
For details on configuration, see Instrument your Python application with OpenTelemetry.
If you use a custom winston formatter/transport (applicable to Node.js only), you need to manually add the injected dt.trace_id and dt.span_id attributes, as in the example below:
const winston = require("winston");
const Transport = require("winston-transport");

class CustomTransport extends Transport {
  log(info, next) {
    let myLogLine = `MyLogLine: ${info.timestamp} level=${info.level}: ${info.message}`;
    // this is important as the line above only picks timestamp, level, and message but nothing else from metadata
    if (info["dt.trace_id"]) {
      myLogLine = `[!dt dt.trace_id=${info["dt.trace_id"]},dt.span_id=${info["dt.span_id"]},dt.trace_sampled=${info["dt.trace_sampled"]}] ${myLogLine}`;
    }
    console.log(myLogLine);
    next();
  }
}

const logger = winston.createLogger({
  level: "info",
  format: winston.format.timestamp(),
  transports: [
    new CustomTransport(),
    // this transport includes all metadata (including the Dynatrace-added trace ID, ...)
    new winston.transports.Console({ format: winston.format.simple() })
  ]
});