Log Monitoring Classic
For the newest Dynatrace version, see Log ingestion API.
The Log ingestion API ingests logs in JSON, TXT, and OTLP formats. This page describes the JSON and text formats; for OTLP, refer to the OTLP documentation.
The ingest endpoint collects and attempts to automatically transform any log data containing the JSON elements described below.
For details regarding limitations, refer to Log Monitoring default limits (Logs Classic).
The Log ingestion API collects and attempts to automatically transform log data. Each log record from the ingested batch is mapped to a single Dynatrace log record, which contains three special attributes (timestamp, loglevel, and content) plus key-value attributes. These properties are set based on keys present in the input JSON object as follows.
timestamp is set based on the value of the first found key from the following list, evaluated in the order presented in the list: timestamp, @timestamp, _timestamp, eventtime, date, published_date, syslog.timestamp, time, epochSecond, startTime, datetime, ts, timeMillis, @t.
Supported formats are UTC milliseconds, RFC3339, and RFC3164.
For unsupported timestamp formats, the current timestamp is used, and the value of the unsupported format is stored in the unparsed_timestamp attribute.
Log records older than the log age limit are discarded. Timestamps more than 10 minutes ahead of the current time are replaced with the current time.
If there is no supported timestamp key in the log record, the default value is the current timestamp.
If there is no timezone in the timestamp, the default timezone is UTC.
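The timestamp rules above can be sketched as follows. The key list and fallback behavior come from this page; the function names and parsing helpers are illustrative assumptions, not the actual Dynatrace implementation (RFC3164 parsing is omitted for brevity, and numeric values are treated as UTC milliseconds):

```python
from datetime import datetime, timezone, timedelta

# Keys are checked in this exact order, as documented above.
TIMESTAMP_KEYS = [
    "timestamp", "@timestamp", "_timestamp", "eventtime", "date",
    "published_date", "syslog.timestamp", "time", "epochSecond",
    "startTime", "datetime", "ts", "timeMillis", "@t",
]

def parse_supported(value):
    """Try UTC milliseconds, then RFC3339; return None if unsupported."""
    if isinstance(value, (int, float)):
        return datetime.fromtimestamp(value / 1000.0, tz=timezone.utc)
    try:
        dt = datetime.fromisoformat(str(value).replace("Z", "+00:00"))
        # No timezone in the input -> default to UTC.
        return dt if dt.tzinfo else dt.replace(tzinfo=timezone.utc)
    except ValueError:
        return None

def resolve_timestamp(record):
    now = datetime.now(timezone.utc)
    for key in TIMESTAMP_KEYS:
        if key in record:
            ts = parse_supported(record[key])
            if ts is None:
                # Unsupported format: use current time, keep the raw value.
                record["unparsed_timestamp"] = str(record[key])
                return now
            # Timestamps more than 10 minutes ahead are replaced.
            return now if ts > now + timedelta(minutes=10) else ts
    return now  # no supported key: default to the current timestamp
```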
loglevel is set based on the value of the first found key from the following list, evaluated in the order presented in the list: loglevel, status, severity, level, syslog.severity.
The default value is NONE.
content is set based on the value of the first found key from the following list, evaluated in the order presented in the list: content, message, payload, body, log.
The default value is an empty string.
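The first-match selection for loglevel and content described above can be sketched like this; the key lists and defaults come from the text, while the helper function is an assumption for illustration:

```python
# Keys are evaluated in the documented order; the first match wins.
LOGLEVEL_KEYS = ["loglevel", "status", "severity", "level", "syslog.severity"]
CONTENT_KEYS = ["content", "message", "payload", "body", "log"]

def first_match(record, keys, default):
    """Return the value of the first present key, or the default."""
    for key in keys:
        if key in record:
            return record[key]
    return default

record = {"severity": "ERROR", "body": "Disk quota exceeded"}
loglevel = first_match(record, LOGLEVEL_KEYS, "NONE")  # -> "ERROR"
content = first_match(record, CONTENT_KEYS, "")        # -> "Disk quota exceeded"
```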
Log attributes contain all other keys from the input JSON object except those used for timestamp, loglevel, and content.
First-level attributes should preferably use semantic attribute names so that Dynatrace can map them to context. See Semantic attributes (Logs Classic) for more details.
All attribute keys are lowercased and all attribute values are stringified. Custom attributes and semantic attributes can generally be used in queries.
Automatic attribute. The dt.auth.origin attribute is automatically added to every log record ingested via the API. It is the public part of the API key that authorizes the log source to connect to the generic log ingest API.
Nested objects in your log attributes are transformed into flat value pairs.
When a log attribute contains an object, each nested property becomes a separate attribute. This process works for attributes up to level five, while attributes beyond that level are skipped.
Array types are preserved as arrays but the contained types are unified to a single type.
Complex values (such as arrays or objects) are mapped to JSON string values.
If any value in the array is a string, or if any value must be converted to a string (e.g., an object or array), the target type of the entire array is string.
If all values in the source array are numeric, the target array type is numeric.
Null values are considered compatible with any type.
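The flattening and array-unification rules above can be sketched as below. The function names and structure are assumptions, not the actual Dynatrace implementation, and the key lowercasing described earlier is omitted here for readability:

```python
import json

MAX_DEPTH = 5  # nested attributes beyond level five are skipped

def unify_array(values):
    """Unify an array to one type: numeric if every non-null value is
    numeric (nulls are compatible with any type), otherwise string."""
    non_null = [v for v in values if v is not None]
    if all(isinstance(v, (int, float)) and not isinstance(v, bool)
           for v in non_null):
        return values
    # Any string or complex value forces the whole array to strings.
    return [v if v is None
            else (v if isinstance(v, str) else json.dumps(v))
            for v in values]

def flatten(obj, prefix="", depth=1, out=None):
    """Turn nested objects into flat dotted key-value pairs."""
    out = {} if out is None else out
    for key, value in obj.items():
        name = prefix + key
        if isinstance(value, dict):
            if depth < MAX_DEPTH:
                flatten(value, name + ".", depth + 1, out)
        elif isinstance(value, list):
            out[name] = unify_array(value)
        else:
            out[name] = value
    return out
```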
Input
Log ingestion API endpoint output
{"content": "Transaction successfully processed.","transaction": {"id": "TXN12345","amount": 250.75},"auditTrail": ["Created","Approved",3]}
{"content": "Transaction successfully processed.","transaction.id": "TXN12345","transaction.amount": 250.75,"auditTrail": ["Created", "Approved", "3"]}
When attributes are saved in a flattened fashion on the Dynatrace side, name collisions may occur if attributes on different levels share the same name. Dynatrace resolves this by prefixing duplicate attributes with overwritten[COUNTER].. The counter value indicates how many times the attribute name has already been encountered as a duplicate. For example:
Input
Log ingestion API endpoint output
{"host.name": "abc","host": {"name": "xyz"}}
{"host.name": "abc","overwritten1.host.name": "xyz"}
Input
Log ingestion API endpoint output
{"service.instance.id": "abc","service": {"instance.id": "xyz", "instance": {"id": "123"}}}
{"service.instance.id": "abc","overwritten1.service.instance.id": "xyz","overwritten2.service.instance.id": "123"}
The rules below define how the content field is selected and constructed.
If no supported content attribute is found, the whole JSON representation of the log event is used as the content field of the output log record. The original JSON is preserved as-is.
Escaping in the output examples is for visualization purposes only; \" is billed as one character.
Input
Log ingestion API endpoint output
{"transaction": {"id": "TXN12345","amount": 250.75}}
{"content": "{\"transaction\":{\"id\":\"TXN12345\",\"amount\":250.75}}","transaction": {"id": "TXN12345","amount": 250.75}}
Any attribute whose value is an object, even one of the content keys, is treated as a standard attribute.
Input
Log ingestion API endpoint output
{"payload": "This will be used for content.","message": {"id": "TXN12345","amount": 250.75}}
{"content": "This will be used for content.","message.id": "TXN12345","message.amount": 250.75}