The Log ingestion API ingests logs in JSON, TXT, and OTLP formats. This page describes the JSON and text formats. For OTLP, refer to the OTLP format documentation.
The Log ingestion API is responsible for collecting the data and forwarding it to Dynatrace in batches.
The Log ingestion API endpoint is available in your Dynatrace environment:
https://{your-environment-id}.live.dynatrace.com/api/v2/logs/ingest
When ingesting through an Environment ActiveGate, use:
https://{your-activegate-domain}:9999/e/{your-environment-id}/api/v2/logs/ingest
The Log ingestion API is automatically enabled after you install an ActiveGate. For details regarding supported payloads, authentication, parameters, and body objects, refer to Log Monitoring API v2 - POST ingest logs.
For details regarding limitations, refer to Log Management and Analytics default limits.
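As a minimal illustration of calling the endpoint, the sketch below posts a small JSON batch to the SaaS ingest URL shown above, using only the Python standard library. The DT_ENV_ID and DT_API_TOKEN environment variables, the token permission, and the payload are assumptions for this example; adapt them to your environment.

# Minimal sketch: send a two-record JSON batch to the Log ingestion API.
# Assumes DT_ENV_ID and DT_API_TOKEN environment variables (token with log ingest permission).
import json
import os
import urllib.request

env_id = os.environ["DT_ENV_ID"]
token = os.environ["DT_API_TOKEN"]
url = f"https://{env_id}.live.dynatrace.com/api/v2/logs/ingest"

batch = [
    {"content": "Transaction successfully processed.", "loglevel": "INFO"},
    {"content": "Transaction rejected.", "loglevel": "WARN", "transaction.id": "TXN12345"},
]

request = urllib.request.Request(
    url,
    data=json.dumps(batch).encode("utf-8"),
    headers={
        "Authorization": f"Api-Token {token}",
        "Content-Type": "application/json; charset=utf-8",
    },
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print(response.status)    # a 2xx status indicates the batch was accepted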
The Log ingestion API collects and attempts to automatically transform log data. Each log record from the ingested batch is mapped to a single Dynatrace log record, which contains three special attributes (timestamp, loglevel, and content) plus key-value attributes. These properties are set based on keys present in the input JSON object as follows.
timestamp is set based on the value of the first key found from the following list. Keys are evaluated in the order presented here and matched case-insensitively: timestamp, @timestamp, _timestamp, eventtime, date, published_date, syslog.timestamp, time, epochSecond, startTime, datetime, ts, timeMillis, @t.
Supported formats are: UTC milliseconds, RFC3339, and RFC3164.
For unsupported timestamp formats, the current timestamp is used, and the value of the unsupported format is stored in the unparsed_timestamp attribute.
Log records older than the log age limit are discarded. Timestamps more than 10 minutes ahead of the current time are replaced with the current time.
If there is no supported timestamp key in the log record, the default value is the current timestamp.
If there is no timezone in the timestamp, the default timezone is UTC.
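The fallback rules above can be pictured with a short sketch. It is illustrative only and not the ingest implementation: it handles RFC3339-style and UTC-millisecond inputs, treats anything else as unparsable, and applies the ten-minute future clamp; RFC3164 parsing and the age-limit check are omitted.

# Illustrative sketch of the timestamp fallback rules above (RFC3164 parsing and the age-limit check are omitted).
from datetime import datetime, timedelta, timezone

FUTURE_TOLERANCE = timedelta(minutes=10)

def resolve_timestamp(raw_value, now=None):
    now = now or datetime.now(timezone.utc)
    if raw_value is None:
        return now, {}                                       # no supported timestamp key: current time
    try:
        parsed = datetime.fromisoformat(str(raw_value))      # RFC3339-style input
    except ValueError:
        try:
            parsed = datetime.fromtimestamp(int(raw_value) / 1000, tz=timezone.utc)   # UTC milliseconds
        except (ValueError, OSError):
            return now, {"unparsed_timestamp": str(raw_value)}   # unsupported format
    if parsed.tzinfo is None:
        parsed = parsed.replace(tzinfo=timezone.utc)         # no timezone: assume UTC
    if parsed > now + FUTURE_TOLERANCE:
        return now, {}                                       # too far in the future: use current time
    return parsed, {}

print(resolve_timestamp("2024-05-01T12:00:00+02:00"))
print(resolve_timestamp("not-a-date"))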
loglevel is set based on the value of the first key found from the following list. Keys are evaluated in the order presented here and matched case-insensitively: loglevel, status, severity, level, syslog.severity.
The default value is NONE.
content is set based on the value of the first key found from the following list. Keys are evaluated in the order presented here and matched case-insensitively: content, message, payload, body, log, _raw (supported only in the raw data model).
The default value and handling depends on the data model used for processing the input.
Log attributes contain all other keys from the input JSON object except those used for timestamp, loglevel, and content.
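The case-insensitive, first-match selection described above can be sketched as follows. The helper name and the sample record are illustrative; the candidate lists mirror the ones given in this section.

# Illustrative sketch of the case-insensitive, first-match key selection described above.
TIMESTAMP_KEYS = ["timestamp", "@timestamp", "_timestamp", "eventtime", "date", "published_date",
                  "syslog.timestamp", "time", "epochSecond", "startTime", "datetime", "ts", "timeMillis", "@t"]
LOGLEVEL_KEYS = ["loglevel", "status", "severity", "level", "syslog.severity"]
CONTENT_KEYS = ["content", "message", "payload", "body", "log", "_raw"]

def pick_first(record, candidates):
    """Return (matched_key, value) for the first candidate present in the record, matching keys case-insensitively."""
    lowered = {key.lower(): key for key in record}
    for candidate in candidates:
        if candidate.lower() in lowered:
            original_key = lowered[candidate.lower()]
            return original_key, record[original_key]
    return None, None

record = {"Time": "2024-05-01T12:00:00Z", "Severity": "warn", "Message": "Disk almost full", "disk": "/dev/sda1"}
print(pick_first(record, TIMESTAMP_KEYS))   # ('Time', '2024-05-01T12:00:00Z')
print(pick_first(record, LOGLEVEL_KEYS))    # ('Severity', 'warn')
print(pick_first(record, CONTENT_KEYS))     # ('Message', 'Disk almost full')
# Every key not consumed above ("disk" here) becomes a regular log attribute.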
First-level attributes should preferably map to semantic attributes so that Dynatrace can place them in context. All attributes can be used in queries, but the Semantic Dictionary helps Davis AI interpret the logs. See Semantic Dictionary for more details.
Automatic attribute: the dt.auth.origin attribute is automatically added to every log record ingested via the API. It contains the public part of the API key used by the log source to authorize the connection to the generic log ingest API.
Attribute processing differs depending on tenant and environment type:
Logs on Grail with OpenPipeline custom processing (Dynatrace SaaS version 1.295+, Environment ActiveGate version 1.295+): Supports rich data types, enabling the use of diverse attributes in queries. Keys are case-sensitive.
Logs on Grail with OpenPipeline routed to Classic Pipeline: All attribute keys are lowercased and all attribute values are stringified. All attributes can be used in queries.
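As a rough sketch of the Classic Pipeline behavior described above, the following illustrates lowercasing keys and stringifying values. The exact string representation of non-string values (JSON encoding here) is an assumption; only the lowercase-keys and string-values behavior is taken from this section.

# Rough sketch of the Classic Pipeline attribute normalization described above.
# The JSON encoding of non-string values is an assumption for illustration.
import json

def to_classic(attributes):
    normalized = {}
    for key, value in attributes.items():
        normalized[key.lower()] = value if isinstance(value, str) else json.dumps(value)
    return normalized

print(to_classic({"Transaction.Amount": 250.75, "Audit": ["Created", "Approved"]}))
# {'transaction.amount': '250.75', 'audit': '["Created", "Approved"]'}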
There are two data models that identify how structured logs are processed by log ingestion endpoints: raw and flattened. The difference between the two is in how attributes with object values are transformed.
If the data model configuration option is not specified, the default behavior depends on when your environment was created.
Escaping in output examples is for visualization purposes only. \" is billed as one character.
The raw data model preserves the original log structure and context, maintaining data integrity. This results in easy interaction and querying, because log record representation in Dynatrace remains the same as in the source.
We recommend using this approach for highly nested JSON logs, as it maintains the semantic meaning and relationships between data points.
The raw data model transforms the content of structured logs as described in the sections below.
Attributes with object values are preserved as JSON strings. Subsequent Dynatrace ingest stages (OpenPipeline, the Logs app) support this format for easy log processing and analysis.
Array types are preserved as arrays but the contained types are unified to a single type.
Complex values (such as arrays or objects) are mapped to JSON string values.
If any value in the array is a string, or if any value must be converted to a string (e.g., an object or array), the target type of the entire array is string.
If all values in the source array are numeric, the target array type is numeric.
Null values are considered compatible with any type.
Input:
{"content": "Transaction successfully processed.","transaction": {"id": "TXN12345","amount": 250.75},"auditTrail": ["Created","Approved",3]}
Log ingestion API endpoint output:
{"content": "Transaction successfully processed.","transaction": "{\"id\": \"TXN12345\", \"amount\": 250.75}","auditTrail": ["Created", "Approved", "3"]}
The selected input content field is used regardless of its type and is converted to a string if necessary.
Input:
{"content": {"id": "TXN12345","amount": 250.75},"auditTrail": ["Created","Approved",3]}
Log ingestion API endpoint output:
{"content": "{\"id\": \"TXN12345\", \"amount\": 250.75}","auditTrail": ["Created", "Approved", "3"]}
An example of a supported content attribute with an array value is given below.
Input:
{"transaction": {"id": "TXN12345","amount": 250.75},"content": ["Created","Approved",3]}
Log ingestion API endpoint output:
{"content": "[\"Created\", \"Approved\", 3]","transaction": "{\"id\": \"TXN12345\", \"amount\": 250.75}"}
If no attribute from the supported content attributes is present in the input, the target content attribute is set to an empty string.
Input:
{"transaction": {"id": "TXN12345"},"auditTrail": ["Created","Approved",3]}
Log ingestion API endpoint output:
{"content": "","transaction": "{\"id\": \"TXN12345\"}","auditTrail": ["Created", "Approved", "3"]}
The first attribute from the supported content attributes list is selected for the output content field.
Input:
{"message": {"id": "TXN12345","amount": 250.75},"payload": "Transaction","_raw": "Operation"}
Log ingestion API endpoint output:
{"content": "{\"id\": \"TXN12345\", \"amount\": 250.75}","payload": "Transaction","_raw": "Operation"}
The _raw attribute is used as content only if no higher-priority supported content attribute is present.
Input:
{"_raw": {"id": "TXN12345","amount": 250.75},"auditTrail": ["Created","Approved",3]}
Log ingestion API endpoint output:
{"content": "{\"id\": \"TXN12345\", \"amount\": 250.75}","auditTrail": ["Created", "Approved", "3"]}
The flattened data model provides direct access to attribute values through simple key paths.
This approach is provided for compatibility reasons. It might also suit specific use cases, for example, when all nested JSON values need to be available at the root level.
In the flattened data model, nested objects in your log attributes are transformed into flat value pairs.
When a log attribute contains an object, each nested property becomes a separate attribute. This process works for attributes up to level five, while attributes beyond that level are skipped.
Array types are preserved as arrays but the contained types are unified to a single type.
Complex values (such as arrays or objects) are mapped to JSON string values.
If any value in the array is a string, or if any value must be converted to a string (e.g., an object or array), the target type of the entire array is string.
If all values in the source array are numeric, the target array type is numeric.
Null values are considered compatible with any type.
Input:
{"content": "Transaction successfully processed.","transaction": {"id": "TXN12345","amount": 250.75},"auditTrail": ["Created","Approved",3]}
Log ingestion API endpoint output:
{"content": "Transaction successfully processed.","transaction.id": "TXN12345","transaction.amount": 250.75,"auditTrail": ["Created", "Approved", "3"]}
When attributes are saved in a flattened form on the Dynatrace side, name collisions may occur if attributes on different levels share the same name. Dynatrace resolves this by prefixing duplicate attributes with overwritten[COUNTER]. (note the trailing dot). The counter value indicates how many times the attribute name has already been encountered as a duplicate. For example:
Input:
{"host.name": "abc","host": {"name": "xyz"}}
Log ingestion API endpoint output:
{"host.name": "abc","overwritten1.host.name": "xyz"}
Input:
{"service.instance.id": "abc","service": {"instance.id": "xyz", "instance": {"id": "123"}}}
Log ingestion API endpoint output:
{"service.instance.id": "abc","overwritten1.service.instance.id": "xyz","overwritten2.service.instance.id": "123"}
The rules below define how the content field is selected and constructed.
If no supported content attribute is found, the whole JSON representation of the log event is set as the content field of the output log record. The original JSON is preserved as-is.
The _raw field is not among the supported content fields for this data model.
Input:
{"transaction": {"id": "TXN12345","amount": 250.75}}
Log ingestion API endpoint output:
{"content": "{\"transaction\":{\"id\":\"TXN12345\",\"amount\":250.75}}","transaction.id": "TXN12345","transaction.amount": 250.75}
Any attribute with an object value is treated as a standard attribute, even if it is one of the supported content attributes.
Input:
{"payload": "This will be used for content.","message": {"id": "TXN12345","amount": 250.75}}
Log ingestion API endpoint output:
{"content": "This will be used for content.","message.id": "TXN12345","message.amount": 250.75}
The Log ingestion API additionally accepts log attributes through query parameters and the X-Dynatrace-Attr header. These attributes are merged with those provided in the log record body according to the rules described below.
Request URL:
POST /api/v2/logs/ingest?env=prod&env=blue&team=payments
Body:
{"content": "Transaction successfully processed."}
Resulting attributes:
{"content": "Transaction successfully processed.","env": ["prod", "blue"],"team": "payments"}
The API supports a special header for passing additional attributes:
X-Dynatrace-Attr: region=eu-central-1&team=core
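The sketch below sends additional attributes through both query parameters and the X-Dynatrace-Attr header, mirroring the examples above. The environment variables, token permission, and attribute values are assumptions for illustration.

# Sketch: pass extra attributes via query parameters and the X-Dynatrace-Attr header.
# Assumes DT_ENV_ID and DT_API_TOKEN environment variables (token with log ingest permission).
import json
import os
import urllib.parse
import urllib.request

env_id = os.environ["DT_ENV_ID"]
token = os.environ["DT_API_TOKEN"]

query = urllib.parse.urlencode([("env", "prod"), ("env", "blue")])      # multi-value attribute
url = f"https://{env_id}.live.dynatrace.com/api/v2/logs/ingest?{query}"

body = json.dumps([{"content": "Transaction successfully processed."}]).encode("utf-8")
request = urllib.request.Request(
    url,
    data=body,
    headers={
        "Authorization": f"Api-Token {token}",
        "Content-Type": "application/json; charset=utf-8",
        "X-Dynatrace-Attr": "region=eu-central-1&team=core",
    },
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status)    # a 2xx status indicates the batch was accepted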
Rules:
When attributes appear in multiple places, the Log ingestion API applies attribute precedence while still preserving body values for auditability: attributes passed through query parameters or the X-Dynatrace-Attr header take precedence over attributes provided in the log record body.
When attributes from query parameters or the header override body attributes, the original body value is preserved under a key of the form overwrittenN.<attribute_key>, where N is an incrementing integer (1, 2, …) depending on how many body-originating values had to be preserved. This ensures uniqueness even when multiple conflicts occur. Only body-originating values are preserved under overwrittenN.* keys; attributes overridden by higher-precedence sources do not generate overwritten copies.
Request:
Query: POST /api/v2/logs/ingest?team=frontend
Body:
{"content": "Transaction successfully processed.","team": "backend"}
Resulting attributes:
{"content": "Transaction successfully processed.","team": "frontend","overwritten1.team": "backend"}
Attributes provided through query parameters or headers are included in billing calculations.
For multi-value attributes, the attribute key contributes to billing only once, regardless of how many values are present.