This page lists default limits for the latest version of Dynatrace Log Management and Analytics. These limits apply to both log file ingestion and log ingestion via the Log Ingestion API.
The table below summarizes the most important default limits related to log ingest. All presented limits refer to UTF-8 encoded data.
Type | Limit | Description |
---|---|---|
Content | 10 MB ¹ | The maximum size of a log entry body |
Attribute key | 100 bytes | The maximum length of an attribute key |
Attribute value length | 32 kB | The maximum length of an attribute value |
Number of log attributes | 500 | The maximum number of attributes a log record can contain |
Log events per minute | No limit | The maximum number of log events per minute |
Log age | 24 hours | The maximum age of log entries at ingest |
Logs with future dates | No restriction ² | How far into the future log entry timestamps can reach |
Values per attribute | 32 values | The maximum number of individual values an attribute can contain |
Request size ³ | 10 MB | The maximum size of the request payload |
Number of log records | 50,000 records | The maximum number of log records per request |
Nested objects | 5 levels | The maximum nesting depth ingested for nested objects |
Extracted log attribute | 4,096 bytes | When logs are added to the event template, log attributes are truncated to 4,096 bytes |
¹ The content limit is lower (512 kB) for logs routed to the Classic pipeline.
² There is no ingestion restriction on log entries with future timestamps, but entries with timestamps more than 10 minutes in the future have their timestamps reset to the moment of ingestion.
³ The Log Ingestion API endpoints accept requests up to 10 MB. However, after initial processing, the batch may grow in size. If it exceeds 16 MB after processing, the request is rejected with a 413 error: `Message size limit exceeded after preprocessing on ingest endpoint`. To avoid this issue, ingest smaller batches of log records to stay within the size limits.
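As a rough illustration of the batching advice above, the helper below (a hypothetical sketch, not part of any Dynatrace SDK) splits a list of JSON log records into chunks that respect both the 50,000-record and the 10 MB request limits:

```python
import json

MAX_RECORDS = 50_000          # maximum log records per request
MAX_BYTES = 10 * 1024 * 1024  # 10 MB request size limit (UTF-8 encoded)

def batch_records(records):
    """Yield lists of records, each fitting within the per-request limits.

    Sizes are estimated against a compact JSON array serialization
    ("," separators, no extra whitespace), which is what should be sent.
    A single record larger than MAX_BYTES is still emitted on its own;
    it cannot be split and would be rejected by the endpoint.
    """
    batch, batch_size = [], 2  # 2 bytes for the enclosing "[]"
    for record in records:
        # Compact encoding plus 1 byte for the "," separator.
        encoded = len(json.dumps(record, separators=(",", ":")).encode("utf-8")) + 1
        if batch and (len(batch) >= MAX_RECORDS or batch_size + encoded > MAX_BYTES):
            yield batch
            batch, batch_size = [], 2
        batch.append(record)
        batch_size += encoded
    if batch:
        yield batch

records = [{"content": "x" * 1000, "severity": "INFO"} for _ in range(30)]
batches = list(batch_records(records))
```

The size estimate is deliberately conservative (it over-counts by one byte per batch), so every emitted batch serializes to at most 10 MB with compact separators.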
A log request may increase in size during processing for several reasons.
Check your access to OpenPipeline in Log processing with OpenPipeline.
Logs ingested via OneAgent are typically ready for analysis between a few seconds and 90 seconds (30 seconds on average).
Logs ingested by API are available for analysis in Dynatrace after 10 seconds on average.
Occasionally, higher latency can be caused by data loss prevention mechanisms such as retransmissions, buffering, or other factors that introduce delays.
The following rules apply to all log event sources, such as OneAgent and the generic log ingestion API.
Log record timestamp | Description |
---|---|
Current time minus 24 hours (log records) | The event is dropped if the log event contains a timestamp earlier than the current time minus 24 hours. If the record is ingested via the generic Log Ingestion API, it can return: `400` if all log events in the payload have timestamps earlier than the current time minus 24 hours (message in response: `All logs are out of correct time range.`); `200` if only some of the events in the payload have timestamps earlier than the current time minus 24 hours (example message in response: `2 events were not ingested because of timestamp out of correct time range`); `204` (No Content) in case of success. |
Current time minus 2 hours (log metrics and events) | The data point is dropped if the log metric data point timestamp is earlier than the current time minus 2 hours. |
Current time plus 10 minutes | The timestamp is reset to the current time. |
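The timestamp rules above can be sketched as a simple pre-ingest check (a hypothetical helper, not part of any Dynatrace SDK): records older than 24 hours are dropped, and timestamps more than 10 minutes in the future are reset to the ingest time.

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=24)       # log records older than this are dropped
MAX_FUTURE = timedelta(minutes=10)  # future timestamps beyond this are reset

def normalize_timestamp(ts, now=None):
    """Return the accepted timestamp, or None if the record would be dropped."""
    now = now or datetime.now(timezone.utc)
    if ts < now - MAX_AGE:
        return None  # dropped: out of the correct time range
    if ts > now + MAX_FUTURE:
        return now   # reset to the moment of ingestion
    return ts

now = datetime(2024, 1, 2, 12, 0, tzinfo=timezone.utc)
dropped = normalize_timestamp(now - timedelta(hours=25), now)  # too old
reset = normalize_timestamp(now + timedelta(hours=1), now)     # too far ahead
kept = normalize_timestamp(now - timedelta(minutes=5), now)    # accepted as-is
```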
The number of metrics is limited to:

- 100,000 (1,000 per pipeline × 100 pipelines) for Log Management and Analytics powered by Grail with OpenPipeline
- 1,000 for Log Management and Analytics powered by Grail without OpenPipeline enabled
- 50 in other cases

In addition to the generic Dynatrace API limitations (Dynatrace API - Access limit), the following log ingestion API-specific limits apply:
LogMessageJson
JSON object. The object might contain the following types of keys (the possible key values are listed below):

Type | Description |
---|---|
Timestamp | The following formats are supported: UTC milliseconds, RFC3339, and RFC3164. If not set, the current timestamp is used. |
Severity | If not set or not recognized, NONE is used. |
Content | If the content key is not set, the whole JSON is parsed as the content. |
Attributes | Only values of the string type are supported; numbers and boolean values are converted to strings. Semantic attributes are also displayed in attribute filters and suggested when editing queries or creating metrics or alerts. |
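For illustration, a minimal LogMessageJson record might look like the following (field values are made up; note how non-string attribute values are converted to strings before sending, since only string attribute values are supported):

```python
import json

# Hypothetical log record destined for the generic log ingestion API.
record = {
    "timestamp": "2024-01-02T12:00:00.000Z",  # RFC3339; if omitted, current time is used
    "severity": "warn",                        # unrecognized values fall back to NONE
    "content": "Disk usage above threshold",
    # Attribute values must be strings; convert numbers/booleans explicitly.
    "disk.usage.percent": str(92),
    "disk.readonly": str(False),
}

payload = json.dumps([record])  # the endpoint accepts a JSON array of records
```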
LogMessageOTLP
OpenTelemetry Protocol object. See Ingest OpenTelemetry logs.
Log files in OneAgent:
In standard environments, the OneAgent log module supports up to 100 files in one log directory, 1 GB of initial log content (when the OneAgent log module runs for the first time), and 10 MB of new log content per minute. If you have more data, especially an order of magnitude more, the OneAgent log module will most likely still support it, but we advise contacting support to review your setup beforehand.
In special cases, such as very poor hardware performance, the OneAgent log module's limits might be stricter.
Scenarios that are not supported in the rotated log monitoring process include:
Scenario | Description |
---|---|
Rotated log generation with a directory change | The potential consequence is the creation of duplicate and/or incomplete logs. |
Rotated log generation with immediate compression | If a rotation criterion is met (for example, the required file size is reached), the file is moved to another location and immediately compressed, for example: `/var/log/application.log -> /var/log/application.log.1.gz -> /var/log/application.log.2.gz -> /var/log/application.log.3.gz`. This process can lead to incomplete log ingest. There should be at least one uncompressed rotated file. |
Rotated log generation with queue logic | The oldest log records are removed whenever new content is added to a file, resulting in a relatively constant log file size. This scenario can easily be replaced with a supported rotation scheme, for example by starting a new file when the current file reaches a predefined size. |
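A supported size-based rotation scheme can be set up, for example, with Python's standard `logging` module: the handler below starts a new file once the current one reaches a predefined size and keeps the rotated files uncompressed (file names and sizes here are illustrative only).

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "application.log")

# Rotate after ~1 kB (illustrative; real setups use larger sizes), keeping up
# to 3 uncompressed rotated files: application.log.1, .2, .3
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=1024, backupCount=3
)
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

for i in range(100):
    logger.info("log line %d with some padding to force rotation", i)

# The rotated files stay uncompressed, so the log module can read them.
rotated = [name for name in os.listdir(log_dir) if name != "application.log"]
```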
Be aware of the following limitation to sensitive data masking: an incorrect sensitive data masking rule can prevent a file from being monitored, indicated by the `File not monitored - incorrect sensitive data masking rule` message.

If you are using the SaaS endpoint, you don't have to worry about the ActiveGate throughput; it is the same as for Grail. If you use an Environment ActiveGate, the throughput is 3.3 GB/min with RTT <= 200 ms.