Log Monitoring default limits (Logs Classic)
This page lists the default limits for the latest version of Dynatrace Log Monitoring.
The following limits apply to both log file ingestion and generic log ingestion via the API.
Unsupported autodiscovery scenarios
Scenarios that are not supported in the rotated log autodiscovery process include:
- Rotated log generation with a directory change. This process can lead to the creation of numerous non-aggregated and/or incomplete logs, as well as to resource overuse.
- Rotated log generation with immediate compression, where the application writes to a file with the same name. If a rotation criterion is met (for example, the required file size is reached), the file is moved to another location and immediately compressed. Example: /var/log/application.log -> /var/log/application.log.1.gz -> /var/log/application.log.2.gz -> /var/log/application.log.3.gz. This process can likewise lead to incomplete log creation.
Limits for your log autodiscovery when using OneAgent
Log files monitored by OneAgent must meet the following requirements (a compliant writer is sketched below):
- must not be deleted earlier than one minute after creation.
- must be append-only (old content is not updated).
- must contain text content.
- must be kept open continuously (not just for short periods of adding log entries).
- must be opened in write mode.
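For illustration, here is a minimal sketch of a log writer that satisfies these requirements. The path and message format are hypothetical, not anything OneAgent prescribes:

```python
import time

# Hypothetical application log path; any file OneAgent can discover works the same way.
LOG_PATH = "/var/log/myapp/app.log"

def run_logger():
    # Open once in append mode and keep the handle open for the process
    # lifetime, so the file is constantly open in write mode.
    with open(LOG_PATH, "a", encoding="utf-8") as log_file:
        while True:
            # Append-only text content; earlier lines are never rewritten.
            log_file.write(time.strftime("%Y-%m-%dT%H:%M:%S%z") + " INFO heartbeat\n")
            log_file.flush()
            time.sleep(5)

if __name__ == "__main__":
    run_logger()
```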
Log rotation limits
Scenarios that are not supported in the rotated log monitoring process include:
| Type | Description |
|------|-------------|
| Rotated log generation with a directory change | The potential consequence is the creation of duplicates and/or incomplete logs. |
| Rotated log generation with immediate compression | If a rotation criterion is met (for example, the required file size is reached), the file is moved to another location and immediately compressed. Example: /var/log/application.log -> /var/log/application.log.1.gz -> /var/log/application.log.2.gz -> /var/log/application.log.3.gz. This process might again lead to incomplete log ingest. There should be at least one uncompressed rotated file. |
| Rotated log generation with queue logic | The oldest log records are removed whenever new content is added to a file, resulting in a relatively constant log file size. This scenario can easily be replaced with a supported rotation scheme, for example, by starting a new file when the current file reaches a predefined size. |
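The queue-logic scenario in the last row can usually be replaced with a supported size-based scheme. As one illustration (not a Dynatrace requirement), Python's standard RotatingFileHandler starts a new file once a size threshold is reached and keeps uncompressed rotated copies in the same directory:

```python
import logging
from logging.handlers import RotatingFileHandler

# Hypothetical path; rotation produces app.log -> app.log.1 -> app.log.2 ...
# Rotated files stay in the same directory and remain uncompressed.
handler = RotatingFileHandler(
    "/var/log/myapp/app.log",
    maxBytes=10 * 1024 * 1024,  # start a new file at 10 MB
    backupCount=5,              # keep five uncompressed rotated files
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("service started")
```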
Log record minimum timestamp
The earliest accepted timestamp for a log record is:

| Minimum timestamp | Description |
|---|---|
| The current time minus 24 hours for log records. | This applies to all log record sources (OneAgent and generic log ingestion). If a log record contains a timestamp earlier than the current time minus 24 hours, the record is dropped. |
| The current time minus 1 hour for log metrics and events. | Data points for metrics from logs and for events are accepted for the current time minus 1 hour. Data points outside this timeframe are dropped. |
| Timestamp earlier than the current time minus 24 hours. | If a log record contains such a timestamp, the record is dropped and the generic log ingestion API returns a 400 response code. |
Log record maximum timestamp
A log record with a future timestamp is not dropped. However, if the timestamp is more than 10 minutes in the future, it is overridden with the current time on the server.
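The minimum and maximum timestamp rules for log records can be condensed into a short sketch. The helper below is hypothetical and only illustrates the drop/override behavior described above:

```python
from datetime import datetime, timedelta, timezone

def effective_timestamp(record_ts: datetime, now: datetime | None = None):
    """Return the timestamp the server would store, or None if dropped."""
    now = now or datetime.now(timezone.utc)
    if record_ts < now - timedelta(hours=24):
        return None   # older than current time minus 24 hours: record is dropped
    if record_ts > now + timedelta(minutes=10):
        return now    # more than 10 minutes in the future: overridden with server time
    return record_ts  # accepted as-is

# A record stamped 25 hours ago is dropped; one stamped 30 minutes
# ahead is stored with the current server time instead.
```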
Ingest payload size
- For the Generic log ingest API: the maximum payload size of a single request is 5 MB. Additional limitations on fields and attributes are listed in Log Monitoring API - POST ingest logs.
- For OneAgent and the Generic log ingest API: the length of a log record is limited to 8,192 characters. Any content exceeding the limit is trimmed (the rest of the log record is skipped), and ingestion continues from the beginning of the next log record.
Number of log records in a payload
The maximum count of log records in a single upload is 50,000.
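A client that produces large volumes of logs has to respect the 8,192-character record limit as well as the 5 MB and 50,000-record upload limits. The helper below is a hypothetical client-side batching sketch; it assumes each record is a JSON object with a content field:

```python
import json

MAX_RECORD_CHARS = 8_192
MAX_RECORDS_PER_UPLOAD = 50_000
MAX_PAYLOAD_BYTES = 5 * 1024 * 1024  # 5 MB per request

def chunk_records(records):
    """Split records into uploads that stay within the limits above."""
    batch, batch_bytes = [], 2  # 2 bytes for the enclosing "[]"
    for record in records:
        # Mirror the server-side trimming so oversized content is cut client-side.
        record = dict(record, content=record["content"][:MAX_RECORD_CHARS])
        size = len(json.dumps(record).encode("utf-8")) + 1  # +1 for the separator comma
        if batch and (len(batch) >= MAX_RECORDS_PER_UPLOAD
                      or batch_bytes + size > MAX_PAYLOAD_BYTES):
            yield batch
            batch, batch_bytes = [], 2
        batch.append(record)
        batch_bytes += size
    if batch:
        yield batch
```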
Log events per minute
Log data ingestion is limited by default to 1,000,000 log events per minute per tenant. If the log data stream within your tenant exceeds this limit, all log events above the limit are ignored. Upgrade to Log Management and Analytics powered by Grail to ingest a higher volume of log events.
This limit applies to all log sources (log ingestion via OneAgent, log ingestion via API, and log ingestion via extensions). You can use self-monitoring metrics to get a count of incoming log events and of log events rejected because this limit was exceeded. For details, see Self-monitoring metrics.
Maximum attributes
A log record can have up to 50 attributes. Additional attributes (as they appear in the JSON stream) are ignored.
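If you build records client-side, you can enforce this cap yourself instead of relying on the server to silently ignore the extra keys. A hypothetical guard:

```python
MAX_ATTRIBUTES = 50

def cap_attributes(attributes: dict) -> dict:
    # Keep at most the first 50 attributes (insertion order), since
    # any attributes beyond that count are ignored on ingest anyway.
    return dict(list(attributes.items())[:MAX_ATTRIBUTES])
```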
High-cardinality attributes
Unique log data attributes (high-cardinality attributes) such as span_id and trace_id generate excessively large facet lists that can impact log viewer performance. Because of this, they aren't listed in log viewer facets. You can still use them in a log viewer advanced search query.
Log ingestion API request objects
In addition to the generic Dynatrace API limitations (Dynatrace API - Access limit), the following log ingestion API-specific limits apply:
- LogMessagePlain: a plain text object. The length of the message is limited to 8,192 characters. Any content exceeding the limit is trimmed.
- LogMessageJson: a JSON object. The object might contain the following types of keys (the possible key values are listed below):
  - Timestamp. The following formats are supported: UTC milliseconds, RFC3339, and RFC3164. If not set, the current timestamp is used.
  - Severity. If not set, NONE is used.
  - Content. If the content key is not set, the whole JSON is parsed as the content.
  - Semantic attribute. Only values of the String type are supported. Semantic attributes are indexed and can be used in queries; they are also displayed in aggregations (facets). If an unsupported key occurs, it is not indexed and cannot be used in indexing and aggregations.

  The length of each value is limited. Any content exceeding the limit is trimmed. Default limits:
  - Content: 8,192 characters
  - Semantic attribute value: 250 characters
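To make the shape of a LogMessageJson upload concrete, here is a sketch of a single-record ingest request. The environment URL and token are placeholders, and the /api/v2/logs/ingest path and logs.ingest token scope are assumptions to verify against Log Monitoring API - POST ingest logs:

```python
import requests

# Placeholders: use your environment URL and an API token with the logs.ingest scope.
DYNATRACE_URL = "https://{your-environment-id}.live.dynatrace.com"
API_TOKEN = "dt0c01.EXAMPLE"

record = {
    "timestamp": "2024-05-01T12:00:00.000Z",  # RFC3339; omit to use the current time
    "severity": "INFO",                       # NONE if not set
    "content": "Order 4711 processed",        # whole JSON becomes content if omitted
    "service.name": "order-service",          # semantic attribute, String values only
}

response = requests.post(
    f"{DYNATRACE_URL}/api/v2/logs/ingest",
    headers={
        "Authorization": f"Api-Token {API_TOKEN}",
        "Content-Type": "application/json; charset=utf-8",
    },
    json=[record],  # a single request may carry up to 50,000 records / 5 MB
    timeout=10,
)
response.raise_for_status()  # a timestamp older than 24 hours yields a 400
```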
Sensitive data masking limits
Be aware of the following limitations to sensitive data masking:
- Sensitive data masking requires OneAgent version 1.243 or later.
- Sensitive data masking in Log Monitoring v1 cannot be migrated to the latest Log Monitoring.
- If the masking process takes too much time, the affected log file is blocked until OneAgent is restarted or a configuration change is made, and you then get the File not monitored - incorrect sensitive data masking rule message.
ActiveGate throughput
If you use the SaaS endpoint, you don't need to worry about ActiveGate throughput; it is the same as for Grail. If you use an Environment ActiveGate, the throughput is 3.3 GB/min with RTT <= 200 ms.