Log Management and Analytics default limits

This page lists the default limits for the latest version of Dynatrace Log Management and Analytics. These limits apply to both log file ingestion and log ingestion via the Log ingestion API.

Log ingestion limits

The table below summarizes the most important default limits related to log ingestion. All limits refer to UTF-8-encoded data.

| Type | Limit | OpenPipeline limit | Description |
|---|---|---|---|
| Content | 65,536 bytes | 10 MB¹ | The maximum size of a log entry body |
| Attribute key | 100 bytes | 100 bytes | The maximum length of an attribute key |
| Attribute value length | 250 bytes | 32 kB | The maximum length of an attribute value |
| Number of log attributes | 50 | 500 | The maximum number of attributes a log can contain |
| Log events per minute | No limit | No limit | The maximum number of log events per minute |
| Log age | 24 hours | 24 hours | The maximum age of log entries at ingestion |
| Logs with future dates | No restriction² | No restriction² | How far into the future log entry timestamps can reach |
| Values per attribute | 32 values | 32 values | The maximum number of individual values an attribute can contain |
| Request size | 10 MB | 10 MB | The maximum size of the payload data |
| Number of log records | 50,000 records | 50,000 records | The maximum number of log records per request |
| Nested objects | 5 levels | 5 levels | The maximum number of levels ingested with nested objects |
| Extracted log attribute | 32 kB | 4,096 bytes | When logs are added to an event template, log attributes are truncated to 4,096 bytes |

¹ The content limit is lower (512 kB) for logs routed to the Classic pipeline.

² There is no ingestion limit on log entries with future timestamps, but entries with timestamps more than 10 minutes in the future have their timestamp set to the moment of ingestion.

Check your access to OpenPipeline in Log processing with OpenPipeline.
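
For orientation only, here is a minimal Python sketch (not official Dynatrace tooling) that pre-checks a single log record against the default per-record limits in the table above before sending it. The constants and the check_record helper are assumptions made for illustration; adjust the values if your environment routes logs through OpenPipeline.

```python
# Hypothetical pre-check against the default (non-OpenPipeline) per-record limits
# listed above. All byte counts assume UTF-8 encoding, as stated in the table.

MAX_CONTENT_BYTES = 65_536   # maximum size of the log entry body
MAX_ATTR_KEY_BYTES = 100     # maximum length of an attribute key
MAX_ATTR_VALUE_BYTES = 250   # maximum length of an attribute value
MAX_ATTRIBUTES = 50          # maximum number of attributes per log record


def check_record(record: dict) -> list[str]:
    """Return human-readable warnings for limits this record would exceed."""
    warnings = []
    content = str(record.get("content", ""))
    if len(content.encode("utf-8")) > MAX_CONTENT_BYTES:
        warnings.append("content exceeds 65,536 bytes")

    attributes = {k: v for k, v in record.items() if k != "content"}
    if len(attributes) > MAX_ATTRIBUTES:
        warnings.append(f"{len(attributes)} attributes exceed the limit of {MAX_ATTRIBUTES}")
    for key, value in attributes.items():
        if len(key.encode("utf-8")) > MAX_ATTR_KEY_BYTES:
            warnings.append(f"attribute key '{key}' exceeds {MAX_ATTR_KEY_BYTES} bytes")
        if len(str(value).encode("utf-8")) > MAX_ATTR_VALUE_BYTES:
            warnings.append(f"value of '{key}' exceeds {MAX_ATTR_VALUE_BYTES} bytes")
    return warnings


print(check_record({"content": "Connection refused", "service.name": "checkout"}))  # []
```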

Log ingestion latency

Logs ingested via OneAgent are typically ready for analysis between a few seconds and 90 seconds (30 seconds on average).

Logs ingested by API are available for analysis in Dynatrace after 10 seconds on average.

Occasionally, higher latency can be caused by data-loss prevention mechanisms such as retransmissions, buffering, or other factors that introduce delays.

Log record minimum timestamp

The earliest accepted timestamps are:

| Minimum timestamp | Description |
|---|---|
| The current time minus 24 hours for log records | This applies to all log record sources (OneAgent and generic log ingestion). If a log record contains a timestamp earlier than the current time minus 24 hours, the record is dropped. |
| The current time minus 1 hour for log metrics and events | Data points for metrics from logs and events accept data for the current time minus 1 hour. Data points outside this timeframe are dropped. |
| Timestamp earlier than the current time minus 24 hours | If a log record contains such a timestamp, the record is dropped and the generic log ingestion API returns a 400 response code. |
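
As an illustration of the timestamp rules above, the following Python sketch checks whether a timestamp still falls inside the acceptance window (24 hours for log records, 1 hour for data points of log metrics and events). The is_accepted helper is a hypothetical name used only for this example.

```python
from datetime import datetime, timedelta, timezone

# Illustrative check of the minimum-timestamp rules above: 24 hours for log
# records, 1 hour for data points of log metrics and events. Assumes UTC timestamps.

LOG_RECORD_WINDOW = timedelta(hours=24)
LOG_METRIC_WINDOW = timedelta(hours=1)


def is_accepted(timestamp: datetime, *, metric: bool = False) -> bool:
    """Return True if a record (or metric data point) with this timestamp is still accepted."""
    window = LOG_METRIC_WINDOW if metric else LOG_RECORD_WINDOW
    return timestamp >= datetime.now(timezone.utc) - window


# A record timestamped 25 hours ago is dropped; the generic log ingestion API
# would answer such a request with a 400 response code.
too_old = datetime.now(timezone.utc) - timedelta(hours=25)
print(is_accepted(too_old))               # False
print(is_accepted(too_old, metric=True))  # False
```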

Log metrics

The number of log metrics is limited to:

  • 100,000 (1,000 per pipeline × 100 pipelines) for Log Management and Analytics powered by Grail with OpenPipeline
  • 1,000 for Log Management and Analytics powered by Grail without OpenPipeline enabled
  • 50 in other cases

Log ingestion API request objects

In addition to the generic Dynatrace API limitations (Dynatrace API - Access limit), the following log ingestion API-specific limits apply:

  • LogMessageJson JSON object.
    The object can contain the following types of keys, described in the table below (a payload sketch follows this list):

    | Type | Description |
    |---|---|
    | Timestamp | Supported formats: UTC milliseconds, RFC3339, and RFC3164. If not set, the current timestamp is used. |
    | Severity | If not set or not recognized, NONE is used. |
    | Content | If the content key is not set, the whole JSON is parsed as the content. |
    | Attributes | Only string values are supported; numbers and booleans are converted to strings. Semantic attributes are also displayed in attribute filters and suggested when editing queries or creating metrics or alerts. |

  • LogMessageOTLP OpenTelemetry Protocol object. See Ingest OpenTelemetry logs.
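
The sketch below shows roughly what a single LogMessageJson payload with the key types above could look like when posted to the log ingestion API. The endpoint path, token placeholder, and attribute names are assumptions for illustration; consult the Log ingestion API reference for your environment before relying on them.

```python
import json
import urllib.request

# Illustrative only: one LogMessageJson object with the four key types described
# above. The endpoint path and the placeholders below are assumptions; check the
# Log ingestion API reference for your environment.

ENVIRONMENT_URL = "https://{your-environment-id}.live.dynatrace.com"  # placeholder
API_TOKEN = "dt0c01.EXAMPLE"                                          # placeholder

record = {
    "timestamp": "2024-05-21T13:45:30.000Z",  # RFC3339; omit to use the ingest time
    "severity": "error",                      # unrecognized values fall back to NONE
    "content": "Connection refused by backend",
    "service.name": "checkout",               # attributes: string values only
    "retry.count": "3",                       # numbers/booleans are converted to strings
}

request = urllib.request.Request(
    f"{ENVIRONMENT_URL}/api/v2/logs/ingest",    # assumed ingest endpoint
    data=json.dumps([record]).encode("utf-8"),  # the payload is a JSON array of records
    headers={
        "Authorization": f"Api-Token {API_TOKEN}",
        "Content-Type": "application/json; charset=utf-8",
    },
    method="POST",
)
# with urllib.request.urlopen(request) as response:  # enable with real credentials
#     print(response.status)
```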

Log autodiscovery limits when using OneAgent

Log files monitored by OneAgent:

  • cannot be deleted earlier than one minute after creation.
  • must only be appended to (old content is not updated).
  • must have text content.
  • must be kept open constantly (not just for short periods of adding log entries).
  • must be opened in write mode.

In standard environments, the OneAgent log module supports up to 100 files in a single log directory, 1 GB of initial log content (when the OneAgent log module runs for the first time), and 10 MB of new log content per minute. If you have more data, especially an order of magnitude more, the OneAgent log module is likely to handle it as well, but we advise you to contact support to review your setup beforehand.

In special cases, such as very poor hardware performance, the OneAgent log module's limitations might be stricter.
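
If you want a rough sense of whether a directory stays within these defaults, a sketch like the following can help; the check_log_directory helper and the directory path are hypothetical, and the 10 MB-per-minute growth rate is not covered because it requires repeated sampling.

```python
from pathlib import Path

# Rough, illustrative pre-check of a log directory against the defaults described
# above: up to 100 files per directory and about 1 GB of initial log content.

MAX_FILES = 100
MAX_INITIAL_BYTES = 1 * 1024**3  # 1 GB


def check_log_directory(path: str) -> None:
    files = [p for p in Path(path).iterdir() if p.is_file()]
    total_bytes = sum(p.stat().st_size for p in files)
    if len(files) > MAX_FILES:
        print(f"{len(files)} files exceed the {MAX_FILES}-file default")
    if total_bytes > MAX_INITIAL_BYTES:
        print(f"{total_bytes / 1024**3:.1f} GB exceeds the 1 GB initial-content default")


# check_log_directory("/var/log/myapp")  # hypothetical log directory
```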

Log rotation limits

Scenarios that are not supported in the rotated log monitoring process include:

| Type | Description |
|---|---|
| Rotated log generation with a directory change | The potential consequence is the creation of duplicate and/or incomplete logs. |
| Rotated log generation with immediate compression | If a rotation criterion is met (for example, the required file size is reached), the file is moved to another location and immediately compressed, for example /var/log/application.log -> /var/log/application.log.1.gz -> /var/log/application.log.2.gz -> /var/log/application.log.3.gz. This might again lead to incomplete log ingestion. There should be at least one uncompressed rotated file. |
| Rotated log generation with queue logic | The oldest log records are removed whenever new content is added to a file, resulting in a relatively constant log file size. This scenario can easily be replaced with a supported rotation scheme, for example by starting a new file when the current file reaches a predefined size (see the sketch after this table). |
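
As a minimal sketch of the supported replacement mentioned in the last row, the following Python example uses the standard library's RotatingFileHandler to start a new file at a predefined size and keep uncompressed, numbered backups. The file name and sizes are illustrative only.

```python
import logging
from logging.handlers import RotatingFileHandler

# Size-based rotation matching the supported pattern suggested above: a new file is
# started when the current one reaches a predefined size, and rotated files stay
# uncompressed (application.log.1, application.log.2, ...).

handler = RotatingFileHandler(
    "application.log",      # illustrative file name
    maxBytes=10 * 1024**2,  # rotate once the file reaches 10 MB
    backupCount=5,          # keep five uncompressed rotated files
)
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.warning("rotation example entry")
```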

Sensitive data masking limits

Be aware of the following limitations to sensitive data masking:

  • If the masking process takes too much time, the affected log file is blocked until OneAgent is restarted or its configuration changes, and you get the File not monitored - incorrect sensitive data masking rule message.

ActiveGate throughput

If you are using the SaaS endpoint, you don't have to worry about ActiveGate throughput; it is the same as for Grail. If you use an Environment ActiveGate, the throughput is 3.3 GB/min with RTT <= 200 ms.
