Log Management and Analytics default limits

The following page lists default limits for the latest version of Dynatrace Log Management and Analytics.

The following limitations apply to both log file ingestion and generic log ingestion via API:

Ingest payload size

  • For the Generic log ingest API: the maximum payload size of a single request is 5 MB. Additional limitations on fields and attributes are listed at Log Monitoring API - POST ingest logs.
  • For the Generic log ingest API and OneAgent: the maximum length of a log record's content field is 65,536 UTF-8 encoded bytes. Any content exceeding the limit is trimmed (the remainder of the log content is skipped), and ingestion continues from the beginning of the next log record.
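
To avoid silent server-side trimming, a sender can pre-trim record content to the 65,536-byte limit before ingestion. A minimal sketch (the helper name is hypothetical), taking care not to split a multi-byte character:

```python
MAX_CONTENT_BYTES = 65_536  # UTF-8 encoded bytes per log record content field

def trim_content(content: str, limit: int = MAX_CONTENT_BYTES) -> str:
    """Trim a log record's content to at most `limit` UTF-8 bytes,
    without splitting a multi-byte character."""
    encoded = content.encode("utf-8")
    if len(encoded) <= limit:
        return content
    # Decode the truncated bytes, dropping any partial trailing character.
    return encoded[:limit].decode("utf-8", errors="ignore")
```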

Number of log records in a payload

The maximum count of log records in a single upload is 50,000.
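
A client that sends large volumes has to respect both the 50,000-record and the 5 MB limits at the same time. A minimal batching sketch (the function name is hypothetical, and interpreting 5 MB as 5 MiB is an assumption):

```python
import json

MAX_RECORDS = 50_000                  # records per request
MAX_PAYLOAD_BYTES = 5 * 1024 * 1024  # 5 MB per request (MiB is an assumption)

def batch_records(records):
    """Split log records into batches that respect both request limits.

    Sizes are estimated for a compact JSON array, i.e. the payload should
    be serialized with separators=(",", ":").
    """
    batch, size = [], 2  # 2 bytes for the enclosing "[" and "]"
    for record in records:
        encoded = len(json.dumps(record, separators=(",", ":")).encode("utf-8")) + 1
        if batch and (len(batch) >= MAX_RECORDS or size + encoded > MAX_PAYLOAD_BYTES):
            yield batch
            batch, size = [], 2
        batch.append(record)
        size += encoded
    if batch:
        yield batch
```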

Log record maximum timestamp

The timestamp of a log record is limited in how far in the future it can lie. If a log record contains a timestamp more than 10 minutes in the future, the timestamp of the record is overridden with the current server time.
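
This server-side override can be mirrored on the client. A minimal sketch (the function name is hypothetical, and `now` stands in for the server clock):

```python
from datetime import datetime, timedelta, timezone

MAX_FUTURE = timedelta(minutes=10)

def effective_timestamp(record_ts: datetime, now: datetime) -> datetime:
    """Mirror the server rule: a timestamp more than 10 minutes in the
    future is replaced with the current (server) time."""
    return now if record_ts - now > MAX_FUTURE else record_ts
```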

Log ingestion latency

Logs ingested via OneAgent are typically ready for analysis between a few seconds and 90 seconds (30 seconds on average).

Logs ingested by API are available for analysis in Dynatrace after 10 seconds on average.

Occasionally, higher latency can be caused by data-loss-prevention mechanisms such as retransmissions, buffering, or other factors that introduce delays.

Log record minimum timestamp

The earliest timestamp for a log record is:

| Minimum timestamp | Description |
|---|---|
| The current time minus 24 hours for log records | This applies to all log record sources (OneAgent and generic log ingestion). If a log record contains a timestamp earlier than the current time minus 24 hours, the record is dropped and the generic log ingestion API returns a 400 response code. |
| The current time minus 1 hour for log metrics and events | Data points for metrics and events from logs accept data for the current time minus 1 hour. Data points outside of this timeframe are dropped. |
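
A client can pre-filter data that would be dropped anyway. A minimal sketch of both windows (the function and constant names are hypothetical):

```python
from datetime import datetime, timedelta, timezone

LOG_RECORD_WINDOW = timedelta(hours=24)  # log records (all sources)
LOG_METRIC_WINDOW = timedelta(hours=1)   # metrics and events from logs

def is_accepted(record_ts: datetime,
                now: datetime,
                window: timedelta = LOG_RECORD_WINDOW) -> bool:
    """Return False for data the server would drop as too old.

    Log records older than 24 hours are dropped (the generic ingest API
    returns a 400 response code); metric and event data points use the
    1-hour window instead.
    """
    return record_ts >= now - window
```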

Log size and attributes

The length of attribute values and the number of attributes are limited. Any content exceeding a limit is trimmed. Default limits:

| Type | Limit | OpenPipeline limit |
|---|---|---|
| Content | 65,536 UTF-8 encoded bytes | 524,288 UTF-8 encoded bytes |
| Attribute key | 100 UTF-8 encoded bytes | 100 UTF-8 encoded bytes |
| Attribute value | 250 UTF-8 encoded bytes | 2,500 UTF-8 encoded bytes |
| Number of log attributes | 50 | 250 |

Check your access to OpenPipeline in Log processing with OpenPipeline.
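
To see where data would be cut, a sender can approximate the trimming client-side. A minimal sketch using the classic (non-OpenPipeline) limits; the helper names are hypothetical:

```python
MAX_ATTRS = 50         # 250 with OpenPipeline
MAX_KEY_BYTES = 100
MAX_VALUE_BYTES = 250  # 2,500 with OpenPipeline

def trim_utf8(text: str, limit: int) -> str:
    """Trim text to at most `limit` UTF-8 bytes without splitting a character."""
    encoded = text.encode("utf-8")
    return text if len(encoded) <= limit else encoded[:limit].decode("utf-8", "ignore")

def trim_attributes(attrs: dict) -> dict:
    """Approximate server-side trimming: cap the attribute count and trim
    keys and values to their UTF-8 byte limits."""
    trimmed = {}
    for key, value in attrs.items():
        if len(trimmed) >= MAX_ATTRS:
            break
        trimmed[trim_utf8(key, MAX_KEY_BYTES)] = trim_utf8(str(value), MAX_VALUE_BYTES)
    return trimmed
```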

Log metrics

The number of log metrics is limited to 1,000 for Log Management and Analytics powered by Grail with OpenPipeline, and to 50 in other cases.

Log ingestion API request objects

In addition to the generic Dynatrace API limits (Dynatrace API - Access limit), the following log ingestion API-specific limits apply:

  • LogMessageJson JSON object.
    The object might contain the following types of keys (the possible key values are listed below):

    | Type | Description |
    |---|---|
    | Timestamp | The following formats are supported: UTC milliseconds, RFC 3339, and RFC 3164. If not set, the current timestamp is used. |
    | Severity | If not set or not recognized, NONE is used. |
    | Content | If the content key is not set, the whole JSON is parsed as the content. |
    | Attributes | Only values of the string type are supported; numbers and boolean values are converted to strings. Semantic attributes are also displayed in attribute filters and suggested when editing queries or creating metrics or alerts. |

  • LogMessageOTLP OpenTelemetry Protocol object. See OpenTelemetry ingest limits.
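
Putting the LogMessageJson keys together, a request to the generic ingest endpoint could look like the following sketch. The environment URL and token are placeholders, and the attribute name is an illustrative assumption:

```python
import json
import urllib.request

# Placeholder environment URL and token -- replace with your own values.
ENDPOINT = "https://{your-environment-id}.live.dynatrace.com/api/v2/logs/ingest"
API_TOKEN = "dt0c01.sample.token"

# A minimal LogMessageJson record: timestamp (RFC 3339), severity, content,
# and a string-valued attribute.
record = {
    "timestamp": "2024-05-01T12:00:00.000Z",
    "severity": "error",
    "content": "Connection refused by upstream service",
    "service.name": "checkout",  # attributes accept string values only
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps([record]).encode("utf-8"),
    headers={
        "Authorization": f"Api-Token {API_TOKEN}",
        "Content-Type": "application/json; charset=utf-8",
    },
    method="POST",
)
# urllib.request.urlopen(request)  # uncomment to send the request
```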

Log autodiscovery limits when using OneAgent

Log files autodiscovered by OneAgent:

  • cannot be deleted earlier than a minute after creation.
  • must be appended (old content is not updated).
  • must have text content.
  • must be opened constantly (not just for short periods of adding log entries).
  • must be opened in write mode.

Log rotation limits

Scenarios that are not supported in the rotated log monitoring process include:

| Type | Description |
|---|---|
| Rotated log generation with a directory change | The potential consequence is the creation of duplicates and/or incomplete logs. |
| Rotated log generation with immediate compression | If a rotation criterion is met (for example, the required file size is reached), the file is moved to another location and immediately compressed. Example: /var/log/application.log -> /var/log/application.log.1.gz -> /var/log/application.log.2.gz -> /var/log/application.log.3.gz. This process might lead to incomplete log ingest. There should be at least one uncompressed rotated file. |
| Rotated log generation with queue logic | The oldest log records are removed whenever new content is added to a file, resulting in a relatively constant log file size. This scenario can easily be replaced with a supported rotation scheme by, for example, starting a new file when the current file reaches a predefined size. |

Sensitive data masking limits

Be aware of the following limitations to sensitive data masking:

  • Sensitive data masking requires OneAgent version 1.243 or later.
  • Sensitive data masking in Log Monitoring v1 cannot be migrated to the latest Log Monitoring.
  • If the masking process takes too long, the affected log file is blocked until OneAgent is restarted or the configuration is changed, and the File not monitored - incorrect sensitive data masking rule message is displayed.

ActiveGate throughput

If you are using the SaaS endpoint, you don't need to worry about ActiveGate throughput; it is the same as for Grail. If you use an Environment ActiveGate, the throughput is 3.3 GB/min with RTT <= 200 ms.