Log data storage v1

Legacy Log Monitoring v1

You are viewing documentation for Log Monitoring v1. Dynatrace Log Monitoring v1 is considered a legacy solution.

Log Monitoring v1 will reach end of life and be switched off in November 2023.

SaaS environments will be automatically upgraded to LMA or LMC.

We strongly encourage you to switch to the latest Dynatrace Log Monitoring version.

If you are currently using Dynatrace SaaS, upgrade to the latest version of Dynatrace Log Monitoring.

Log Monitoring enables you to store all logs centrally in external storage. This makes log data available independently of the log files themselves, which can be beneficial in the following situations:

  • Short log retention periods
  • Volatile log storage
  • Legal requirements for keeping logs archived centrally for long time periods

In addition, you can:

  • Analyze multiple logs simultaneously
  • Parse log or JSON files
  • Generate metrics from log content

Log storage requirements and costs

Dynatrace SaaS

For Dynatrace SaaS customers, log files are stored in the same AWS availability zone where your Dynatrace environment resides. You don’t have to worry about storage performance, availability, or free space. Disk storage costs are included in your Log Monitoring subscription. Costs are based on the average size of your cloud-based log storage, including the amount of streamed log data and the defined retention period. For details, see DDUs for Log Monitoring and Data retention periods.

Dynatrace Managed

To store log files centrally on your Dynatrace Managed cluster, you must provide a common Network File System (NFS) mount point (path) that is identical throughout the cluster and available from all cluster nodes. With this approach, it's your responsibility to ensure appropriate levels of performance, availability, and free space on the mounted NFS volume. Costs are calculated based only on the amount of ingress log data (GB/day), not total storage size, so retention time doesn't influence storage costs. For deployments on AWS, we recommend that you use the Amazon Elastic File System for your log storage.

Before configuring the path for log storage, you can check your DDU consumption by going to Account Management > License / Subscription > Overview. The Davis data units (DDUs) model counts all incoming events from your log data. Each log event (log line, message, event, etc.) deducts 0.0005 DDU from your available quota; 1 GiB of ingested log data is equivalent to 1 million log events. See DDUs for Log Monitoring.
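As a rough illustration of how these figures add up, the following sketch estimates daily DDU consumption from an assumed ingest volume. The 0.0005 DDU per event rate and the 1 GiB to 1 million events equivalence are the figures above; the ingest volume is a hypothetical example, not a recommendation.

  # Rough estimate of daily DDU consumption for log ingest (illustrative only).
  # Conversion factors come from the documentation above; the ingest volume is
  # a hypothetical example value.

  DDU_PER_LOG_EVENT = 0.0005       # DDUs deducted per log event
  LOG_EVENTS_PER_GIB = 1_000_000   # 1 GiB of ingested log data ~ 1 million events

  def ddus_per_day(ingest_gib_per_day: float) -> float:
      """Return the estimated DDUs consumed per day for a given ingest volume."""
      events = ingest_gib_per_day * LOG_EVENTS_PER_GIB
      return events * DDU_PER_LOG_EVENT

  # Example: 20 GiB of log data ingested per day
  print(ddus_per_day(20))   # 20 * 1,000,000 * 0.0005 = 10,000 DDUs per day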

To set up your central location for log storage:

  1. Go to Settings > Log Monitoring.

  2. Set the Use network attached storage switch to the On position.

  3. Click the edit button and enter the mount point (path) to the network resource (for example, /usr/local/path/to/storage).

    Make sure that all Dynatrace cluster nodes have write access to the mount point you specified; one informal way to verify this is shown in the sketch after these steps.

  4. Restart the Dynatrace cluster using the Restart button for each of the nodes in the cluster.

  5. For each monitored environment, in the Set total environment quotas section, set Log Monitoring storage to a non-zero value (the default is 0 MB).
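The steps above assume every cluster node can write to the shared mount point. One informal way to spot-check this is a small script like the following, run on each node. This is only a sketch, not a Dynatrace tool; the path is the example value from step 3 and should be replaced with your actual mount point.

  # Quick write-access check for the shared NFS mount point (run on each node).
  # The path below is the example from step 3; adjust it to your environment.
  import os
  import tempfile

  MOUNT_POINT = "/usr/local/path/to/storage"

  def mount_is_writable(path: str) -> bool:
      """Try to create, write, and remove a temporary file under the mount point."""
      if not os.path.isdir(path):
          return False
      try:
          with tempfile.NamedTemporaryFile(dir=path) as handle:
              handle.write(b"write test")
          return True
      except OSError:
          return False

  print(MOUNT_POINT, "writable:", mount_is_writable(MOUNT_POINT))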

Infrastructure impact

The log writing queue can take up to 1% of available memory on a cluster node. The CPU typically isn't significantly affected. For metrics, a single node can process about two million log entries per second.

Because of the volume of logs, log files are compressed using Zstandard before they're sent for analysis. On average, compression requires about 10% of compute power; in the worst case, it can require about 25%.
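To put these approximations together, the sketch below estimates what share of a single node's capacity a given log stream might occupy, using the figures quoted above (about two million entries per second per node and roughly 10 to 25% of compute for compression). The incoming rate is a hypothetical example.

  # Back-of-the-envelope node load estimate (illustrative only).
  # Capacity and compression figures are the approximations quoted above;
  # the incoming log rate is a hypothetical example.

  NODE_CAPACITY_ENTRIES_PER_SEC = 2_000_000   # approx. log entries processed per node per second
  COMPRESSION_CPU_AVG = 0.10                  # average share of compute used by compression
  COMPRESSION_CPU_WORST = 0.25                # worst-case share of compute used by compression

  def node_load_fraction(entries_per_sec: float) -> float:
      """Fraction of a single node's log-processing capacity used by a stream."""
      return entries_per_sec / NODE_CAPACITY_ENTRIES_PER_SEC

  # Example: 500,000 log entries per second on one node
  print(f"processing load: {node_load_fraction(500_000):.0%}")  # 25%
  print(f"compression overhead: {COMPRESSION_CPU_AVG:.0%} to {COMPRESSION_CPU_WORST:.0%} of compute")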

If you configured Log Monitoring to store all logs centrally, all log content needs to be read. If the log files are on NFS, each byte written to the log file also needs to be read. As a result, Log Monitoring reads from NFS at the same rate the log files are written. Depending on your network infrastructure, this may impact your network throughput, as a higher rate of log data generation will cause higher network utilization.

For the ActiveGate component, provide one physical core per 50 Mbps of traffic (compressed content).
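As an illustration of that sizing rule, the sketch below estimates the number of physical ActiveGate cores for an assumed rate of compressed log traffic. The 50 Mbps-per-core figure comes from above; the traffic rate is a hypothetical example.

  # Estimate physical ActiveGate cores for compressed log traffic (illustrative only).
  # The 50 Mbps-per-core rule comes from the documentation above; the traffic
  # rate is a hypothetical example value.
  import math

  MBPS_PER_CORE = 50   # compressed log traffic handled per physical core

  def activegate_cores(compressed_mbps: float) -> int:
      """Return the number of physical cores needed for the given traffic rate."""
      return math.ceil(compressed_mbps / MBPS_PER_CORE)

  # Example: 180 Mbps of compressed log traffic -> 4 physical cores
  print(activegate_cores(180))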