Troubleshooting Log Monitoring Classic
How much does Log Monitoring cost?
Log Monitoring pricing is based on the Davis data units (DDUs) model. See DDUs for Log Monitoring Classic for details on how DDU consumption is calculated for Log Monitoring.
Ingested logs don't look as expected.
For example, the content is trimmed, the timestamp is incorrect, or a processing rule doesn't seem to work.
The log ingest pipeline consists of several stages in which logs are processed and checked against product characteristics and limits. Each log record carries warnings about issues that occurred in the log ingest and processing pipeline. Warnings are persisted in the dt.ingest.warnings attribute of each individual log record. See the list and description of all possible log ingestion warnings.
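If you prefer to check programmatically, the following is a minimal sketch that pulls recent records through the Log Monitoring API v2 search endpoint and prints any ingest warnings found on them. The environment URL and token are placeholders, and exactly where the dt.ingest.warnings attribute surfaces in the response is an assumption that may differ by version.

```python
import requests

# Minimal sketch: fetch recent log records via the Log Monitoring API v2
# search endpoint and print any ingest warnings attached to them.
# DT_ENV and DT_TOKEN are placeholders; the token needs the logs.read scope.
DT_ENV = "https://your-environment.example.com/e/your-environment-id"
DT_TOKEN = "dt0c01...."  # placeholder

resp = requests.get(
    f"{DT_ENV}/api/v2/logs/search",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    params={"from": "now-1h", "limit": 100},
    timeout=30,
)
resp.raise_for_status()

for record in resp.json().get("results", []):
    # Where dt.ingest.warnings appears in the response is an assumption;
    # check both the top level and additionalColumns.
    cols = record.get("additionalColumns") or {}
    warnings = record.get("dt.ingest.warnings") or cols.get("dt.ingest.warnings")
    if warnings:
        print(record.get("timestamp"), warnings)
```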
I don't see my logs, or log events are missing in Dynatrace Managed.
If you don't have permissions for Cluster Management Console, contact your cluster administrator.
To see if your logs are ingested
- In the Dynatrace menu, go to Logs.
- Check for arriving logs in the table.
If your logs are not listed
- Make sure that the latest version of Dynatrace Log Monitoring is enabled for the environment.
- If Log Monitoring is enabled, in Cluster Management Console, go to Environments and select the affected environment.
- Make sure Maximum ingest of Log Events is greater than 0.
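To verify the pipeline end to end, you can also push a test record through the log ingest API and then look for it in the Logs viewer. Below is a minimal sketch; the environment URL, token, and the log.source value are placeholders, and the token needs the logs.ingest scope.

```python
import requests

# Minimal sketch: send one test record through the log ingest API, then
# look for it in the Logs viewer once ingest processing completes.
DT_ENV = "https://your-environment.example.com/e/your-environment-id"
DT_TOKEN = "dt0c01...."  # placeholder

payload = [{
    "content": "log-monitoring ingest smoke test",
    "severity": "info",
    "log.source": "ingest-troubleshooting-check",  # hypothetical source name
}]

resp = requests.post(
    f"{DT_ENV}/api/v2/logs/ingest",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    json=payload,
    timeout=30,
)
# A 2xx status means the record was accepted for processing.
print(resp.status_code, resp.text)
```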
I get the Ingested log data is trimmed message in Dynatrace Managed.
You receive the message: Ingested log data is trimmed.
- In Cluster Management Console, go to Events.
- Make sure Only show log events that have minimum severity level of is set to informational or warning.
- In the Search box, search for Ingested log data is trimmed. If a cluster event was created recently, check its time, affected environments, and log ingest limit.
- For each affected environment, go to Logs and set the global timeframe selector to the hour of the cluster event from the previous step.
- Inspect 1-minute intervals of log events ingest (see the sketch after this list).
- If you see that log events are trimmed to the Maximum ingest of Log Events limit set for this environment, you need to increase it.
If log ingest was below the limit in subsequent intervals, your log entries will be re-ingested and should be available later, but you could consider increasing the limit to avoid a delay in data processing.
- To increase the ingest limit, in Cluster Management Console, find Maximum ingest of Log Events and edit the Max limit value for the selected environment.
We recommend that you increase the limit incrementally and verify the ingest volume after each change.
- Make sure that your cluster still meets the hardware recommendations for the new ingest volume.
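For the inspection step above, here is a hedged sketch that counts records per minute over the last hour via the search API; a flat ceiling across consecutive minutes suggests trimming at the limit. It assumes timestamps are returned as epoch milliseconds, counts records rather than raw volume, and reads only a single result slice (pagination via nextSliceKey is omitted), so treat the output as an approximation.

```python
from collections import Counter
from datetime import datetime, timezone

import requests

DT_ENV = "https://your-environment.example.com/e/your-environment-id"
DT_TOKEN = "dt0c01...."  # placeholder token with the logs.read scope

resp = requests.get(
    f"{DT_ENV}/api/v2/logs/search",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    params={"from": "now-1h", "limit": 1000},
    timeout=30,
)
resp.raise_for_status()

# Bucket records into 1-minute intervals by timestamp.
per_minute = Counter()
for record in resp.json().get("results", []):
    ts_ms = record.get("timestamp")  # epoch milliseconds (assumed)
    if ts_ms is not None:
        minute = datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc).strftime("%H:%M")
        per_minute[minute] += 1

for minute, count in sorted(per_minute.items()):
    print(minute, count)
```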
I get the Log ingest queue is full message in Dynatrace Managed.
You receive the message: Log ingest queue is full. Your Dynatrace deployment requires scaling. Your node probably doesn't meet the hardware requirements described in Managed hardware requirements, so your logs can't be processed.
You can:
- Add extra RAM to your node and restart it.
- Lower the ingest limit.
I get the Elasticsearch log queue is full message in Dynatrace Managed.
You receive the message: Elasticsearch log queue is full. Your Elasticsearch deployment requires scaling.
You need to scale out your cluster by either:
- Adding extra nodes to the Elasticsearch cluster
- Increasing the number of processors a node can utilize
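To gauge whether the Elasticsearch write path is saturated before scaling, you can query the standard Elasticsearch _cat/thread_pool API. This is a sketch under the assumption that the Elasticsearch HTTP endpoint of your deployment is reachable; the host and port below are placeholders specific to your setup.

```python
import requests

# Sketch: inspect write thread-pool saturation via the standard
# Elasticsearch _cat/thread_pool API. The endpoint URL is an assumption;
# adjust it to however Elasticsearch is reachable in your deployment.
ES = "http://localhost:9200"  # placeholder

resp = requests.get(
    f"{ES}/_cat/thread_pool/write",
    params={"v": "true", "h": "node_name,name,active,queue,rejected"},
    timeout=10,
)
print(resp.text)
# Persistently high "queue" values and growing "rejected" counts indicate
# the cluster cannot keep up and needs more nodes or processors.
```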
Who can increase the ingestion limit per tenant?
Contact a Dynatrace product expert via live chat within your Dynatrace environment.
What might prevent logs from appearing on the server?
- Over 200 rotated log file groups are detected for a process.
Dynatrace detects a rotation scheme for log files and reports all the log files in the detected scheme as a group under one name, which typically maps to many files on disk. A large number of rotated file groups usually means that Dynatrace did not recognize the rotation pattern correctly and reports each physical file separately as a group. After a total of 200 reported rotated log file groups is reached, autodetection is turned off for this process (a diagnostic sketch after this list shows one way to approximate the group count). To resolve this issue, you can:
- Check your Log Monitoring configuration (see Log sources and storage (Logs Classic) and Add log files manually (Logs Classic)).
- Raise the limit in the OneAgent configuration, FilesInGroup property (see Log Monitoring configuration (Logs Classic)).
- The files are growing very quickly.
When a log file grows very quickly (at a pace of over 10 MB/s), its content might be skipped. OneAgent will continue to send the log file as long as both the network and the server can handle the load. Note that 10 MB/s with typical compression is approximately 10 Mbps of upload traffic. A sketch after this list shows how to measure a file's growth rate.
- The file name or path doesn't match typical log naming.
OneAgent checks whether logs match a file name and path pattern that is typical for log files. If there is no match, the file is not reported or sent to the server. This is needed to avoid false positives in detecting files as logs, and to prevent pulling non-log data from hosts. To remedy this, you can set rules in the OneAgent configuration, AutomaticFile property (see Log Monitoring configuration (Logs Classic)); the path-check sketch after this list illustrates the idea.
- There are symbolic links in the file path.
This limitation applies to custom files whose path contains symbolic links. The physical path of the file pointed to by a symbolic link must meet the criteria for a log; otherwise, symbolic links could be used to read non-log data from a host.
- The file size is below 500 bytes.
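The diagnostic sketch referenced in the first item: it approximates the rotated-group count by grouping the files in a directory after stripping common rotation suffixes (numbers, dates, compression extensions). The directory and the suffix regex are illustrative assumptions, not OneAgent's actual detection logic.

```python
import os
import re
from collections import defaultdict

# Common rotation suffixes: ".1", "-20240115", ".2024-01-15", plus an
# optional compression extension. An illustrative guess, not OneAgent's scheme.
ROTATION_SUFFIX = re.compile(r"([._-]\d{1,8}|[._-]\d{4}-\d{2}-\d{2})?(\.gz|\.zip)?$")

def group_name(filename: str) -> str:
    """Strip rotation suffixes so rotated siblings collapse to one name."""
    return ROTATION_SUFFIX.sub("", filename)

groups = defaultdict(list)
for name in os.listdir("/var/log/myapp"):  # hypothetical log directory
    groups[group_name(name)].append(name)

print(f"{len(groups)} groups detected")
for base, files in sorted(groups.items()):
    print(base, len(files))
```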
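The growth-rate sketch: sample a file's size twice to estimate how fast it grows and compare against the 10 MB/s pace mentioned above. The path is a placeholder.

```python
import os
import time

PATH = "/var/log/myapp/app.log"  # hypothetical path
WINDOW_S = 5  # sampling window in seconds

# Sample the file size at the start and end of the window.
size_before = os.path.getsize(PATH)
time.sleep(WINDOW_S)
size_after = os.path.getsize(PATH)

rate_mb_s = (size_after - size_before) / WINDOW_S / 1_000_000
print(f"growth rate: {rate_mb_s:.2f} MB/s")  # sustained > 10 MB/s risks skipping
```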
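And the path-check sketch: it mimics the name-pattern and symbolic-link checks for a candidate file by resolving symlinks to the physical path and testing that path against a log-like regex. The regex is an illustrative stand-in for OneAgent's actual rules.

```python
import os
import re

# Illustrative "looks like a log" pattern: common log extensions
# (optionally rotated/compressed) or a path segment named log/logs.
LOG_LIKE = re.compile(r"\.(log|out|txt)(\.\d+)?(\.gz)?$|(^|/)logs?(/|$)", re.IGNORECASE)

def looks_like_log(path: str) -> bool:
    physical = os.path.realpath(path)  # follow symbolic links
    if physical != os.path.abspath(path):
        print(f"symlink resolves to: {physical}")
    # The physical path, not the symlink, must match the pattern.
    return bool(LOG_LIKE.search(physical))

print(looks_like_log("/var/log/myapp/current"))  # hypothetical symlink
```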