Structure technology-specific logs

OpenPipeline offers pre-defined parsers to structure technology-specific logs according to the Dynatrace Semantic Dictionary. The parser library supports a broad range of technologies—including well-known data formats, popular third-party services, and cloud providers—for example, AWS Lambda, Python, Cassandra, and Apache Tomcat.

Who this is for

This article is intended for administrators and app users.

What you will learn

In this article, you will learn to structure technology-specific logs in OpenPipeline and analyze them in Notebooks.

Before you begin

Steps

  1. Go to OpenPipeline app (new) > Logs > Pipelines.
  2. To create a new pipeline, select Pipeline and enter a name—for example, Syslog - Pipeline.
  3. To configure processing, go to Processing > Processor > Technology bundle and choose the Syslog bundle.
  4. Copy the matching condition.
  5. Select Save.

You successfully configured a new pipeline with a processor that structures syslog logs according to pre-defined rules matching the Dynatrace Semantic Dictionary. The new pipeline appears in the pipeline list.

  1. Go to OpenPipeline app (new) > Logs > Dynamic routing.

  2. To create a new route, select Dynamic route and enter:

    • A descriptive name—for example, Syslog
    • The matching condition you copied
    • The pipeline containing the processing instructions (Syslog - Pipeline)
  3. Select Add.

  4. Make sure to place the new route in the correct position on the list. Routes are evaluated from top to bottom. Data is dynamically routed into a pipeline according to the first applicable matching condition. Routed data is not evaluated against any subsequent conditions.

  5. Select Save.

You successfully configured a new route. All syslog logs are now routed to the pipeline for processing. The new route appears in the route list.
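The top-to-bottom, first-match routing behavior described above can be sketched in plain Python (an illustrative simplification, not Dynatrace's actual implementation; the second route and its pipeline name are hypothetical):

```python
# Illustrative sketch of dynamic routing: routes are evaluated top to
# bottom, and a record goes to the first pipeline whose matching
# condition applies. Conditions are simplified to Python predicates.
def route(record, routes):
    """routes: ordered list of (condition, pipeline_name) pairs."""
    for condition, pipeline in routes:
        if condition(record):
            return pipeline  # first match wins; later routes never see the record
    return "default"  # unmatched records continue to the default pipeline

routes = [
    (lambda r: r.get("dt.openpipeline.source") == "extension:syslog", "Syslog - Pipeline"),
    # hypothetical second route, included only to show ordering
    (lambda r: r.get("log.source") == "aws.lambda", "Lambda - Pipeline"),
]

print(route({"dt.openpipeline.source": "extension:syslog"}, routes))
# -> Syslog - Pipeline
```

Because evaluation stops at the first match, moving the syslog route below a broader condition would divert syslog records into the wrong pipeline, which is why route position matters.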

Once logs are processed according to the technology bundle, several attributes are extracted from the log content into new fields that match the Dynatrace Semantic Dictionary. You can easily filter logs by status, application, or attributes specific to the technology, as shown in the examples below.
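To illustrate the kind of extraction such a bundle performs, here is a minimal sketch that pulls an RFC 5424 syslog header apart into fields with names like those used in the queries below. This is an assumption-laden toy parser, not the actual OpenPipeline processor:

```python
import re

# Toy parser (illustrative only, not the OpenPipeline implementation):
# split an RFC 5424 syslog line into structured fields.
SEVERITIES = ["EMERGENCY", "ALERT", "CRITICAL", "ERROR", "WARN", "NOTICE", "INFO", "DEBUG"]
PATTERN = re.compile(r"<(\d+)>\d+ (\S+) (\S+) (\S+) (\S+) \S+ \S+ (.*)")

def parse_syslog(line):
    m = PATTERN.match(line)
    if not m:
        return {"content": line}  # leave unparsable lines untouched
    pri = int(m.group(1))
    return {
        "status": SEVERITIES[pri % 8],  # severity is the low 3 bits of PRI
        "timestamp": m.group(2),
        "host.name": m.group(3),
        "syslog.appname": m.group(4),
        "syslog.procid": m.group(5),
        "content": m.group(6),
    }

line = '<12>1 2024-05-01T10:30:00Z web-01 nginx 4711 - - upstream timed out'
record = parse_syslog(line)
print(record["status"], record["syslog.appname"])
# -> WARN nginx
```

Once fields such as status and syslog.appname exist as attributes, filtering and grouping become simple equality checks rather than free-text searches over the raw log line.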

  1. Go to Notebooks and open a new or existing notebook.

  2. Select Add > DQL and enter one of the following queries:

    • Fetch syslog warn logs

      fetch logs
      | filter dt.openpipeline.source == "extension:syslog"
      | filter status == "WARN"
      | sort timestamp desc
    • Group syslog logs by application

      fetch logs
      | filter dt.openpipeline.source == "extension:syslog" and isNotNull(syslog.appname)
      | summarize totalCount = count(), by: {syslog.appname}
      | sort totalCount desc
    • Sort applications by the percentage of syslog error logs

      fetch logs
      | filter dt.openpipeline.source == "extension:syslog" and isNotNull(syslog.appname)
      | summarize TotalCount = count(), Count = countIf(status == "ERROR"), by: {syslog.appname}
      | fieldsAdd Percentage = (Count * 100 / TotalCount)
      | sort Percentage desc
      | fieldsRemove TotalCount

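As a cross-check, the logic of the last query above can be mirrored in plain Python. The records here are assumed sample data, used only to make the summarize-and-percentage arithmetic concrete:

```python
from collections import Counter

# Sample records standing in for syslog logs (assumed data, not real output).
logs = [
    {"syslog.appname": "sshd", "status": "INFO"},
    {"syslog.appname": "sshd", "status": "ERROR"},
    {"syslog.appname": "cron", "status": "ERROR"},
    {"syslog.appname": "cron", "status": "ERROR"},
]

# Equivalent of summarize: total and ERROR counts per application.
total = Counter(log["syslog.appname"] for log in logs)
errors = Counter(log["syslog.appname"] for log in logs if log["status"] == "ERROR")

# Equivalent of fieldsAdd + sort: derive the error percentage per
# application and put the worst application first.
rows = sorted(
    ((app, errors[app], errors[app] * 100 / total[app]) for app in total),
    key=lambda row: row[2],
    reverse=True,
)
print(rows)
# -> [('cron', 2, 100.0), ('sshd', 1, 50.0)]
```

Here cron produced errors in 2 of 2 logs (100%) and sshd in 1 of 2 (50%), matching what the DQL query would report for the same records.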
Conclusion

You successfully structured syslog logs according to pre-defined processing rules in OpenPipeline. Incoming records that match the routing condition are routed to the syslog pipeline, where new attributes specific to the syslog technology are extracted. The new attributes match the Dynatrace Semantic Dictionary, allowing for smooth analysis. You can filter syslog logs in Notebooks and get the most out of your structured logs.