Stream Kubernetes logs with OneAgent Log Module

Dynatrace provides integrated log management and analytics for your Kubernetes environments, either by running the OneAgent Log module or by integrating with log collectors such as Fluent Bit, the Dynatrace OpenTelemetry Collector, Logstash, or Fluentd.

On this page, you learn how to use the OneAgent Log module to ingest logs from Kubernetes.

Deployment options for Kubernetes log monitoring

Dynatrace provides a flexible approach to Kubernetes observability where you can pick and choose the level of observability you need for your Kubernetes clusters. The Dynatrace Operator manages all the necessary components to get the data into Dynatrace for you. This also applies to collecting logs from Kubernetes containers. Depending on the selected observability option, the Dynatrace Operator configures and manages the Log module to work with or without a OneAgent on the node.

Log monitoring value | Kubernetes platform monitoring (optional: + Application observability) | Kubernetes platform monitoring + Full-Stack observability
--- | --- | ---
Auto discovery of container logs | Applicable | Applicable
Control ingest via annotations and labels | Applicable | Applicable
Log enrichment with Kubernetes metadata | Applicable | Applicable
Logs in context of traces | Applicable¹ | Applicable
Log enrichment with process context | - | Applicable
Report logs to different Dynatrace environments | - | Applicable
Dynatrace Operator for managing the rollout and lifecycle | Applicable | Applicable
Log module integrates with OneAgent on node | - | Applicable

¹ For pods with Application observability enabled

Prerequisites

  • Dynatrace version 1.310+

  • Dynatrace Operator version 1.4.2+

  • Dynatrace OneAgent version 1.309+

  • The Collect all container logs feature flag must be enabled

  • See supported Kubernetes/OpenShift platform versions and distributions.

    The OneAgent Log Module is not yet supported on GKE Autopilot clusters.

  • The OneAgent Log module reads logs from containerd and CRI-O containers. Other container runtimes aren't supported.

Auto-discovery of Kubernetes container logs

The OneAgent Log module automatically discovers logs written to the stdout/stderr streams by containerized applications running in pods. Under the hood, these log streams are stored as files on the Kubernetes nodes, where the OneAgent Log module picks them up and streams them to Dynatrace. The log source attribute for these logs in Dynatrace is set to Container Output. The log.iostream attribute identifies the stream the log entries were written to, for example, stdout or stderr.

The OneAgent Log module does not discover logs written to the container filesystem (as opposed to stdout/stderr). In this case, you can use a log shipper to read the logs from the container filesystem and write them to stdout/stderr so that the OneAgent Log module can pick them up.
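If the application can only write to a log file, one common Kubernetes pattern is a streaming sidecar: a second container in the same pod tails the file from a shared volume and writes it to its own stdout, where the Log module then discovers it as regular container output. The following manifest is a minimal sketch of that pattern (shown in JSON, which kubectl also accepts); the container names, image versions, and the /var/log/app/app.log path are hypothetical placeholders.

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "app-with-log-tailer"
  },
  "spec": {
    "containers": [
      {
        "name": "app",
        "image": "example.com/my-app:1.0",
        "volumeMounts": [
          { "name": "app-logs", "mountPath": "/var/log/app" }
        ]
      },
      {
        "name": "log-tailer",
        "image": "busybox:1.36",
        "args": ["/bin/sh", "-c", "tail -n+1 -F /var/log/app/app.log"],
        "volumeMounts": [
          { "name": "app-logs", "mountPath": "/var/log/app", "readOnly": true }
        ]
      }
    ],
    "volumes": [
      { "name": "app-logs", "emptyDir": {} }
    ]
  }
}

Keep in mind that this pattern stores the log data twice on the node (the application's file plus the captured stdout stream), so it trades some disk usage for compatibility with stdout-based log collection.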

Log enrichment with Kubernetes metadata

OneAgent Log module decorates the ingested logs with the following Kubernetes metadata: k8s.cluster.name, k8s.cluster.uid, k8s.namespace.name, k8s.workload.name, k8s.workload.kind, dt.entity.kubernetes_cluster, k8s.pod.name, k8s.pod.uid, k8s.container.name, dt.entity.kubernetes_node.

See metadata enrichment for Kubernetes to learn more.
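For illustration, a single container log line ingested by the Log module could carry attributes along these lines once enrichment is applied (the attribute names are the ones listed above plus the container-output attributes described earlier; all values are hypothetical placeholders):

{
  "content": "Order 1234 processed successfully",
  "log.source": "Container Output",
  "log.iostream": "stdout",
  "k8s.cluster.name": "demo-cluster",
  "k8s.cluster.uid": "c0ffee00-0000-4000-8000-000000000001",
  "k8s.namespace.name": "checkout",
  "k8s.workload.name": "checkout-service",
  "k8s.workload.kind": "deployment",
  "k8s.pod.name": "checkout-service-5d9f7c6b8-abcde",
  "k8s.pod.uid": "1f2e3d4c-0000-4000-8000-000000000002",
  "k8s.container.name": "checkout",
  "dt.entity.kubernetes_cluster": "KUBERNETES_CLUSTER-0000000000000001",
  "dt.entity.kubernetes_node": "KUBERNETES_NODE-0000000000000002"
}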

Control log ingest with Kubernetes metadata

You can control the ingestion of logs from Kubernetes with log ingest rules in Dynatrace. You can configure these rules at the Kubernetes cluster level to allow cluster-specific log ingestion. The rules use matchers for Kubernetes metadata and other common log entry attributes to determine which logs are ingested. Standard log processing features from OneAgent, including sensitive data masking, timestamp configuration, log boundary definition, and automatic enrichment of log records, are also available and enabled here.

Use the following recommended matching attributes when configuring log ingestion from Kubernetes.

Attribute | Description | Search dropdown logic
--- | --- | ---
Kubernetes namespace name | Matching is based on the name of the Kubernetes namespace. | Attributes visible in the last 90 days are listed.
Kubernetes container name | Matching is based on the name of the Kubernetes container. | Attributes visible in the last 90 days are listed.
Kubernetes deployment name | Matching is based on the name of the Kubernetes workload.¹ | Attributes visible in the last 90 days are listed.
Kubernetes pod annotation | Matching is based on any of the selected pod annotations. The correct format is key=value. | Can be entered manually.
Kubernetes pod label | Matching is based on any of the selected pod labels. The correct format is key=value. | Can be entered manually.
Kubernetes workload name | Matching is based on any of the selected workload names. | Can be entered manually.
Kubernetes workload kind | Matching is based on any of the selected workload kinds. | Can be entered manually.
Log content | Matching is based on the content of the log; wildcards are supported in the form of an asterisk. | Can be entered manually. No time limit.
Log record level² | Matching is based on the level of the log record. It supports the following values: alert, critical, debug, emergency, error, info, none, notice, severe, warn. | Can be entered manually. No time limit.
Log source origin | Matching is based on the detector that OneAgent used to discover the log file. | Can be entered manually. No time limit.
Process group | Matching is based on the process group ID. It also requires a OneAgent running on the node. | Entities visible in the last 3 days are listed.
Process technology | Matching is based on the technology name. It also requires a OneAgent running on the node. | Can be entered manually. No time limit.
DT entity container group ID | Matching is based on any of the selected container groups. It also requires a OneAgent running on the node. | Can be entered manually. No time limit.

¹ Subject to change in future versions of OneAgent, when separate matchers for each workload kind may become available. We recommend using the Kubernetes workload name and Kubernetes workload kind attributes instead.

² The log record level attribute, transformed by the OneAgent Log Module, is different from the log status attribute transformed by the Dynatrace server. See the Automatic log enrichment page to learn more.

Log ingest rule hierarchy

Log ingest rules can be defined at the environment scope, but also at more fine-grained levels such as the Kubernetes cluster. The matching hierarchy is as follows:

  1. Host configuration rules;
  2. Kubernetes cluster configuration rules;
  3. Host group configuration rules;
  4. Environment configuration rules.

Matching occurs in this predefined hierarchy, and rules are evaluated from top to bottom. If a rule higher in the hierarchy matches certain log data, the rules below it are skipped for that data; in other words, a configuration at a more specific scope overrides a configuration at a broader scope when both match the same log data. If no rule matches, the log file is not sent.

Consult Configuration scopes for the three scopes of the configuration hierarchy.
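For example, a broad allow-all rule at the environment (tenant) scope can be combined with a narrower exclusion defined on a host group. Because host group rules sit above environment rules in the hierarchy, kube-system logs from hosts in that group are excluded, while all other Kubernetes logs are still ingested. The sketch below uses the Settings API object format described later on this page; the host group ID is a placeholder.

[{
  "schemaId": "builtin:logmonitoring.log-storage-settings",
  "scope": "HOST_GROUP-0000000000000001",
  "value": {
    "enabled": true,
    "config-item-title": "Exclude kube-system logs for this host group",
    "send-to-storage": false,
    "matchers": [
      {
        "attribute": "k8s.namespace.name",
        "operator": "MATCHES",
        "values": [
          "kube-system"
        ]
      }
    ]
  }
},{
  "schemaId": "builtin:logmonitoring.log-storage-settings",
  "scope": "tenant",
  "value": {
    "enabled": true,
    "config-item-title": "Ingest all Kubernetes logs",
    "send-to-storage": true,
    "matchers": [
      {
        "attribute": "k8s.namespace.name",
        "operator": "MATCHES",
        "values": [
          "*"
        ]
      }
    ]
  }
}]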

Use cases

Explore the following use cases for log ingestion from Kubernetes environments using Dynatrace. By configuring log ingestion with different matchers, you can control which logs are captured in the system. The use cases below offer guidance on configuring Dynatrace to capture logs based on your specific monitoring needs, whether from a particular namespace, a specific container, or other criteria.

For detailed instructions on how to configure log ingestion, see Log ingest rules.

Ingest all logs from a specific namespace

  1. Go to Settings and select Log Monitoring > Log ingest rules.
  2. Select Add rule and provide the name for your configuration in the Rule name field.
    Make sure that the Include in storage button is turned on, so logs matching this configuration will be stored in Dynatrace.
  3. Select Add condition.
  4. From the Matcher attribute dropdown, select Kubernetes namespace name.
  5. Select the namespace from the dropdown inside the Value field, and select Add matcher.
  6. Select Save changes.

You can now analyze the logs in the log viewer or notebooks after filtering by the proper namespace. You can also find the logs in context in the Kubernetes application by selecting the Logs tab.

Ingest logs from a specific namespace and container

  1. Go to Settings and select Log Monitoring > Log ingest rules.
  2. Select Add rule and provide the name for your configuration in the Rule name field.
    Make sure that the Include in storage button is turned on, so logs matching this configuration will be stored in Dynatrace.
  3. Select Add condition.
  4. From the Matcher attribute dropdown, select Kubernetes namespace name.
  5. Select the namespace from the dropdown inside the Value field, and select Add matcher.
  6. Add a new matcher; this time, select Kubernetes container name and input the container name in the Value field. You can add multiple container names in this configuration step.
  7. Select Save changes.

You can now analyze the logs in the log viewer or notebooks after filtering by the proper namespace and container. You can also find the logs in context in the Kubernetes application by selecting the Logs tab.

Ingest all Kubernetes logs excluding specific namespaces

  1. Go to Settings and select Log Monitoring > Log ingest rules.
  2. Select Add rule and provide the name for your configuration in the Rule name field.
    Make sure that the Include in storage button is turned on, so logs matching this configuration will be stored in Dynatrace.
  3. Select Add condition.
  4. From the Matcher attribute dropdown, select Kubernetes namespace name.
  5. Insert an asterisk (*) in the Value field as a placeholder for all available namespaces of the cluster.
  6. Select Add matcher.
  7. Select Save changes.
  8. Back in the Log ingest rules screen, add one more rule, and select the Exclude from storage option.
  9. Add a condition with the Kubernetes namespace name matcher attribute and, in the Value field, add the namespaces that you want to exclude when ingesting Kubernetes logs.
  10. Select Add matcher.
  11. Select Save changes.

Ingest error logs from a given Kubernetes cluster and namespace

  1. Go to the Kubernetes application and select Clusters.
  2. Select the cluster that you'd like to configure.
  3. Go to Connection settings > Log Monitoring > Log ingest rules.
  4. Select Add rule and provide the name for your configuration in the Rule name field.
    Make sure that the Include in storage button is turned on, so logs matching this configuration will be stored in Dynatrace.
  5. Select Add condition.
  6. From the Matcher attribute dropdown, select Kubernetes namespace name.
  7. Select one or multiple namespaces from the dropdown inside the Value field. You can input an asterisk (*) as a placeholder for all available namespaces of the cluster.
  8. Select Add matcher.
  9. Add one more matcher, and set the Matcher attribute as Log record level.
  10. From the Value field dropdown, select Error.
  11. Select Add matcher.
  12. Select Save changes.

On the Log ingest rules screen, arrange the configured rules so that the rule excluding specific namespaces is at the top and the rule including all namespaces is below it; otherwise, the broader including rule would match the excluded namespaces first.

REST API

You can use the Settings API to manage your log ingest rules:

  • View schema;
  • List stored configuration objects;
  • View single configuration object;
  • Create new, edit, or remove existing configuration object.

To check the current schema version for log ingest rules, list all available schemas and look for the builtin:logmonitoring.log-storage-settings schema identifier.
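An abbreviated response from listing schemas might look similar to the following; the exact fields and version numbers depend on your environment, and the values shown here are only illustrative placeholders.

{
  "items": [
    {
      "schemaId": "builtin:logmonitoring.log-storage-settings",
      "displayName": "Log ingest rules",
      "latestSchemaVersion": "1.0.14"
    }
  ],
  "totalCount": 1
}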

Log ingest rule objects can be configured for the following scopes:

  • tenant – configuration object affects all hosts in a given environment.
  • host_group – configuration object affects all hosts assigned to a given host group.
  • host – configuration object affects only the given host.

To create a log ingest rule using the API:

  1. Create an access token with the Write settings (settings.write) and Read settings (settings.read) scopes.

  2. Use the GET a schema endpoint to learn the JSON format required to post your configuration. The log ingest rules schema identifier (schemaId) is builtin:logmonitoring.log-storage-settings. Here is an example JSON payload with the log ingest rules:

    {
      "items": [
        {
          "objectId": "vu9U3hXa3q0AAAABACpidWlsdGluOmxvZ21vbml0b3JpbmcubG9nLXN0b3JhZ2Utc2V0dGluZ3MABEhPU1QAEEFEMDVFRDZGQUUxNjQ2MjMAJDZkZGU3YzY5LTMzZjEtMzNiZC05ZTAwLWZlNDFmMjUxNzUzY77vVN4V2t6t",
          "value": {
            "enabled": true,
            "config-item-title": "Send kube-system logs",
            "send-to-storage": true,
            "matchers": [
              {
                "attribute": "k8s.container.name",
                "operator": "MATCHES",
                "values": [
                  "kubedns",
                  "kube-proxy"
                ]
              },
              {
                "attribute": "k8s.namespace.name",
                "operator": "MATCHES",
                "values": [
                  "kube-system"
                ]
              }
            ]
          }
        }
      ],
      "totalCount": 1,
      "pageSize": 100
    }

Examples

The examples that follow show the results of various combinations of rules and matchers.

Example 1: Ingest all logs from a specific namespace

This task requires setting one rule with one matcher.

[{
  "schemaId": "builtin:logmonitoring.log-storage-settings",
  "scope": "tenant",
  "value": {
    "enabled": true,
    "config-item-title": "All logs from kube-system namespace",
    "send-to-storage": true,
    "matchers": [
      {
        "attribute": "k8s.namespace.name",
        "operator": "MATCHES",
        "values": [
          "kube-system"
        ]
      }
    ]
  }
}]

Example 2: Send logs from a specific namespace and containers with content containing 'ERROR'

This task requires setting one rule with three matchers.

[{
  "schemaId": "builtin:logmonitoring.log-storage-settings",
  "scope": "tenant",
  "value": {
    "enabled": true,
    "config-item-title": "Error logs from kube-proxy and kube-dns containers",
    "send-to-storage": true,
    "matchers": [
      {
        "attribute": "k8s.namespace.name",
        "operator": "MATCHES",
        "values": [
          "kube-system"
        ]
      },
      {
        "attribute": "k8s.container.name",
        "operator": "MATCHES",
        "values": [
          "kubedns",
          "kube-proxy"
        ]
      },
      {
        "attribute": "log.content",
        "operator": "MATCHES",
        "values": [
          "*ERROR*"
        ]
      }
    ]
  }
}]

Example 3: Ingest all Kubernetes logs excluding specific namespaces on a specific host group scope

This task requires setting two rules.

[{
  "schemaId": "builtin:logmonitoring.log-storage-settings",
  "scope": "HOST_GROUP-1D91E46493049D07",
  "value": {
    "enabled": true,
    "config-item-title": "Exclude logs from kube-system namespace",
    "send-to-storage": false,
    "matchers": [
      {
        "attribute": "k8s.namespace.name",
        "operator": "MATCHES",
        "values": [
          "kube-system"
        ]
      }
    ]
  }
},{
  "schemaId": "builtin:logmonitoring.log-storage-settings",
  "scope": "HOST_GROUP-1D91E46493049D07",
  "value": {
    "enabled": true,
    "config-item-title": "All Kubernetes logs",
    "send-to-storage": true,
    "matchers": [
      {
        "attribute": "k8s.namespace.name",
        "operator": "MATCHES",
        "values": [
          "*"
        ]
      }
    ]
  }
}]

Frequently asked questions

The requirements for autodiscovery and ingestion of Kubernetes logs are the following:

  • The containerd or CRI-O container runtime is used;
  • The process running in the container is an important process;
  • Logs are written to the container's stdout/stderr streams;
  • The log file on the Kubernetes node exists for a minimum of one minute after container execution is finished.

No, the OneAgent Log Module doesn't offer such functionality yet, although it is planned for future releases.

For more ingest-related FAQs, consult the Log ingest rules page.