Monitor hosts that send OpenTelemetry data to Dynatrace

  • Latest Dynatrace
  • How-to guide
  • 2-min read

OpenTelemetry Host Monitoring is a Dynatrace feature that transforms raw telemetry data from OpenTelemetry Collectors into actionable insights. Rather than simply ingesting metrics, logs, and traces, Dynatrace automatically builds meaningful context around your infrastructure. It creates host and process entities, establishes topology relationships, and presents data through purpose-built analysis screens.

With the extension, you can:

  • Explore your OpenTelemetry-monitored hosts and processes directly within Infrastructure & Operations. There, you'll find dedicated views for your otel:host and otel:process entities, complete with metric visualizations, related logs, and topology.
  • Use auto-generated entities (based on extracted metadata) to correlate metrics, logs, and spans and provide unified context across your monitoring environment.
  • Set up alerts to notify you of condition changes on the host, like load level increases.

This use case and its reference configuration are designed primarily for VMs and bare-metal hosts with a Linux OS.

  • If you want to run host monitoring on Kubernetes nodes, see Host monitoring on Kubernetes nodes for deployment requirements and limitations.
  • If you want to run host monitoring on Windows OS or macOS, remove all references to journald from the pipeline; journald is available on Linux OS only.

Prerequisites

This use case assumes that you have:

Reference configuration

A reference configuration is available in the Dynatrace OTel Collector's GitHub repository; see host-metrics.yaml. In Infrastructure & Operations, the ready-made visualizations are optimized for this configuration.

You can use this configuration as-is, or modify it to meet your specific needs. However, if you make significant changes to the configuration, the visualizations may not work as intended. For entity extraction to work, you need to keep the existing host and process attributes.

Components

In our configuration, we set up the following components specific to this extension.

Receivers

Under receivers, we specify the following receivers:

hostmetrics

The hostmetrics receiver collects host-level metrics. It is configured with three collection intervals: 1 minute, 5 minutes, and 1 hour.

  • Use short intervals for the most important metrics to ensure that Dynatrace provides fast alerts for important changes.
  • Send non-critical metrics less frequently to help control consumption and therefore costs.
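One common way to express multiple collection intervals is to define one named hostmetrics receiver instance per interval. The sketch below illustrates that pattern; the assignment of scrapers to intervals is an illustrative assumption, not the exact reference configuration:

```yaml
receivers:
  # Critical metrics on a short interval for fast alerting
  hostmetrics/1m:
    collection_interval: 1m
    scrapers:
      cpu:
      memory:
  # Less critical metrics on a longer interval to control consumption
  hostmetrics/5m:
    collection_interval: 5m
    scrapers:
      filesystem:
      network:
  # Slow-moving metrics once per hour
  hostmetrics/1h:
    collection_interval: 1h
    scrapers:
      system:
```

Each named receiver instance must also be referenced in a metrics pipeline under service.pipelines for its data to be exported.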

journald

The journald receiver collects systemd journal logs from the host and ingests them into the logs pipeline alongside your metrics. It is configured to read from /var/log/journal (the default persistent journal path on Linux hosts) and applies move operators to rename journal fields to OpenTelemetry semantic conventions.

  • body._PID is renamed to body.pid
  • body._EXE is renamed to attributes["process.executable.name"]
  • body.MESSAGE is renamed to body.message
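Putting the journal directory and the three move operators together, the receiver section might look like the following sketch:

```yaml
receivers:
  journald:
    # Default persistent journal path on Linux hosts
    directory: /var/log/journal
    operators:
      # Rename journal fields to OpenTelemetry semantic conventions
      - type: move
        from: body._PID
        to: body.pid
      - type: move
        from: body._EXE
        to: attributes["process.executable.name"]
      - type: move
        from: body.MESSAGE
        to: body.message
```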

This ensures that host logs are linked to the same process entities as the hostmetrics data, enabling correlation between metrics and logs in Dynatrace.

The journald receiver is supported on Linux OS only, and requires the journalctl binary on the host. The Collector process must have permission to read the systemd journal.

On Linux hosts, add the user running the Collector to the systemd-journal group.

For full details, see Use journald to ingest systemd journal logs with the OpenTelemetry Collector.

Processors

Under processors, we specify the following processors:

  • resourcedetection, which detects resource information from the host in a format that conforms to the OpenTelemetry resource semantic conventions, and appends it to (or overrides) the resource values in the telemetry data.
  • filter, which cleans up unnecessary metrics dimensions.
  • transform, which optimizes the visualizations in Infrastructure & Operations.
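A minimal sketch of what these three processors can look like. The concrete filter condition and transform statement here are illustrative assumptions, not the reference configuration:

```yaml
processors:
  # Detect host resource attributes (host.name, os.type, ...) per semantic conventions
  resourcedetection:
    detectors: [system]
  # Drop datapoints carrying dimensions you don't need (condition is illustrative)
  filter:
    metrics:
      datapoint:
        - 'metric.name == "system.cpu.time" and attributes["state"] == "wait"'
  # Reshape attributes for the visualizations (statement is illustrative)
  transform:
    metric_statements:
      - context: datapoint
        statements:
          - delete_key(attributes, "cpu")
```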

Exporters

Under exporters, we specify the otlphttp exporter and configure it with our Dynatrace API URL and the required authentication token.

For this purpose, we set two environment variables and reference them in the configuration values for endpoint and Authorization.
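A sketch of the exporter block using environment-variable substitution. The variable names DT_ENDPOINT and DT_API_TOKEN are placeholders for whatever names you choose:

```yaml
exporters:
  otlphttp:
    # e.g. https://<your-environment-id>.live.dynatrace.com/api/v2/otlp
    endpoint: ${env:DT_ENDPOINT}
    headers:
      # Dynatrace API tokens use the "Api-Token" authorization scheme
      Authorization: "Api-Token ${env:DT_API_TOKEN}"
```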

How-to

Monitor your hosts

Once you've set up your Collectors and the Host Monitoring extension, use Infrastructure & Operations to monitor your OpenTelemetry ingest. Go to Infrastructure & Operations > Technologies and select the OpenTelemetry Host Monitoring extension.

Topology

This extension automatically generates topology for infrastructure monitored via the OpenTelemetry Collector. Specifically, it creates the following entity types based on metadata extracted from metrics, logs, and traces:

Entity type              Entity ID
OpenTelemetry Host       dt.entity.otel:host
OpenTelemetry Process    dt.entity.otel:process

These entities enable Dynatrace to correlate your metrics, logs, and spans and provide unified context across your monitored environment.

Enrich application telemetry

If you send your application telemetry to your local host OpenTelemetry Collector, it will automatically enrich the data with the required host attributes so that the signals are correctly attached to the OpenTelemetry host entity.

To enrich application telemetry with the corresponding process entity, all signals (metrics, logs, and spans) need to have the process.executable.name resource attribute. For logs and spans to have this attribute, you need to initialize your OTel SDK with the process resource detector.

If this is not implemented for your technology's OTel SDK, you can always set the process.executable.name attribute through the OTEL_RESOURCE_ATTRIBUTES environment variable.
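For example, in a Kubernetes pod spec the environment variable can be set as follows (the value my-app is a placeholder for your executable name; on a plain host you would export the same variable in the service unit or shell instead):

```yaml
env:
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "process.executable.name=my-app"
```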

Set up alerts

The OpenTelemetry Host Monitoring extension provides alert templates that you can use to create alerts based on the imported data. For example, you can use alerts to get notifications when a metric crosses a fixed value. Alerts can learn from historical data to detect unusual behavior, and account for recurring patterns such as daily or weekly cycles.

Dynatrace provides alerts with pre-configured thresholds that you can use as-is or customize to meet your requirements.

To use these templates and set up alerts:

  1. Go to Extensions > OpenTelemetry Host Monitoring > Extension content.
  2. Find any alert template that you're interested in and select New Alert.
  3. Configure the alert according to the available options.

For more information, see Introducing OpenTelemetry collector self-monitoring dashboards.

Host monitoring on Kubernetes nodes

The reference configuration and this use case are optimized for VMs and bare-metal hosts. You can run OTel host monitoring on Kubernetes nodes, but there are additional deployment requirements and important caveats to consider.

Deployment

To collect host-level metrics from every node in your cluster, deploy the Collector as a DaemonSet. This ensures one Collector pod runs on each node and reports that node's metrics.

The hostmetrics receiver works without any additional configuration on Kubernetes. The same receiver configuration you use on VMs applies to containerized deployments.
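A minimal DaemonSet skeleton for this deployment model. The image, names, and config path are placeholders; your Collector configuration is assumed to be mounted or baked into the image:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-collector
spec:
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: otel-collector
          # Placeholder image; use your Collector distribution of choice
          image: otel/opentelemetry-collector-contrib:latest
          args: ["--config=/etc/otelcol/config.yaml"]
```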

journald on Kubernetes

To collect journald logs on Kubernetes nodes, the Collector must run as root (runAsUser: 0) because container isolation prevents group-based journal access. You also need to mount the journal directory from the host and adjust the directory setting to the mounted path.

On Kubernetes, the in-memory journal path is typically /run/log/journal rather than the persistent /var/log/journal used on VMs. See Use journald to ingest systemd journal logs with the OpenTelemetry Collector for the full Kubernetes deployment configuration, including the required security context and host volume mounts.
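The journald-specific additions to the pod spec might look like the following sketch, per the requirements above (verify the exact security context and mounts against the linked guide):

```yaml
spec:
  securityContext:
    runAsUser: 0            # root is required for journal access in a container
  containers:
    - name: otel-collector
      volumeMounts:
        - name: journal
          mountPath: /run/log/journal
          readOnly: true
  volumes:
    - name: journal
      hostPath:
        path: /run/log/journal   # in-memory journal path typical on Kubernetes nodes
```

The journald receiver's directory setting must then point to the mounted path, /run/log/journal.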

Metric overlap with Kubernetes monitoring

If you run both OTel host monitoring and Kubernetes cluster monitoring on the same nodes, be aware that some metrics overlap: the hostmetrics and kubeletstats receivers report the same underlying measurements under different metric names, each following its own semantic conventions, so Dynatrace ingests them as two separate metric keys.

The following table shows common overlapping metrics:

hostmetrics receiver     kubeletstats receiver      What they measure
system.cpu.*             k8s.node.cpu.*             Node CPU usage
system.memory.*          k8s.node.memory.*          Node memory usage
system.filesystem.*      k8s.node.filesystem.*      Node filesystem usage
system.network.*         k8s.node.network.*         Node network I/O

This overlap occurs because the Kubernetes monitoring use case uses the kubeletstats receiver, which reports node-level resource metrics representing the same underlying data as the hostmetrics receiver.

To avoid unnecessary duplication on Kubernetes, use only Kubernetes monitoring or only OTel host monitoring, if possible:

  • Use Kubernetes monitoring only if you don't require process-level detail and host entity topology. Kubernetes cluster monitoring provides node-level metrics through the kubeletstats receiver. Adding hostmetrics on top duplicates the node-level resource metrics.
  • Use host monitoring only if you don't require Kubernetes-specific object metrics such as pods and deployments. OTel host monitoring provides host and process entities with topology in Infrastructure & Operations.
  • If you require both use cases, use the filter processor to drop overlapping node-level metrics from one of the two pipelines. For example, filter out system.cpu.*, system.memory.*, system.filesystem.*, and system.network.* from the host monitoring pipeline if the Kubernetes monitoring pipeline already covers them.
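One way to drop the overlapping node-level metrics from the host monitoring pipeline is a dedicated filter processor instance; the regex below is an illustrative sketch:

```yaml
processors:
  filter/drop-node-overlap:
    metrics:
      metric:
        # Drop metrics already covered by the kubeletstats receiver
        - 'IsMatch(metric.name, "^system\\.(cpu|memory|filesystem|network)\\.")'
```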

Limitations

  • The metric system.processes.created is only available on BSD and Linux operating systems.
  • The metric process.disk.io requires running the Collector with privileged access. Without it, the metric is not captured.
  • The journald receiver is supported on Linux OS only. It is not available on Windows or macOS. If your host has Windows or macOS, remove all references to journald from the pipeline.