OpenTelemetry Host Monitoring is a Dynatrace feature that transforms raw telemetry data from OpenTelemetry Collectors into actionable insights. Rather than simply ingesting metrics, logs, and traces, Dynatrace automatically builds meaningful context around your infrastructure. It creates host and process entities, establishes topology relationships, and presents data through purpose-built analysis screens.
With the extension, you can monitor your hosts and processes in Infrastructure & Operations. There, you'll find dedicated views for your otel:host and otel:process entities, complete with metric visualizations, related logs, and topology.

This use case and its reference configuration are designed primarily for VMs and bare-metal hosts with a Linux OS.
If your hosts run a different OS, remove journald from the pipeline; journald is only available for Linux.

This use case assumes that you have a Collector configuration that includes the hostmetrics and journald receivers, and the resourcedetection, filter, and transform processors.
A reference configuration is available in the Dynatrace OTel Collector GitHub repository; see host-metrics.yaml.
In Infrastructure & Operations, the ready-made visualizations are optimized for this configuration.
You can use this configuration as-is, or modify it to meet your specific needs. However, if you make significant changes to the configuration, the visualizations may not work as intended. For entity extraction to work, you need to keep the existing host and process attributes.
The reference configuration includes the following components specific to this extension.
Under receivers, we specify the following receivers:
The hostmetrics receiver collects host-level metrics.
It is configured with three collection intervals: 1 minute, 5 minutes, and 1 hour.
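Multiple collection intervals are typically modeled as separate named instances of the hostmetrics receiver. The following is an illustrative sketch; the exact split of scrapers across intervals is an assumption, not the reference configuration itself:

```yaml
receivers:
  # Frequently changing metrics every minute
  hostmetrics:
    collection_interval: 1m
    scrapers:
      cpu:
      memory:
      load:
  # Slower-moving metrics every 5 minutes
  hostmetrics/5m:
    collection_interval: 5m
    scrapers:
      disk:
      filesystem:
      network:
  # Rarely changing metrics hourly
  hostmetrics/1h:
    collection_interval: 1h
    scrapers:
      system:
```

Each named instance (`hostmetrics/5m`, `hostmetrics/1h`) must also be listed in the metrics pipeline's `receivers` section.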
The journald receiver collects systemd journal logs from the host and ingests them into the logs pipeline alongside your metrics.
It is configured to read from /var/log/journal (the default persistent journal path on Linux hosts) and applies move operators to rename journal fields to OpenTelemetry semantic conventions.
- body._PID is renamed to body.pid
- body._EXE is renamed to attributes["process.executable.name"]
- body.MESSAGE is renamed to body.message

This ensures that host logs are linked to the same process entities as the hostmetrics data, enabling correlation between metrics and logs in Dynatrace.
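A sketch of this receiver configuration, assuming the default persistent journal path:

```yaml
receivers:
  journald:
    directory: /var/log/journal
    operators:
      # Rename journal fields to OpenTelemetry semantic conventions
      - type: move
        from: body._PID
        to: body.pid
      - type: move
        from: body._EXE
        to: attributes["process.executable.name"]
      - type: move
        from: body.MESSAGE
        to: body.message
```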
The journald receiver is supported on Linux OS only, and requires the journalctl binary on the host.
The Collector process must have permission to read the systemd journal.
On Linux hosts, add the user running the Collector to the systemd-journal group.
For full details, see Use journald to ingest systemd journal logs with the OpenTelemetry Collector.
Under processors, we specify the following processors:
- resourcedetection, which detects resource information from the host in a format that conforms to the OpenTelemetry resource semantic conventions, and appends or overrides the resource values in the telemetry data.
- filter, which cleans up unnecessary metric dimensions.
- transform, which optimizes the visualizations in Infrastructure & Operations.

Under exporters, we specify the otlphttp exporter and configure it with our Dynatrace API URL and the required authentication token.
For this purpose, we set the following two environment variables and reference them in the configuration values for endpoint and Authorization:

- DT_ENDPOINT contains the base URL of the Dynatrace API endpoint (for example, https://{your-environment-id}.live.dynatrace.com/api/v2/otlp).
- DT_API_TOKEN contains the API token.

Once you've set up your Collectors and the Host Monitoring extension, use Infrastructure & Operations to monitor your OpenTelemetry ingest.
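The exporter setup described above can be sketched as follows, assuming the otlphttp exporter and the two environment variables:

```yaml
exporters:
  otlphttp:
    endpoint: ${env:DT_ENDPOINT}
    headers:
      # Dynatrace API tokens are passed as "Api-Token <token>"
      Authorization: "Api-Token ${env:DT_API_TOKEN}"
```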
Go to Infrastructure & Operations > Technologies and select the OpenTelemetry Host Monitoring extension.
This extension automatically generates topology for infrastructure monitored via the OpenTelemetry Collector. Specifically, it creates the following entity types based on metadata extracted from metrics, logs, and traces:
| Entity type | Entity ID |
|---|---|
| OpenTelemetry Host | dt.entity.otel:host |
| OpenTelemetry Process | dt.entity.otel:process |
These entities enable Dynatrace to correlate your metrics, logs, and spans and provide unified context across your monitored environment.
If you send your application telemetry to your local host OpenTelemetry Collector, it will automatically enrich the data with the required host attributes so that the signals are correctly attached to the OpenTelemetry host entity.
To enrich application telemetry with the corresponding process entity, all signals (metrics, logs, and spans) need to have the process.executable.name resource attribute.
For logs and spans to have this attribute, you need to initialize your OTel SDK with the process resource detector.
If this is not implemented for your technology's OTel SDK, you can always set the process.executable.name attribute through the OTEL_RESOURCE_ATTRIBUTES environment variable.
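For example, in a Kubernetes pod spec you can set the variable on the application container. This is an illustrative fragment; myapp is a placeholder for your actual executable name:

```yaml
# Container spec fragment; "myapp" is a placeholder
env:
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "process.executable.name=myapp"
```

On a plain host, exporting the same variable in the shell that starts the application has the same effect.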
The OpenTelemetry Host Monitoring extension provides alert templates that you can use to create alerts based on the imported data. For example, you can use alerts to get notifications when a metric crosses a fixed value. Alerts can learn from historical data to detect unusual behavior, and account for recurring patterns such as daily or weekly cycles.
Dynatrace provides alerts with pre-configured thresholds that you can use as-is or customize to meet your requirements.
To use these templates and set up alerts, go to Extensions > OpenTelemetry Host Monitoring > Extension content.

For more information, see Introducing OpenTelemetry collector self-monitoring dashboards.
The reference configuration and this use case are optimized for VMs and bare-metal hosts. You can run OTel host monitoring on Kubernetes nodes, but there are additional deployment requirements and important caveats to consider.
To collect host-level metrics from every node in your cluster, deploy the Collector as a DaemonSet. This ensures one Collector pod runs on each node and reports that node's metrics.
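A minimal DaemonSet skeleton might look like the following sketch; the names and the image reference are assumptions, and the real deployment also needs the configuration, service account, and volumes from your environment:

```yaml
# Illustrative DaemonSet skeleton; names and image are assumptions
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-collector
spec:
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: otel-collector
          image: ghcr.io/dynatrace/dynatrace-otel-collector/dynatrace-otel-collector:latest
```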
The hostmetrics receiver works without any additional configuration on Kubernetes.
The same receiver configuration you use on VMs applies to containerized deployments.
To collect journald logs on Kubernetes nodes, the Collector must run as root (runAsUser: 0) because container isolation prevents group-based journal access.
You also need to mount the journal directory from the host and adjust the directory setting to the mounted path.
On Kubernetes, the in-memory journal path is typically /run/log/journal rather than the persistent /var/log/journal used on VMs.
See Use journald to ingest systemd journal logs with the OpenTelemetry Collector for the full Kubernetes deployment configuration, including the required security context and host volume mounts.
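The requirements above can be sketched as the following pod-spec fragment; the volume name is an assumption:

```yaml
# Illustrative pod-spec fragment for the Collector DaemonSet
spec:
  securityContext:
    runAsUser: 0              # root is required for journal access in containers
  containers:
    - name: otel-collector
      volumeMounts:
        - name: journal
          mountPath: /run/log/journal
          readOnly: true
  volumes:
    - name: journal
      hostPath:
        path: /run/log/journal
```

The journald receiver's directory setting must then point to the mounted path (/run/log/journal) instead of the VM default /var/log/journal.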
If you run both OTel host monitoring and Kubernetes cluster monitoring on the same nodes, be aware that some metrics overlap: the same measurements may be ingested as two separate metric keys. This is because they have different metric names that follow different semantic conventions, so Dynatrace ingests them as separate metric keys.
The following table shows common overlapping metrics:
| hostmetrics receiver | kubeletstats receiver | What they measure |
|---|---|---|
| system.cpu.* | k8s.node.cpu.* | Node CPU usage |
| system.memory.* | k8s.node.memory.* | Node memory usage |
| system.filesystem.* | k8s.node.filesystem.* | Node filesystem usage |
| system.network.* | k8s.node.network.* | Node network I/O |
This overlap occurs because the Kubernetes monitoring use case uses the kubeletstats receiver, which reports node-level resource metrics that represent the same underlying data as the hostmetrics receiver.
To avoid unnecessary duplication on Kubernetes, use only Kubernetes monitoring or only OTel host monitoring, if possible:

- If you use Kubernetes monitoring, node-level resource metrics are already collected by the kubeletstats receiver. Adding hostmetrics on top duplicates the node-level resource metrics.
- If you use OTel host monitoring, node metrics are visualized in Infrastructure & Operations.
- If you need both, use the filter processor to drop overlapping node-level metrics from one of the two pipelines. For example, filter out system.cpu.*, system.memory.*, system.filesystem.*, and system.network.* from the host monitoring pipeline if the Kubernetes monitoring pipeline already covers them.

Also note the following limitations:

- system.processes.created is only available on BSD and Linux operating systems.
- process.disk.io requires running the Collector with privileged access. Without it, the metric is not captured.
- The journald receiver is supported on Linux OS only; it is not available on Windows or macOS. If your host runs Windows or macOS, remove all references to journald from the pipeline.
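As a sketch, dropping the overlapping host metric families with the filter processor might look like this, assuming the Kubernetes monitoring pipeline already covers them; the processor instance name is an assumption:

```yaml
processors:
  filter/drop-node-overlap:
    metrics:
      metric:
        # Drop metric families also reported by kubeletstats
        - 'IsMatch(name, "^system\\.(cpu|memory|filesystem|network)\\..*")'
```

Add this processor to the host monitoring metrics pipeline only, so the Kubernetes pipeline remains unaffected.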