This page describes system requirements for the Dynatrace OpenTelemetry Collector Distribution for different use cases.
The numbers below were gathered on an Azure Dadsv5-series virtual machine (AMD EPYC 7763v Milan CPU) with 4 vCPUs and 16 GB RAM.
The requirements for the Dynatrace Collector are based on a load scenario with the following numbers per second:
The recommended resources for this combined scenario are:
If you need additional data processing (for example, filter or transform processors), system requirements will increase.
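As an illustration of such additional processing, the sketch below adds a filter processor that drops log records below INFO severity. This is a hypothetical configuration using the contrib filter processor's OTTL syntax; the processor name (`filter/drop-debug`), the severity threshold, and the pipeline components are assumptions for illustration, not settings from the benchmarks on this page.

```yaml
processors:
  # Hypothetical filter: drop log records below INFO severity.
  # Evaluating such conditions on every record costs additional CPU,
  # which is why extra processors raise the system requirements.
  filter/drop-debug:
    error_mode: ignore
    logs:
      log_record:
        - 'severity_number < SEVERITY_NUMBER_INFO'

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [filter/drop-debug]
      exporters: [otlphttp]
```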
See our separate page for scaling and infrastructure architecture considerations.
The Dynatrace Collector was also tested under heavier load than the combined scenario above, but only with a single signal type at a time. The performance data below is based on the following base data sizes:
The following table shows throughput scenarios for the data sizes above, with varying load levels and protocols.
Scenario (signals per second) | CPU cores | RAM (MiB) |
---|---|---|
OTLP-HTTP 10k traces | 0.25 | 100 |
OTLP-HTTP 100k traces | 1.5 | 120 |
OTLP-HTTP 10k metrics | 0.25 | 110 |
OTLP-HTTP 100k metrics | 1 | 100 |
Syslog 10k logs 1 per batch | 0.2 | 100 |
Syslog 10k logs 100 per batch | 0.2 | 100 |
Syslog 70k logs 1 per batch | 1 | 100 |
Syslog 70k logs 100 per batch | 0.5 | 110 |
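The OTLP-HTTP and syslog scenarios above correspond to receiver configurations along the lines of the following sketch. The endpoints, ports, and exporter URL are illustrative placeholders, not settings taken from the benchmark.

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318      # default OTLP/HTTP port
  syslog:
    protocol: rfc5424
    tcp:
      listen_address: "0.0.0.0:54526"

processors:
  # Groups telemetry before export; senders batching their syslog
  # messages (100 per batch vs. 1) also lowers Collector CPU usage,
  # as the 70k-logs rows above show.
  batch: {}

exporters:
  otlphttp:
    endpoint: https://example.local/api/v2/otlp   # placeholder endpoint

service:
  pipelines:
    logs:
      receivers: [syslog]
      processors: [batch]
      exporters: [otlphttp]
```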
Additional metrics-based load scenarios were run using Prometheus scraping with the following base settings:
The following table shows the results for the different scraping scenarios.
Scenario | CPU cores | RAM (MiB) |
---|---|---|
1 endpoint (10k data points each) | 0.5 | 300 |
1 endpoint (1k data points each) | 0.1 | 140 |
5 endpoints (1k data points each) | 1.5 | 250 |
10 endpoints (1k data points each) | 2 | 500 |
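Prometheus scraping as in the scenarios above is configured through the Collector's prometheus receiver, which embeds standard Prometheus `scrape_configs`. The job name, interval, and targets below are illustrative assumptions, not the benchmark settings.

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: example-app      # hypothetical job name
          scrape_interval: 15s       # illustrative interval
          static_configs:
            # Each additional scrape endpoint adds CPU and RAM load,
            # as the table above shows.
            - targets:
                - localhost:8080

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlphttp]
```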