Deep monitoring an application with Dynatrace OneAgent increases per-application memory demand compared to execution without OneAgent. In addition to the memory required to load the OneAgent code module binaries into the application process, memory is also used to maintain monitored application state, communication buffers, and similar structures.
Memory demand isn't a constant number or a fixed proportion of the application's memory requirements; it depends on the technology, monitoring configuration, application properties, and the executed load. See About memory demand variance below for further details.
As outlined above, monitoring memory demand depends on multiple factors. To keep resource planning straightforward, we recommend that you budget an additional 200MB of memory for each monitored application process. This budget is sufficient for the vast majority of applications; empirical observations show memory demand well below 100MB for most applications.
Monitoring memory demand refers to resident set size (RSS), or the equivalent quantity on non-Linux operating systems. RSS is the key metric used when applying memory limits to processes.
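As a minimal sketch of how RSS can be observed on Linux, the snippet below reads the VmRSS field from /proc/&lt;pid&gt;/status; the path and field name are Linux-specific, and non-Linux operating systems expose an equivalent metric through other APIs.

```python
# Minimal sketch: read the current resident set size (RSS) of a process
# from /proc on Linux. VmRSS is reported in kB.
def rss_kib(pid: int | None = None) -> int:
    path = f"/proc/{pid}/status" if pid else "/proc/self/status"
    with open(path) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # second field is the value in kB
    raise RuntimeError("VmRSS not found")

print(f"RSS: {rss_kib() / 1024:.1f} MiB")
```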
Kubernetes and other cloud platforms let you define memory limits for workloads. These limits apply (roughly speaking) to RSS, and workloads are automatically terminated once they exceed the defined limit. Because Dynatrace OneAgent code modules increase the memory demand of monitored applications, memory limits must be adjusted accordingly.
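The sketch below illustrates the adjustment, assuming a containerized Linux workload with cgroup v2: it reads the container's effective memory limit and adds the recommended 200MB headroom for OneAgent. The file path applies to cgroup v2 only; cgroup v1 exposes the limit elsewhere.

```python
# Minimal sketch, assuming cgroup v2: compute the memory limit you would
# configure after adding the recommended OneAgent headroom (see above).
ONEAGENT_HEADROOM_BYTES = 200 * 1024 * 1024  # recommended additional budget

def cgroup_memory_limit_bytes() -> int | None:
    # cgroup v2 exposes the limit in /sys/fs/cgroup/memory.max ("max" = unlimited)
    try:
        value = open("/sys/fs/cgroup/memory.max").read().strip()
    except FileNotFoundError:
        return None
    return None if value == "max" else int(value)

limit = cgroup_memory_limit_bytes()
if limit is not None:
    adjusted = limit + ONEAGENT_HEADROOM_BYTES
    print(f"current limit:  {limit / 2**20:.0f} MiB")
    print(f"adjusted limit: {adjusted / 2**20:.0f} MiB")
```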
OneAgent code module deep monitoring memory demand can't be expressed exactly as a constant number or as a proportion of the memory consumed by the application process. It's the sum of the memory required for basic OneAgent code module operation (for example, communication buffers) and the memory needed for dynamic monitoring data gathered by the code modules. The memory demand for dynamic monitoring data depends on configuration settings, the application's base technology, and the application itself.
Code-level visibility and hotspot analysis require recording function execution time and frequency. The number of functions in the application and how often they execute therefore define the number of data items and, ultimately, the memory footprint needed to measure function performance. The same applies to distributed trace information: the memory demand for gathering it depends on the number of concurrent requests processed by the application and the complexity (that is, the length of the PurePath) of the executions these requests trigger.
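To make the relationship concrete, the back-of-the-envelope sketch below multiplies concurrent requests by average PurePath length and a per-node cost. The bytes_per_node value is a hypothetical placeholder for illustration only, not a figure published by Dynatrace.

```python
# Illustrative estimate only: dynamic trace memory grows with the number of
# concurrent requests and the average PurePath length. bytes_per_node is a
# hypothetical assumption, not a documented Dynatrace value.
def estimated_trace_memory_mib(concurrent_requests: int,
                               avg_purepath_nodes: int,
                               bytes_per_node: int = 512) -> float:
    return concurrent_requests * avg_purepath_nodes * bytes_per_node / 2**20

# Example: 200 concurrent requests with PurePaths of roughly 100 nodes each
print(f"{estimated_trace_memory_mib(200, 100):.1f} MiB")
```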
Custom service monitoring is an example of configuration-dependent memory demand. Defining a custom service increases the base memory budget (instrumentation of the selected functions) and the dynamic memory budget (the additional distributed trace data collected for custom service calls). For .NET, instrumentation of additional assemblies for the custom service can significantly increase startup memory demand (see below).
OneAgent code modules are optimized to use memory efficiently and to free resources as soon as they're no longer needed, so that application execution is burdened as little as possible. As a result, memory demand may vary over the application's execution time.
Depending on the OneAgent code module, memory demand might peak at application startup. This is especially true for .NET. Preparing .NET assemblies for monitoring causes the memory footprint to spike, as assembly code temporarily resides twice in memory. Once the injection process is completed, the .NET runtime retains both the original assemblies and the instrumented versions of the application logic in memory. This is a known limitation of Microsoft .NET and can't be mitigated by Dynatrace OneAgent.