Monitor Red Hat Quarkus native applications with metrics and logs
Red Hat Quarkus is an open source Java framework optimized for GraalVM, designed to make Java a first-class citizen in the world of microservices. Quarkus belongs to the family of full-stack frameworks tailored for Kubernetes. It includes modern Java libraries and follows the latest Java standards.
GraalVM is designed to achieve high performance in the execution of applications written in Java and other JVM languages. It offers two approaches for compiling Java code to an executable:
- just-in-time (JIT) compilation
- ahead-of-time (AOT) compilation to a native image
AOT-compiled native images include only the Java code required at runtime, excluding everything else from the libraries and frameworks.
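For Maven-based Quarkus projects, a native executable is typically produced with commands along the following lines (a sketch based on the Quarkus native build tooling; the exact flags depend on your Quarkus version and GraalVM setup):

# Build a native executable with a locally installed and configured GraalVM
./mvnw package -Dnative

# Or delegate the native build to a container image if no local GraalVM is available
./mvnw package -Dnative -Dquarkus.native.container-build=true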
Native images are currently not supported by OneAgent; however, Dynatrace can still collect Micrometer metrics and logs from Quarkus native applications.
Learn how Dynatrace can monitor metrics and logs from a Quarkus application compiled as a native image.
Prerequisites
- Your GraalVM version is supported by Dynatrace.
- GraalVM is configured to build native images. For details, see the Building a native executable Quarkus guide.
- OneAgent or Dynatrace Operator is installed on the machine where the application runs. The required component depends on your deployment:
  - If your application runs on a virtual machine or bare metal, see the instructions for OneAgent.
  - If your application runs as a workload in Kubernetes or OpenShift, see the instructions for Dynatrace Operator.
Traces
Dynatrace can automatically trace JIT-compiled Quarkus applications executed on OpenJDK HotSpot JVM and GraalVM.
Trace AOT-compiled Quarkus applications
While OneAgent can only trace JIT-compiled applications, you can still export the default Quarkus tracing information using OpenTelemetry. To do so, use the Quarkus-specific configuration parameters to point the OTLP exporter at a Dynatrace ingest endpoint, for example the SaaS endpoint shown below, an ActiveGate, or a local OneAgent.
The following example shows how to configure application.properties to export to a Dynatrace SaaS endpoint. It specifies the API URL and the necessary percent-encoded Authorization header with the API token.
quarkus.application.name=myservice
quarkus.otel.exporter.otlp.traces.endpoint=https://{your-environment-id}.live.dynatrace.com/api/v2/otlp
quarkus.otel.exporter.otlp.traces.headers=authorization=Api-Token%20dt.....
quarkus.log.console.format=%d{HH:mm:ss} %-5p traceId=%X{traceId}, parentId=%X{parentId}, spanId=%X{spanId}, sampled=%X{sampled} [%c{2.}] (%t) %s%e%n
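If you route the data through an ActiveGate or a local OneAgent instead of sending it directly to the SaaS endpoint, only the endpoint (and, for OneAgent, the header) changes. The following sketch shows the typical endpoint shapes; verify the exact host, port, and path against the OTLP ingest documentation for your environment.

# Via an ActiveGate (host is a placeholder; the environment ID is part of the path)
quarkus.otel.exporter.otlp.traces.endpoint=https://{your-activegate-host}:9999/e/{your-environment-id}/api/v2/otlp
quarkus.otel.exporter.otlp.traces.headers=authorization=Api-Token%20dt.....

# Via a local OneAgent, which typically accepts OTLP on localhost without an API token
quarkus.otel.exporter.otlp.traces.endpoint=http://localhost:14499/otlp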
Metrics
Red Hat recommends that you obtain metrics from Quarkus via the quarkus-micrometer-registry-prometheus library.
To learn how to utilize Micrometer metrics in your Quarkus application, see the Micrometer metrics Quarkus guide.
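As a minimal illustration, the sketch below registers a custom counter through the injected MeterRegistry. The package, class, and metric names are hypothetical; it assumes a Quarkus 3 application (jakarta namespace) with a REST extension and the quarkus-micrometer-registry-prometheus extension on the classpath, so the counter appears on the Prometheus endpoint (default /q/metrics).

package org.acme; // hypothetical package

import io.micrometer.core.instrument.MeterRegistry;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/hello")
public class GreetingResource {

    private final MeterRegistry registry;

    GreetingResource(MeterRegistry registry) {
        // The Micrometer extension exposes a CDI-managed MeterRegistry
        this.registry = registry;
    }

    @GET
    public String hello() {
        // Incremented on every request; rendered on the Prometheus endpoint
        // as greeting_invocations_total
        registry.counter("greeting.invocations").increment();
        return "hello";
    }
}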
Dynatrace offers two approaches for obtaining Micrometer metrics from Prometheus: via API or via an extension.
Ingest Micrometer metrics via Dynatrace API
Use the Dynatrace API to ingest metrics obtained from the quarkus-micrometer-registry-prometheus library.
To learn more about the ingestion procedure, see Send Micrometer metrics to Dynatrace.
For natively built applications, be sure to follow the Directly in Micrometer approach.
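A minimal sketch of that approach is shown below, assuming the micrometer-registry-dynatrace dependency is on the classpath. The endpoint and token values are placeholders; when a local OneAgent is present, uri and apiToken can often be omitted entirely.

import io.micrometer.core.instrument.Clock;
import io.micrometer.dynatrace.DynatraceConfig;
import io.micrometer.dynatrace.DynatraceMeterRegistry;

public class DynatraceRegistryFactory {

    public static DynatraceMeterRegistry create() {
        DynatraceConfig config = new DynatraceConfig() {
            @Override
            public String uri() {
                // Metrics ingest endpoint (placeholder environment ID)
                return "https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest";
            }

            @Override
            public String apiToken() {
                return System.getenv("DT_API_TOKEN"); // hypothetical environment variable
            }

            @Override
            public String get(String key) {
                return null; // fall back to defaults for all other settings
            }
        };
        return new DynatraceMeterRegistry(config, Clock.SYSTEM);
    }
}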
Ingest Micrometer metrics via an extension
Use the Dynatrace Extension 2.0 Framework to ingest Micrometer metrics obtained from the Prometheus data source. This approach requires you to create a custom extension.
As a starting point, you can use the custom extension example below. It's tailored to the quarkus-micrometer-registry-prometheus library. Be sure to use the correct metrics endpoint in your configuration. The default endpoint is localhost:8080/q/metrics.
name: custom:com.dynatrace.extension.micrometer-quarkus
version: 1.0.0
minDynatraceVersion: "1.247"
author:
  name: Dynatrace

#dashboards:
#  - path: "dashboards/dashboard_exporter.json"

#alerts:
#  - path: "alerts/alert_socket_usage.json"

prometheus:
  - group: quarkus metrics
    interval:
      minutes: 1
    featureSet: all
    dimensions:
      - key: quarkus
        value: const:quarkus
    subgroups:
      # global counters
      - subgroup: quarkus global counter
        dimensions:
          - key: global_counters
            value: const:global_counters
        metrics:
          # HELP process_uptime_seconds The uptime of the Java virtual machine
          # TYPE process_uptime_seconds gauge
          - key: com.dynatrace.process.global.uptime.seconds
            value: metric:process_uptime_seconds
            type: gauge
            featureSet: global

          # HELP process_cpu_usage The "recent cpu usage" for the Java Virtual Machine process
          # TYPE process_cpu_usage gauge
          - key: com.dynatrace.process.global.cpu.usage
            value: metric:process_cpu_usage
            type: gauge
            featureSet: global

          # HELP system_cpu_usage The "recent cpu usage" of the system the application is running in
          # TYPE system_cpu_usage gauge
          - key: com.dynatrace.system.global.cpu.usage
            value: metric:system_cpu_usage
            type: gauge
            featureSet: global

          # HELP jvm_classes_unloaded_classes_total The total number of classes unloaded since the Java virtual machine has started execution
          # TYPE jvm_classes_unloaded_classes_total counter
          - key: com.dynatrace.jvm.classes.global.unloaded.total
            value: metric:jvm_classes_unloaded_classes_total
            type: count
            featureSet: global

          # HELP jvm_info_total JVM version info
          # TYPE jvm_info_total counter
          - key: com.dynatrace.jvm.global.info.total
            value: metric:jvm_info_total
            type: count
            featureSet: global

          # HELP http_server_connections_seconds_max
          # TYPE http_server_connections_seconds_max gauge
          - key: com.dynatrace.http.server.connections.seconds.global.max
            value: metric:http_server_connections_seconds_max
            type: gauge
            featureSet: global

          # HELP http_server_connections_seconds
          # TYPE http_server_connections_seconds summary
          - key: com.dynatrace.http.server.connections.seconds.active.global.count
            value: metric:http_server_connections_seconds_active_count
            type: count
            featureSet: global
          - key: com.dynatrace.http.server.connections.seconds.active.global.duration.summary
            value: metric:http_server_connections_seconds_duration_sum
            type: gauge
            featureSet: global

          # HELP process_files_max_files The maximum file descriptor count
          # TYPE process_files_max_files gauge
          - key: com.dynatrace.process.files.global.max
            value: metric:process_files_max_files
            type: gauge
            featureSet: global

          # HELP http_server_bytes_written_max
          # TYPE http_server_bytes_written_max gauge
          - key: com.dynatrace.http.server.bytes.written.global.max
            value: metric:http_server_bytes_written_max
            type: gauge
            featureSet: global

          # HELP http_server_bytes_written
          # TYPE http_server_bytes_written summary
          - key: com.dynatrace.http.server.bytes.written.global.count
            value: metric:http_server_bytes_written_count
            type: count
            featureSet: global
          - key: com.dynatrace.http.server.bytes.written.global.summary
            value: metric:http_server_bytes_written_sum
            type: gauge
            featureSet: global

          # HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time
          # TYPE system_load_average_1m gauge
          - key: com.dynatrace.system.load.average.global.1m
            value: metric:system_load_average_1m
            type: gauge
            featureSet: global

          # HELP jvm_gc_overhead_percent An approximation of the percent of CPU time used by GC activities over the last lookback period or since monitoring began, whichever is shorter, in the range [0..1]
          # TYPE jvm_gc_overhead_percent gauge
          - key: com.dynatrace.jvm.gc.overhead.global.percent
            value: metric:jvm_gc_overhead_percent
            type: gauge
            featureSet: global

          # HELP jvm_threads_daemon_threads The current number of live daemon threads
          # TYPE jvm_threads_daemon_threads gauge
          - key: com.dynatrace.jvm.threads.daemon.global.threads
            value: metric:jvm_threads_daemon_threads
            type: gauge
            featureSet: global

          # HELP jvm_threads_live_threads The current number of live threads including both daemon and non-daemon threads
          # TYPE jvm_threads_live_threads gauge
          - key: com.dynatrace.jvm.threads.live.global.threads
            value: metric:jvm_threads_live_threads
            type: gauge
            featureSet: global

          # HELP http_server_requests_seconds
          # TYPE http_server_requests_seconds summary
          - key: com.dynatrace.http.server.requests.seconds.global.count
            value: metric:http_server_requests_seconds_count
            type: count
            featureSet: global
          - key: com.dynatrace.http.server.requests.seconds.global.summary
            value: metric:http_server_requests_seconds_sum
            type: gauge
            featureSet: global

          # HELP http_server_requests_seconds_max
          # TYPE http_server_requests_seconds_max gauge
          - key: com.dynatrace.http.server.requests.seconds.max
            value: metric:http_server_requests_seconds_max
            type: gauge
            featureSet: global

          # HELP process_start_time_seconds Start time of the process since unix epoch.
          # TYPE process_start_time_seconds gauge
          - key: com.dynatrace.process.start.time.global.seconds
            value: metric:process_start_time_seconds
            type: gauge
            featureSet: global

          # HELP jvm_classes_loaded_classes The number of classes that are currently loaded in the Java virtual machine
          # TYPE jvm_classes_loaded_classes gauge
          - key: com.dynatrace.jvm.classes.loaded.global.max
            value: metric:jvm_classes_loaded_classes
            type: gauge
            featureSet: global

          # HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset
          # TYPE jvm_threads_peak_threads gauge
          - key: com.dynatrace.jvm.threads.peak.global.threads
            value: metric:jvm_threads_peak_threads
            type: gauge
            featureSet: global

          # HELP system_cpu_count The number of processors available to the Java virtual machine
          # TYPE system_cpu_count gauge
          - key: com.dynatrace.system.cpu.global.counter
            value: metric:system_cpu_count
            type: gauge
            featureSet: global

          # HELP process_files_open_files The open file descriptor count
          # TYPE process_files_open_files gauge
          - key: com.dynatrace.process.files.open.global.files
            value: metric:process_files_open_files
            type: gauge
            featureSet: global
Logs
Dynatrace offers various options for collecting logs from your applications and environments.
To learn how to set up logging in your Quarkus application, see the Configuring logging Quarkus guide.
For the procedure below, we assume your application writes logs to the /var/log/quarkus-app.log file.
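If your application does not write to a log file yet, file logging can be enabled in application.properties along the following lines (standard Quarkus logging properties; the path matches the assumption above):

quarkus.log.file.enable=true
quarkus.log.file.path=/var/log/quarkus-app.log
quarkus.log.file.format=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n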
- Start your Quarkus native application.
- In the Dynatrace menu, go to Hosts and select your host.
- Scroll down to the Process analysis section and, in the list of processes, select the process of your Quarkus native application.
- On the right side of the Process panel, select ... > Settings.
- In the process group settings, select Log monitoring > Add new log for monitoring.
- Enter the full path of your log file. Be sure to follow the log path requirements.
- Select Save changes.
- Include the added log files in your log storage.