Ceph storage extension

  • Latest Dynatrace
  • Extension
  • Published Oct 27, 2025

Monitor usage of the Ceph storage system at both the client side and the host level.

Screenshots (1 of 2): Dashboard, UA Screen

Get started

Overview

Monitor usage of the Ceph storage system at both the client side and the host level.

Use the Ceph storage extension to:

  • Monitor usage and performance of your Ceph platform.
  • Always have live information about host resources and data flow.
  • Minimize the time needed to find the root cause of possible system failures.

Use cases

  • Monitor host resource usage and capacity levels.
  • Collect data about active and inactive Ceph object storage daemons (OSDs).
  • Observe system data flow in terms of write and read operations, both for the cluster as a whole and for individual OSDs.

Requirements

  • Ceph storage running in your environment.
  • The extension activated in your environment using the in-product Hub.

Details

Licensing and cost

The Ceph storage extension ingests custom metrics which, depending on your licensing model, consume either DDUs or metrics on your Dynatrace Platform Subscription.

Use the following formulas to estimate license consumption for this extension:

  • DDUs for all metrics enabled:
((3 * # of monitors)
+ (7 * # of clusters)
+ (13 * # of OSDs)
+ (9 * # of placement groups))
* 60 min * 24 hrs * 365 days * 0.001
  • Data points for all metrics enabled:
((3 * # of monitors)
+ (7 * # of clusters)
+ (13 * # of OSDs)
+ (9 * # of placement groups))
* 60 min * 24 hrs * 365 days
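
For example, here is a minimal Python sketch of the same estimate, using hypothetical cluster sizes (3 monitors, 1 cluster, 24 OSDs, 512 placement groups); substitute your own counts:

# Estimate annual license consumption for the Ceph storage extension.
# All counts below are hypothetical examples; replace them with the
# actual sizes of your Ceph deployment.
MONITORS = 3
CLUSTERS = 1
OSDS = 24
PLACEMENT_GROUPS = 512

metrics_per_minute = (3 * MONITORS) + (7 * CLUSTERS) + (13 * OSDS) + (9 * PLACEMENT_GROUPS)
data_points_per_year = metrics_per_minute * 60 * 24 * 365
ddus_per_year = data_points_per_year * 0.001  # 1,000 data points = 1 DDU

print(f"Data points per year: {data_points_per_year:,}")
print(f"DDUs per year: {ddus_per_year:,.0f}")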

Feature sets

When activating your extension using a monitoring configuration, you can limit monitoring to one of the feature sets. To work properly, the extension must collect at least one metric after activation.

In highly segmented networks, feature sets can reflect the segments of your environment. Then, when you create a monitoring configuration, you can select a feature set and a corresponding ActiveGate group that can connect to this particular segment.
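
For example, a monitoring configuration can be created through the Dynatrace Environment API v2 (POST /api/v2/extensions/{name}/monitoringConfigurations). The Python sketch below is illustrative only: the extension name, ActiveGate group, and feature-set names are placeholder assumptions to replace with your own values:

import requests

# Placeholder values -- replace with your environment URL, an API token
# with the extensions.write scope, and your own extension details.
ENV = "https://{your-environment-id}.live.dynatrace.com"
TOKEN = "dt0c01.EXAMPLE"
EXTENSION = "com.dynatrace.extension.ceph"  # hypothetical extension name

# One configuration scoped to the ActiveGate group that can reach this
# network segment, limited to two hypothetical feature sets.
config = [{
    "scope": "ag_group-storage-segment",
    "value": {
        "enabled": True,
        "description": "Ceph monitoring for the storage segment",
        "version": "1.0.0",
        "featureSets": ["cluster", "osd"],
    },
}]

resp = requests.post(
    f"{ENV}/api/v2/extensions/{EXTENSION}/monitoringConfigurations",
    headers={"Authorization": f"Api-Token {TOKEN}"},
    json=config,
)
resp.raise_for_status()
print(resp.json())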

All metrics that aren't assigned to any feature set are considered default and are always reported.

A metric inherits the feature set of its subgroup, which in turn inherits the feature set of its group. A feature set defined at the metric level overrides one defined at the subgroup level, which in turn overrides one defined at the group level.
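
This override chain can be expressed as a one-line resolution rule; the Python sketch below is illustrative only, not extension code:

# Illustrative only: the most specific level that defines a feature set
# wins (metric > subgroup > group); with none defined, the metric is
# reported as part of the default feature set.
def effective_feature_set(group=None, subgroup=None, metric=None):
    return metric or subgroup or group or "default"

assert effective_feature_set(group="cluster") == "cluster"
assert effective_feature_set(group="cluster", subgroup="osd") == "osd"
assert effective_feature_set(group="cluster", subgroup="osd", metric="latency") == "latency"
assert effective_feature_set() == "default"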

Metric name | Metric key | Description
OSD Apply Latency | ceph_osd_apply_latency_ms | Latency of the "apply" operation on the OSD
OSD Commit Latency | ceph_osd_commit_latency_ms | Latency of the "commit" operation on the OSD
Total OSD Write Latency | ceph_osd_op_w_latency_sum | Total latency of the "write" operations on the OSD
Total OSD Read Latency | ceph_osd_op_r_latency_sum | Total latency of the "read" operations on the OSD

Metric name | Metric key | Description
Total Capacity | ceph_cluster_total_bytes | Total cluster capacity in bytes
Used Capacity | ceph_cluster_total_used_bytes | Used cluster capacity in bytes
Monitor Metadata | ceph_mon_metadata | Placeholder metric to get monitor metadata dimensions from the exporter
OSD Metadata | ceph_osd_metadata | Placeholder metric to get OSD metadata dimensions from the exporter

Metric name | Metric key | Description
OSDs IN | ceph_osd_in | Storage daemons in the cluster
OSDs UP | ceph_osd_up | Storage daemons running
Placement groups | ceph_osd_numpg | Number of placement groups per OSD

Metric name | Metric key | Description
PG Active | ceph_pg_active | Placement groups active per pool
PG Down | ceph_pg_down | Placement groups down per pool
PG Clean | ceph_pg_clean | Placement groups clean per pool
PG Backfill Too Full | ceph_pg_backfill_toofull | Placement groups backfill_toofull per pool
PG Degraded | ceph_pg_degraded | Placement groups degraded per pool
PG Failed Repair | ceph_pg_failed_repair | Placement groups with failed repair per pool
PG Incomplete | ceph_pg_incomplete | Placement groups incomplete per pool
PG Stale | ceph_pg_stale | Placement groups stale per pool
PG Inconsistent | ceph_pg_inconsistent | Placement groups inconsistent per pool

Metric name | Metric key | Description
Open Sessions | ceph_mon_num_sessions | Number of open monitor sessions
Quorum | ceph_mon_quorum_status | Monitor daemons in quorum

Metric name | Metric key | Description
Objects Count | ceph_pool_objects | Number of objects in the pool
Objects Recovered | ceph_pool_num_objects_recovered | Number of recovered objects in the pool
Bytes Recovered | ceph_pool_num_bytes_recovered | Number of recovered bytes in the pool
Pool Objects Quota | ceph_pool_quota_objects | Object quota set for the pool
Pool Bytes Quota | ceph_pool_quota_bytes | Byte quota set for the pool

Metric name | Metric key | Description
Bytes Written | ceph_osd_op_w_in_bytes | Total sum of bytes written to the OSD
Bytes Read | ceph_osd_op_r_out_bytes | Total sum of bytes read from the OSD
Write Operations | ceph_osd_op_w | Total sum of write operations performed on the OSD
Read Operations | ceph_osd_op_r | Total sum of read operations performed on the OSD
Recovery Operations | ceph_osd_recovery_ops | Number of recovery operations on the OSD
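
Once the extension reports data, the metric keys above can be queried through the Dynatrace metrics API (GET /api/v2/metrics/query). The Python sketch below assumes the keys are ingested exactly as listed; your environment may apply a prefix:

import requests

# Placeholder values -- replace with your environment URL and an API
# token with the metrics.read scope.
ENV = "https://{your-environment-id}.live.dynatrace.com"
TOKEN = "dt0c01.EXAMPLE"

# Average OSD commit latency over the last hour.
resp = requests.get(
    f"{ENV}/api/v2/metrics/query",
    headers={"Authorization": f"Api-Token {TOKEN}"},
    params={
        "metricSelector": "ceph_osd_commit_latency_ms:avg",
        "from": "now-1h",
    },
)
resp.raise_for_status()
for result in resp.json()["result"]:
    print(result["metricId"], result["data"])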
Related tags
Storage, Prometheus, Red Hat, Infrastructure Observability