Databricks Workspace extension

  • Latest Dynatrace
  • Extension
  • Published Oct 27, 2025

Remotely monitor your Databricks workspaces.

Screenshots:

  • Databricks Jobs Dashboard with all the metrics available.
  • Databricks Jobs Service that is created when Job Run traces are ingested.
  • Databricks Jobs Trace example with a Job and a single task.


Overview

With the Dynatrace Databricks Workspace extension, you can remotely monitor your Databricks workspaces.

This extension works in harmony with the OneAgent-based Databricks extension, but is also ideal for workspaces and clusters where the OneAgent cannot be installed, such as Databricks serverless compute.

Use cases

  • Gather Databricks Job Run metrics, including success rate and job duration.
  • For Databricks Jobs running on all-purpose and job compute clusters, understand the cost of those jobs (currently only Azure Databricks is supported).
  • Ingest job and task run information as traces for further analysis.

Requirements

Databricks API version 2.1 is used for the list job runs and get cluster info APIs (see Details below).

For the databricks.job.cost metric, currently only Azure Databricks workspaces are supported.

Activation and setup

  1. Install Dynatrace Environment ActiveGate.
  2. Ensure connectivity between this ActiveGate and your Databricks workspace URL (a connectivity check is sketched after this list).
  3. Create a Databricks access token for your Databricks workspace.
  4. Create a Dynatrace access token with the openTelemetryTrace.ingest scope.
  5. Create a new monitoring configuration in Dynatrace, using the URL and tokens above.
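
Before creating the monitoring configuration, you can sanity-check steps 2–4 from the ActiveGate host. This is a minimal sketch, assuming Python with the requests library; the workspace URL, environment URL, and tokens are placeholders:

    import requests

    # Placeholders -- substitute your own workspace URL, environment URL, and tokens.
    DATABRICKS_URL = "https://adb-1234567890123456.7.azuredatabricks.net"
    DATABRICKS_TOKEN = "dapi..."
    DYNATRACE_URL = "https://abc12345.live.dynatrace.com"
    DYNATRACE_TOKEN = "dt0c01..."

    # Steps 2-3: the ActiveGate host must reach the workspace, and the
    # Databricks access token must be accepted.
    resp = requests.get(
        f"{DATABRICKS_URL}/api/2.1/jobs/list",
        headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    print("Databricks workspace reachable")

    # Step 4: the Dynatrace token needs the openTelemetryTrace.ingest scope;
    # a 401/403 from the OTLP trace endpoint points at a token or scope problem.
    resp = requests.post(
        f"{DYNATRACE_URL}/api/v2/otlp/v1/traces",
        headers={
            "Authorization": f"Api-Token {DYNATRACE_TOKEN}",
            "Content-Type": "application/x-protobuf",
        },
        data=b"",
        timeout=10,
    )
    print("Dynatrace OTLP trace endpoint status:", resp.status_code)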

Details

This extension remotely queries the Databricks list job runs and get cluster info APIs using the provided Databricks URL and access token.
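
As an illustration, the job run query boils down to a call like the following. This is a sketch of a Jobs API 2.1 runs/list request, not the extension's actual implementation:

    import requests

    def fetch_completed_runs(workspace_url: str, token: str, since_ms: int) -> list:
        """List completed job runs (tasks included) started after since_ms."""
        resp = requests.get(
            f"{workspace_url}/api/2.1/jobs/runs/list",
            headers={"Authorization": f"Bearer {token}"},
            params={
                "completed_only": "true",     # finished runs carry their final durations
                "expand_tasks": "true",       # per-task details, needed for trace spans
                "start_time_from": since_ms,  # only runs newer than the last poll
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("runs", [])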

With that information, it calculates and reports the various metrics selected from the feature sets in the Dynatrace monitoring configuration.
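
For instance, the job-level metrics in the feature sets map onto fields of the Jobs 2.1 run object (state.result_state plus the millisecond duration fields). The sketch below shows one plausible aggregation; the extension's exact calculation is not published on this page:

    def job_metrics(runs: list) -> dict:
        """Derive one set of datapoints from a job's completed runs."""
        if not runs:
            return {}
        succeeded = sum(
            1 for r in runs if r.get("state", {}).get("result_state") == "SUCCESS"
        )

        def avg_ms(values):
            values = [v for v in values if v is not None]
            return sum(values) / len(values) if values else None

        return {
            "databricks.job.success_rate": succeeded / len(runs),
            # All Databricks timestamp/duration fields below are in milliseconds.
            "databricks.job.duration.run": avg_ms(
                [r.get("end_time", 0) - r.get("start_time", 0) for r in runs]
            ),
            "databricks.job.duration.setup": avg_ms([r.get("setup_duration") for r in runs]),
            "databricks.job.duration.execution": avg_ms([r.get("execution_duration") for r in runs]),
            "databricks.job.duration.cleanup": avg_ms([r.get("cleanup_duration") for r in runs]),
            "databricks.job.duration.queue": avg_ms([r.get("queue_duration") for r in runs]),
        }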

If trace ingestion is configured, the extension transforms the data from the Databricks APIs into OpenTelemetry traces with the job as the parent span and the tasks in that job as child spans.
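
Conceptually, the mapping looks like this sketch using the OpenTelemetry Python SDK; the span names and job/task fields shown are illustrative, not the extension's actual schema:

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider

    # In a real setup, an OTLP exporter pointing at the Dynatrace trace
    # ingest endpoint would be registered on this provider.
    trace.set_tracer_provider(TracerProvider())
    tracer = trace.get_tracer("databricks.workspace")

    def run_to_trace(run: dict) -> None:
        """One trace per completed run: the job is the parent span, tasks are children."""
        # Databricks timestamps are in milliseconds; OpenTelemetry expects nanoseconds.
        job_span = tracer.start_span(
            run["run_name"], start_time=run["start_time"] * 1_000_000
        )
        ctx = trace.set_span_in_context(job_span)
        for task in run.get("tasks", []):
            task_span = tracer.start_span(
                task["task_key"],
                context=ctx,
                start_time=task["start_time"] * 1_000_000,
            )
            task_span.end(end_time=task["end_time"] * 1_000_000)
        job_span.end(end_time=run["end_time"] * 1_000_000)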

Only once a job is completed will its metrics and trace be ingested. This means that data about a job is not ingested while it is running.

Licensing and cost

If all the feature sets are enabled, the number of metric datapoints is:

7 * # of Jobs

If traces are configured to be ingested, the number of spans is:

# of Jobs * (1 + Tasks per Job)
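
For example, a workspace that completes 100 jobs in a collection period, averaging 4 tasks per job, produces 7 * 100 = 700 metric datapoints and 100 * (1 + 4) = 500 spans for that period.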

Feature sets

When activating your extension using a monitoring configuration, you can limit monitoring to one of the feature sets. To work properly, the extension has to collect at least one metric after activation.

In highly segmented networks, feature sets can reflect the segments of your environment. Then, when you create a monitoring configuration, you can select a feature set and a corresponding ActiveGate group that can connect to this particular segment.

All metrics that aren't categorized into any feature set are considered to be the default and are always reported.

A metric inherits the feature set of a subgroup, which in turn inherits the feature set of a group. Also, the feature set defined on the metric level overrides the feature set defined on the subgroup level, which in turn overrides the feature set defined on the group level.
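
In other words, the most specific assignment wins. As a plain illustration of that precedence (a hypothetical helper with made-up feature set names, not part of the extension):

    def effective_feature_set(group_fs, subgroup_fs=None, metric_fs=None):
        """The most specific assignment wins: metric > subgroup > group."""
        return metric_fs or subgroup_fs or group_fs

    # A metric without its own feature set inherits from its subgroup or group:
    assert effective_feature_set("job_metrics") == "job_metrics"
    # A feature set defined on the metric level overrides both:
    assert effective_feature_set("job_metrics", subgroup_fs="durations", metric_fs="cost") == "cost"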

Metric name                 Metric key
Job Run Duration            databricks.job.duration.run
Job Success Rate            databricks.job.success_rate

Metric name                 Metric key
Job Setup Duration          databricks.job.duration.setup
Job Execution Duration      databricks.job.duration.execution
Job Cleanup Duration        databricks.job.duration.cleanup
Job Queue Duration          databricks.job.duration.queue

Metric name                 Metric key
Job Cost (Approx)           databricks.job.cost

Related tags
Analytics • Python • Data Processing/Analytics • Databricks • Infrastructure Observability