Control-M Jobs v2 extension

  • Latest Dynatrace
  • Extension
  • Published Oct 27, 2025

Monitor your Control-M jobs execution status.

  • Get a general overview of your Control-M environment and a quick entry to all the related entities through the dashboard included with the extension.
  • Keep track of all the jobs being run on your Control-M instance with the Unified Analysis screens created specifically for this extension.
  • Monitor the status of each individual job and the logs it has produced on the last run to quickly diagnose any issue.

Control-M enables you to automate the scheduling and processing of your business workflows across various platforms and applications.

Monitor, understand and troubleshoot these business-critical assets with the Control-M extension.

Get started

Requirements

  • Dynatrace version 1.309+
  • ActiveGate version 1.289+
  • ActiveGate with the Extensions module enabled.
  • Control-M instance with the automation API enabled.
  • User with access permissions to the automation API.
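
Before adding a monitoring configuration, you can verify that the automation API is reachable from the chosen ActiveGate host and that the user has the required permissions. The following is a minimal sketch, assuming the standard session login and run/jobs/status services of the Control-M Automation API; the host, port, credentials, and parameter names are placeholders and may differ in your installation.

```python
# Minimal connectivity check against the Control-M Automation API.
# Assumptions: the default /automation-api base path, session-based login,
# and the run/jobs/status service. Adjust host, port, and parameters to
# match your installation.
import requests

BASE_URL = "https://controlm.example.com:8443/automation-api"  # hypothetical endpoint
USERNAME = "dynatrace_monitor"                                  # hypothetical user
PASSWORD = "********"

# 1. Log in and obtain a session token.
login = requests.post(
    f"{BASE_URL}/session/login",
    json={"username": USERNAME, "password": PASSWORD},
    timeout=30,
)
login.raise_for_status()
token = login.json()["token"]

# 2. Retrieve job statuses, optionally narrowed down by server and folder,
#    the same filters the extension's Server and Folder parameters apply.
status = requests.get(
    f"{BASE_URL}/run/jobs/status",
    headers={"Authorization": f"Bearer {token}"},
    params={"server": "ctm-server1", "folder": "DailyBatch", "limit": 25},
    timeout=30,
)
status.raise_for_status()
for job in status.json().get("statuses", []):
    print(job.get("name"), job.get("status"))
```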

Activation

  1. Find the extension in Dynatrace Hub and add it to your environment.

  2. Add a monitoring configuration. This is a remote extension; it runs on an ActiveGate in an ActiveGate group of your choice. Set the following configuration parameters (a scripted configuration sketch follows this list):

    • URL

      The address for the Control-M automation API to query.

    • Username

      Username used for authentication against the Control-M automation API.

    • Password

      Password for the above user.

    • Server

      If this parameter is set, a server filter is added to the API call, so only jobs that belong to the specified server are retrieved.

    • Folder

      If this parameter is set, a folder filter is added to the API call, so only jobs that belong to the specified folder are retrieved.

    • Filter job by name

      List of rules used to filter the jobs whose individual status and execution logs will be reported. If a job matches at least one of the rules, its status and execution log are reported to Dynatrace. Use this feature to exclude irrelevant jobs or to include only the important ones. Excluded jobs are still taken into account for the aggregated metrics. This filtering is applied to the data returned by the API.

    • Filter job by folder name

      Same as above, but matching on the job's folder name instead, in case complete folders should be ignored or included. A job is monitored if it matches at least one of the rules, regardless of whether it is a job-name or folder-name rule.

    • Send error events

      If this flag is on, Dynatrace generates an error event (and its resulting problem) whenever a job has a failed execution status, regardless of whether the job is filtered in or out.

    • Debug

      Produces more verbose logs. Enable it only when troubleshooting or when recommended by support.

    • Feature sets

      Lastly, select which feature sets (metric groups) you would like this configuration to collect. You can choose to monitor the summary of how many jobs are in each state and/or the individual job status.
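
The same parameters can also be supplied programmatically. The sketch below is a minimal, hedged example that posts a monitoring configuration to the Dynatrace Environment API v2; the extension name, the ActiveGate group, and the keys inside the value object are assumptions and must be adapted to the activation schema of the extension version you install.

```python
# Sketch: activating the extension by sending a monitoring configuration to
# the Dynatrace Environment API v2 instead of using the UI. The extension
# name, the ActiveGate group, and the keys inside "value" are assumptions;
# they must match the activation schema of the deployed extension version.
import requests

DT_ENVIRONMENT = "https://<your-environment>.live.dynatrace.com"
API_TOKEN = "dt0c01.****"                        # token with the extensions.write scope
EXTENSION = "com.dynatrace.extension.control-m"  # assumed extension name

configuration = [
    {
        # Run the configuration on a specific ActiveGate group.
        "scope": "ag_group-controlm-activegates",
        "value": {
            "enabled": True,
            "description": "Production Control-M",
            "version": "2.0.0",
            # The parameter names below mirror the fields described above and
            # are illustrative only; check the activation schema for the
            # exact keys expected by the extension.
            "url": "https://controlm.example.com:8443/automation-api",
            "username": "dynatrace_monitor",
            "password": "********",
            "server": "ctm-server1",
            "folder": "DailyBatch",
            "sendErrorEvents": True,
            "debug": False,
            "featureSets": ["job_status", "job_counters"],
        },
    }
]

response = requests.post(
    f"{DT_ENVIRONMENT}/api/v2/extensions/{EXTENSION}/monitoringConfigurations",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    json=configuration,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```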

Alerting

If the Send error events configuration parameter is enabled, the extension sends an error event whenever a job is in an Ended Not OK status. The event is attached to the job entity.

Alternatively, you can create a metric event based on the controlm.job.status metric that triggers whenever a job is in an unwanted state. Such an event can also be attached to any of the generic entity types created by the extension.
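
For the metric-event route, a configuration like the one sketched below can be created through the Settings 2.0 API. This is a minimal example, assuming the builtin:anomaly-detection.metric-events schema; verify the exact field names against the schema available in your environment before using it.

```python
# Sketch: creating a metric event on controlm.job.status through the
# Settings 2.0 API so that a problem is raised when a job's status drops
# below 100 (Ended OK). The schemaId and endpoint exist in Dynatrace; the
# field names inside "value" should be verified against the schema, e.g.
# GET /api/v2/settings/schemas/builtin:anomaly-detection.metric-events.
import requests

DT_ENVIRONMENT = "https://<your-environment>.live.dynatrace.com"
API_TOKEN = "dt0c01.****"  # token with the settings.write scope

metric_event = [
    {
        "schemaId": "builtin:anomaly-detection.metric-events",
        "scope": "environment",
        "value": {
            "enabled": True,
            "summary": "Control-M job not Ended OK",
            "queryDefinition": {
                "type": "METRIC_SELECTOR",
                # Minimum over all jobs; any job below 100 pulls the value down.
                "metricSelector": "controlm.job.status:min",
            },
            "modelProperties": {
                "type": "STATIC_THRESHOLD",
                "threshold": 100,
                "alertCondition": "BELOW",
                "alertOnNoData": False,
                "violatingSamples": 3,
                "samples": 5,
                "dealertingSamples": 5,
            },
            "eventTemplate": {
                "title": "Control-M job in unwanted state",
                "description": "A Control-M job reported a status below 100 (Ended OK).",
                "eventType": "CUSTOM_ALERT",
            },
        },
    }
]

response = requests.post(
    f"{DT_ENVIRONMENT}/api/v2/settings/objects",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    json=metric_event,
    timeout=30,
)
response.raise_for_status()
```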

Feature sets

When activating your extension using a monitoring configuration, you can limit monitoring to one of the feature sets. To work properly, the extension has to collect at least one metric after activation.

In highly segmented networks, feature sets can reflect the segments of your environment. Then, when you create a monitoring configuration, you can select a feature set and a corresponding ActiveGate group that can connect to this particular segment.

All metrics that aren't categorized into any feature set are considered to be the default and are always reported.

A metric inherits the feature set of a subgroup, which in turn inherits the feature set of a group. Also, the feature set defined on the metric level overrides the feature set defined on the subgroup level, which in turn overrides the feature set defined on the group level.

| Metric name | Metric key | Description |
|---|---|---|
| Job number of runs | controlm.job.number_of_runs | Number of runs that a job has individually had. |

| Metric name | Metric key | Description |
|---|---|---|
| Job status | controlm.job.status | Metric that indicates the current status of each job. A value of 100 equals a job that ended correctly; a value of 0 indicates a job that ended incorrectly, was not found in AJF, or has an unknown state. A value of 50 indicates a job in the middle of execution. A value of 25 indicates a job that is waiting. |

| Metric name | Metric key | Description |
|---|---|---|
| Jobs waiting for condition | controlm.job_counters.wait_condition | Number of jobs with state "Waiting For Condition" |
| Jobs ended ok | controlm.job_counters.ended_ok | Number of jobs with state "Ended OK" |
| Jobs being executed | controlm.job_counters.executing | Number of jobs with state "Executing" |
| Jobs waiting for resource | controlm.job_counters.wait_resource | Number of jobs with state "Waiting For Resource" |
| Jobs waiting for user | controlm.job_counters.wait_user | Number of jobs with state "Wait User" |
| Jobs waiting for host | controlm.job_counters.wait_host | Number of jobs with state "Waiting For Host" |
| Jobs ended not ok | controlm.job_counters.ended_not_ok | Number of jobs with state "Ended Not OK" |
| Jobs with unknown state | controlm.job_counters.unknown | Number of jobs with an unknown state |
| Jobs not in AJF | controlm.job_counters.not_in_ajf | Number of jobs with state "Not In AJF" (Active Job File) |
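
These metrics can also be read programmatically, for example to feed an external report. The sketch below queries the ended-not-ok counter through the Dynatrace Metrics API v2; the environment URL and API token are placeholders.

```python
# Sketch: reading the job counter metrics with the Metrics API v2. The
# endpoint and query parameters (metricSelector, from, resolution) are part
# of the Dynatrace Environment API; the environment URL and token are
# placeholders.
import requests

DT_ENVIRONMENT = "https://<your-environment>.live.dynatrace.com"
API_TOKEN = "dt0c01.****"  # token with the metrics.read scope

response = requests.get(
    f"{DT_ENVIRONMENT}/api/v2/metrics/query",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    params={
        # Jobs that ended not OK over the last two hours, one-minute resolution.
        "metricSelector": "controlm.job_counters.ended_not_ok:sum",
        "from": "now-2h",
        "resolution": "1m",
    },
    timeout=30,
)
response.raise_for_status()
for series in response.json()["result"][0]["data"]:
    print(series["dimensions"], series["values"])
```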
Related tags
Compute, Python, Workload automation, BMC, Infrastructure Observability