To build a workflow, you need to create an empty workflow, define a trigger, and add tasks. For a more detailed configuration of the control flow, make use of the task conditions.
When you create a new workflow or open an existing one, the editor presents it. The workflow editor consists of three sections:
The title section shows you the title of your workflow and offers some interactions with the workflow.
The workflow editor pane (the main area under the title) displays a graphical representation of the workflow. It always contains a trigger node that provides access to the trigger configuration. The tasks and their connections (transitions) give you an idea about the control flow.
The details pane on the right provides access to detailed settings and properties of the current selection in the workflow editor.
You can edit a workflow after creating and running it. Go to Edit workflow in the title section.
To ensure the workflow executes as expected, we recommend running the workflow after each edit.
The trigger defines what makes a workflow run. A trigger can be a schedule, an event, a manual interaction (on demand), or an API request.
Tasks define the inputs for actions, have options to loop, retry or timeout action processing, and define the conditions that make them run. The conditions are either related to states of predecessors or custom expressions.
To add a task, hover over the trigger node or any existing task in the workflow and select Add task. The tasks run as a directed acyclic graph (DAG). This means the tasks are ordered sequentially without any loops and they run in one direction.
To add a task to a workflow
It's not possible to add a Run this task if and Else condition to the first task. As a direct successor of the trigger, no task can define a state condition to a predecessor, as none exists. Nevertheless, a custom condition can be configured to evaluate the properties of an event trigger context.
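As a sketch, assuming an event trigger whose payload contains a hypothetical severity field, such a custom condition could use the event() expression function.
{{ event()["severity"] == "CRITICAL" }}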
To draw a transition for a task to build a predecessor/successor dependency as an arrow
While developing or testing a workflow, you often run a workflow over and over again. In such cases, it's often helpful to temporarily disable some tasks in the workflow.
If a task is disabled, it is skipped. With the default condition configuration (success and skipped), your workflow behaves the same as if the task ran successfully; the only difference is that the skipped task doesn't produce any result. Disabled tasks are rendered in a different style in the editor and in monitoring to indicate their state.
To disable a task, open the workflow for editing, and do one of the following
To enable a disabled task, open the workflow for editing, and do one of the following.
To delete a task from your workflow, open the workflow for editing, select a task and do one of the following
Save your workflow.
To delete a task transition/condition, open the workflow for editing, select the task with the incoming transition as the state condition, and do the following
A task condition decides whether a task should run. You can express task conditions based on the final state of the predecessor task and as a custom condition to implement any custom logic. You can find the task conditions on the right-hand side in the details pane.
To find the task condition
You can read the state conditions as a sentence, starting with Run this task if, followed by a list of all predecessor tasks and the configured state condition.
To configure the state condition, select one of the following
Optional: You can also configure a custom condition in the details pane on the right, below the And custom condition was met expression.
A custom condition is formulated as a comparison or any other expression that evaluates to a boolean (true or false). For example, if you want a certain task to be executed only if the predecessor task_1 returns a certain value bar, the custom condition could be the following: {{ result("task_1").foo == "bar" }}
The task conditions also enable you to define what should happen if neither the state condition nor the custom condition is matched using the Else section.
For example, the problem_details task returns the following result with information on a Davis AI problem.

{
  "title": "CVE-2023-36049",
  "status": "OPEN",
  "displayId": "S-1760",
  "technology": "DOTNET",
  "description": ".NET, .NET Framework, and Visual Studio Elevation of Privilege Vulnerability",
  "riskAssessment": {
    "riskLevel": "CRITICAL",
    "riskScore": 9.8
  },
  "affectedEntities": [
    "PROCESS_GROUP_INSTANCE-18DF39B09D1CD50E",
    "PROCESS_GROUP_INSTANCE-AF30112BB85EE3CC",
    "PROCESS_GROUP_INSTANCE-DE1745778EC39B58"
  ]
}
If you want to send an escalation email task to the CISO office for highly critical issues with a risk score of 9 or greater, you can use the following custom condition on the escalation email task.
{{ result("problem_details")["riskAssessment"]["riskScore"] >= 9 }}
You would also like to inform your ops team about critical security issues that affect many entities. In this case, you need to check both the number of affected entities and the risk level. You can use the following custom condition.
{{ result("problem_details")["affectedEntities"] | length > 10 and result("problem_details")["riskAssessment"]["riskLevel"] == "CRITICAL" }}
You want to automatically deploy changes to a software project to a review system so that you can evaluate the suggested changes of a pull request (PR). Once the changes in the PR are deployed, you want to add some test data to the system and run optional end-to-end tests to verify the basic functionality. You also want to inform the team if you have any issues. Once the system is up and running, the product owner is notified to proceed with the review.
This example has the following tasks:

deploy: task to deploy the test system.
prepare_test_data: task to save sample data to the database.
get_e2e_tests: task to load end-to-end tests from the binary repository.
run_tests: task to execute smoke tests/preliminary tests if defined.
send_error_report_to_team: task to send the error to the team with a link to logs and traces.
publish_test_results: task to publish the test results and update the PR.
notify_product_owner: task to send a link to the verification system to the product owner.

Once the deployment task is successfully completed, the prepare_test_data task will populate the review system with some data to view. At the same time, the get_e2e_tests task looks up any end-to-end tests from a centralized test system, which may or may not have tests to run. Both tasks have the deploy task in their conditions, with the state condition set to success or skipped by default.
Now you want to run the tests that the get_e2e_tests task prepared. However, you can only do that after loading the test data with the prepare_test_data task. Therefore, the run_tests action has conditions on both the get_e2e_tests and prepare_test_data tasks, with the condition set to success or skipped (default). Given these conditions, you might not get any tests from your centralized system to run, and in that case the run_tests action should be skipped. You can add another custom condition to check the result of the get_e2e_tests task.
{{ result("get_e2e_tests").run_tests }}
The custom condition of a task is only checked if all the predecessor tasks are finished and fulfilled their state condition.
By default, the workflow would stop executing if this condition is not met, in other words, if run_tests is false. However, if you want to skip this step and continue the workflow, you can change the else action from stop here (default) to skip.
To inform the team if the running of tests fails, you can add the send_error_report_to_team task after the run_tests task and set the state condition to error. This task will only run if the run_tests task ends with an error.
Otherwise, you would like to publish the test results on the pull request when the tests finish successfully. If you add the publish_test_results task after the run_tests task, by default it will run whenever the run_tests task was successful or skipped. However, you only want to publish test results if tests were actually run, so set the state condition from success or skipped to success, meaning the publish step won't run if run_tests is skipped.
Lastly, you want to inform the product owner about the review system. This should happen only after any automated tests have finished, but in any case, with or without tests. Therefore, the notify_product_owner task has its state condition for run_tests set to any.
You can configure specific task behavior in the details pane on the right-hand side in the task options.
Task options
Turning on the toggle allows you to configure the Wait before option for the action. The Wait before option controls how long a task stays in the waiting state before being run. The default is 0 seconds. You can also provide a Jinja expression instead of a static number.
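As a sketch, assuming a hypothetical predecessor task deploy whose result contains a cooldown_seconds field, the Wait before field could be set to the following expression.
{{ result("deploy").cooldown_seconds }}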
Turning on the toggle lets you configure the Loop task option for the action. This option enables an action to be executed repeatedly by iterating through items in a list.
Configuration options
You can provide a static list or use an expression to reference, for example, the result of a preceding task.
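A minimal sketch: to iterate over each entity affected by the problem_details result shown earlier, the loop list could be set to the expression below (the field name affectedEntities comes from that example result).
{{ result("problem_details").affectedEntities }}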
If you turn on the loop option, the individual action execution results are collected into a list. The result will always be a list, regardless of the length of the input list. The task log will also contain a concatenation of the action execution logs of all iterations.
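Illustratively, looping over those three affected entities with an action that returns a status string would yield a task result like the following (hypothetical values).
["OK: PROCESS_GROUP_INSTANCE-18DF39B09D1CD50E", "OK: PROCESS_GROUP_INSTANCE-AF30112BB85EE3CC", "OK: PROCESS_GROUP_INSTANCE-DE1745778EC39B58"]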
To access the list item of the current iteration in the task input configuration, you can use the expression {{ _.item }}, where the item name is configurable. In the Run JavaScript action, you can use an SDK to access the loop item.
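For example, if you rename the loop item variable from the default item to entity (a hypothetical name), a task input field could reference the current iteration's value as follows.
{{ _.entity }}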
Turning on the toggle allows you to configure the Retry on error option, which applies when the action fails.
Configuration options
The default settings mean that the task retries the action two more times with a 30-second interval.
If the action is successful, the task succeeds too, and there will be no more retries. If none of the action retries are successful, the task will end in an error state.
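With the default settings, for example, a failing action runs once, is retried about 30 seconds later, and once more about 30 seconds after that; only if all three attempts fail does the task end in an error state.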
Using the loop option on your task, you can configure the Only retry failed loops option.
Turning on the toggle allows you to configure the Adapt timeout option. You can set the Timeout this task (seconds) setting to limit how long a task is processed before it fails with an error. The default timeout is 15 minutes; the maximum is seven days.
The Dynatrace runtime timeout, which limits a single action execution, is 120 seconds by default; you can't increase it via the Timeout this task (seconds) setting. However, a task may run multiple actions through loops or retries and therefore run longer than any individual action. This overall runtime counts toward the task timeout.
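As a worked example, a task that loops over 10 items, where each action execution may use up to the 120-second runtime limit, can legitimately run for up to 1,200 seconds (20 minutes) in total, exceeding the default 15-minute task timeout unless you raise it.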