Observe Argo CD deployment and application health with Dashboards and SDLC events

  • Latest Dynatrace
  • Tutorial
  • 5-min read
  • Preview

In this tutorial, you'll

  • Integrate Argo CD and Dynatrace.
  • Use Dashboards to observe Argo CD deployments and application health.
  • Use this information to optimize deployments with Argo CD.

Below is an example of what your Argo CD dashboard could look like.

Concepts

Software Development Lifecycle (SDLC) events

SDLC events are events with a separate event kind in Dynatrace that follow a well-defined semantics for capturing data points from a software component's software development lifecycle. The SDLC event specification defines the semantics of those events.

Why were Argo CD notifications changed into SDLC events?

The main benefits are data normalization and tool agnosticism: Dashboards, apps, and Workflows can build on SDLC events with well-defined properties instead of relying on tool-specific details.
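For example, once Argo CD notifications arrive as SDLC events, a dashboard tile can query them without any Argo CD-specific parsing. A DQL sketch of such a query (the `event.kind` value and the `event.category` field are assumptions for illustration; consult the SDLC event specification for the authoritative field names):

```
fetch events
| filter event.kind == "SDLC_EVENT"
| summarize count(), by: {event.category}
```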

Target audience

This tutorial is intended for platform engineers who manage the Internal Development Platform (IDP), including Argo CD, in GitOps-based deployments.

Learning outcome

In this tutorial, you'll learn how to

  • Forward Argo CD notifications to Dynatrace.
  • Send Prometheus metrics to Dynatrace.
  • Normalize the ingested event data.
  • Use Dashboards to analyze the data and identify opportunities for improvement.

Prerequisites

Install the configuration-as-code tool of your choice: either the Terraform CLI or the Monaco CLI. Then follow the setup section below that matches your tool.

1. Setup: Prepare the configuration

  1. Create a new platform token with the following permissions and store it in a secure place:

    • Run apps: app-engine:apps:run.
    • View OpenPipeline configurations: settings:objects:read.
    • Edit OpenPipeline configurations: settings:objects:write.
    • Create and edit documents: document:documents:write.
    • View documents: document:documents:read.
  2. Clone the Dynatrace configuration as code sample repository using the following command.

    git clone https://github.com/Dynatrace/dynatrace-configuration-as-code-samples.git

2. Setup: Configuration as Code

You can choose between two options:

  • Terraform.
  • Monaco.

Set up Terraform.

  1. Prepare the Terraform configuration.

    To prepare the configuration:

    1. Move to the argocd_observability_terraform directory with the following command.

      cd dynatrace-configuration-as-code-samples/argocd_observability_terraform

    2. Store the retrieved platform token in an environment variable.

      $env:DYNATRACE_PLATFORM_TOKEN='<YOUR_PLATFORM_TOKEN>'

    3. Store your Dynatrace environment URL in an environment variable. Make sure to replace <YOUR-DT-ENV-ID> with your Dynatrace environment ID, e.g. abc12345.

      $env:DYNATRACE_ENV_URL='https://<YOUR-DT-ENV-ID>.apps.dynatrace.com'
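The two commands above use PowerShell syntax. In a POSIX shell (bash or zsh), the equivalents would be:

```shell
# POSIX-shell equivalents of the PowerShell commands above.
# Replace both placeholders with your actual values before use.
export DYNATRACE_PLATFORM_TOKEN='<YOUR_PLATFORM_TOKEN>'
export DYNATRACE_ENV_URL='https://<YOUR-DT-ENV-ID>.apps.dynatrace.com'
```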

Static OpenPipeline routing for SDLC events

This configuration uses static routing, so there is no need to download and merge the dynamic routing configuration.

  1. Apply the Terraform configuration.

    Run this command to verify the provided Terraform configuration.

    terraform plan

    Run this command to apply the provided Terraform configuration.

    terraform apply

3. Setup: Create a Dynatrace access token

To receive events processed by OpenPipeline, you need an access token with the following OpenPipeline scopes:

  • openpipeline.events_sdlc.
  • openpipeline.events_sdlc.custom.

To generate an access token:

  1. Go to Access tokens.
  2. Select Generate new token.
  3. Enter a name for your token.
    Dynatrace doesn't enforce unique token names. You can create multiple tokens with the same name. Be sure to provide a meaningful name for each token you generate. Proper naming helps you manage your tokens efficiently and delete them when they're no longer needed.
  4. Select these scopes:

    • OpenPipeline - Ingest Software Development Lifecycle Events (Built-in) (openpipeline.events_sdlc)
    • OpenPipeline - Ingest Software Development Lifecycle Events (Custom) (openpipeline.events_sdlc.custom)
  5. Select Generate token.
  6. Copy the generated token and store it securely, for example in a password manager. We refer to it as <YOUR-ACCESS-TOKEN> in subsequent steps.

    You can only access your token once upon creation. You can't reveal it afterward.
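To smoke-test the token before wiring up Argo CD notifications, you could send one event to the ingest endpoint yourself. A sketch (the placeholders must be replaced with your real values, and the payload shape simply mirrors what the webhook template configured in the next step sends; it is not a complete SDLC event schema):

```shell
# Assumed placeholders: replace before running.
DT_ENV_URL='https://<YOUR-DT-ENV-ID>.apps.dynatrace.com'
DT_ACCESS_TOKEN='<YOUR-ACCESS-TOKEN>'
BODY='{"app": {"metadata": {"name": "demo-app"}}}'

# Uncomment to send for real once the placeholders are filled in:
# curl -X POST "$DT_ENV_URL/platform/ingest/custom/events.sdlc/argocd" \
#   -H "Authorization: Api-Token $DT_ACCESS_TOKEN" \
#   -H 'Content-Type: application/json; charset=utf-8' \
#   -d "$BODY"
```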

4. Setup: Configure Argo CD notifications

Argo CD notifications provide a flexible way to alert users about essential changes in the state of their applications managed by Argo CD. To configure the Argo CD notifications, you need to create a notification secret, apply the configuration, and subscribe applications to notifications.

  1. Create a notification secret.

    1. Update the argocd-notifications-secret with:

      apiVersion: v1
      kind: Secret
      metadata:
        name: argocd-notifications-secret
      stringData:
        dt-base-url: https://{your-environment-id}.live.dynatrace.com
        dt-access-token: <YOUR-ACCESS-TOKEN>
    2. Apply the configuration.

      kubectl apply -f <secret_file_name>.yaml -n argocd

  2. Create a notification template and trigger.

    1. If you don't have any notification configurations, create a new config map called argocd-notifications-cm as shown below. Otherwise, extend your existing config map by adding the example's service, template, and trigger sections.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: argocd-notifications-cm
      data:
        service.webhook.dynatrace-webhook: |
          url: $dt-base-url
          headers:
          - name: "Authorization"
            value: Api-Token $dt-access-token
          - name: "Content-Type"
            value: "application/json; charset=utf-8"
        template.dynatrace-webhook-template: |
          webhook:
            dynatrace-webhook:
              method: POST
              path: /platform/ingest/custom/events.sdlc/argocd
              body: |
                {
                  "app": {{toJson .app}}
                }
        trigger.dynatrace-webhook-trigger: |
          - when: app.status.operationState.phase in ['Succeeded'] and app.status.health.status in ['Healthy', 'Degraded']
            send: [dynatrace-webhook-template]
          - when: app.status.operationState.phase in ['Failed', 'Error']
            send: [dynatrace-webhook-template]
          - when: app.status.operationState.phase in ['Running']
            send: [dynatrace-webhook-template]

      Here is an explanation for the naming in the configuration.

      • dynatrace-webhook is the name of the service, $dt-access-token refers to the Dynatrace access token, and $dt-base-url is a reference to the Dynatrace event ingest endpoint stored in the argocd-notifications-secret secret.
      • dynatrace-webhook-template is the template's name, and dynatrace-webhook refers to the service created above.
      • dynatrace-webhook-trigger is the trigger's name, and dynatrace-webhook-template refers to the template created above.
    2. Apply the configuration with this command.

      kubectl apply -f <config_map_file_name>.yaml -n argocd

    3. Subscribe to notifications.

      Modify the annotations of the Argo CD application by using either the Argo CD UI or the Argo CD application definition with the following annotations:

      apiVersion: argoproj.io/v1alpha1
      kind: Application
      metadata:
        annotations:
          notifications.argoproj.io/subscribe.dynatrace-webhook-trigger.dynatrace-webhook: ""

      The added notifications.argoproj.io notification annotation subscribes the Argo CD application to the notification setup you created above.
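As a quick sanity check, the commands below confirm that the resources created in the steps above exist and let you inspect the notifications controller's logs (a sketch of typical checks; resource names are taken from the steps above, and the deployment name assumes a standard Argo CD installation):

```shell
# Confirm the secret and config map from the steps above exist.
kubectl get secret argocd-notifications-secret -n argocd
kubectl get configmap argocd-notifications-cm -n argocd

# Inspect the notifications controller logs for webhook delivery errors.
kubectl logs deployment/argocd-notifications-controller -n argocd --tail=20
```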

5. Setup: Send Argo CD Prometheus metrics to Dynatrace

Argo CD exposes different sets of Prometheus metrics for different services. Configure your Argo CD services to expose this information so Dynatrace can collect it. You can use either Dynatrace ActiveGate, which is installed on the Kubernetes cluster that hosts Argo CD, or the Dynatrace OTel Collector.

To use Dynatrace ActiveGate

  1. Allow Prometheus metrics monitoring.

    1. Go to Kubernetes and select the monitored cluster with Argo CD installation.
    2. In the upper-right corner, go to Connection settings.
    3. Choose Monitoring Settings.
    4. Allow Monitor annotated Prometheus exporters.
    5. Save.
  2. In your Argo CD installation namespace, add the following two annotations for each of the services listed in the table below. Replace {METRICS_PORT} with the corresponding port number.

    metrics.dynatrace.com/port: {METRICS_PORT}
    metrics.dynatrace.com/scrape: 'true'
    Service and metrics port:

    • argocd-applicationset-controller: 8080
    • argocd-metrics: 8082
    • argocd-server-metrics: 8083
    • argocd-repo-server: 8084
    • argocd-notifications-controller-metrics: 9001
    • argocd-dex-server: 5558
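Instead of editing each Service manifest, you could apply the annotations from the command line. A sketch for one service (the service name and port are taken from the mapping above; repeat for each service/port pair):

```shell
# Annotate one of the Argo CD metrics services so Dynatrace scrapes it.
kubectl annotate service argocd-metrics -n argocd \
  metrics.dynatrace.com/scrape='true' \
  metrics.dynatrace.com/port='8082' \
  --overwrite
```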

    View the ingested histogram data by going to Settings > Metrics > Histograms. The Ingest complete explicit bucket histograms setting you need is already allowed.

6. Unlock enhanced deployment insights with Argo CD

Now that you've successfully configured Argo CD and Dynatrace, you can use Dashboards and SDLC events to observe your Argo CD deployments.

Analyze

In Dynatrace, open the ArgoCD Application Lifecycle dashboard to

  • Investigate running syncs and hotspots of many sync operations.
  • Analyze the duration of sync operations.
  • See deployment status and application health.
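Beyond the ready-made dashboard, you can explore the ingested Prometheus metrics ad hoc in a notebook. A DQL sketch (the metric key assumes Argo CD's argocd_app_sync_total counter arrives in Dynatrace under that name; adjust it to match your ingested metric keys):

```
timeseries syncs = sum(argocd_app_sync_total), by: {name}
```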

To try it out, go to the Dynatrace Playground.

Optimize

Use these insights for the following improvement areas:

  • Increase CI/CD pipeline efficiency.

    Observing workflow executions lets you identify bottlenecks and inefficiencies in your CI/CD pipelines.

    Knowing about these bottlenecks and inefficiencies helps optimize build and deployment processes, leading to faster and more reliable releases.

  • Improve developer productivity.

    Automated pipelines reduce the manual effort required for repetitive tasks, such as running tests and checking coding standards. This automation allows developers to focus more on writing code and less on administrative tasks.

  • Get data-driven development insights. Analyzing telemetry data from CI/CD pipelines provides valuable insights into the development process. You can use the telemetry data to make informed decisions and continuously improve the development flows.

Continuous improvements

Check and adjust your CI/CD pipelines regularly to make sure they're running smoothly.

In Dynatrace, adjust the timeframe of the relevant dashboards to monitor the long-term impact of your improvements.

Call to action

We highly value your insights on pipeline observability. Your feedback is crucial in helping us enhance our tools and services. Visit the Dynatrace Community page to share your experiences, suggestions, and ideas directly on the Feedback channel for CI/CD Pipeline Observability.

Related tags
Software Delivery, Dashboards, OpenPipeline