Set up the Dynatrace Google Cloud log and metric integration on an existing GKE cluster

Dynatrace version 1.230+

As an alternative to the main deployment, where the deployment script runs in a new automatically created GKE Autopilot cluster, you can choose to run the deployment script on an existing standard GKE or GKE Autopilot cluster. In this scenario, you will set up Google Cloud monitoring for metrics and logs in Google Cloud Shell. During setup, a new Pub/Sub subscription will be created. GKE will run two containers: a metric forwarder and a log forwarder. After installation, you'll get metrics, logs, dashboards, and alerts for your configured services in Dynatrace.

For other deployment options, see Alternative deployment scenarios.

This page describes how to install version 1.0 of the Google Cloud integration on a GKE cluster.

Limitations

Dynatrace Google Cloud log integration supports up to 8 GB of data processing per hour with base resources (without scaling). With bigger loads, messages will start to be retained in the Pub/Sub subscription. To measure latency, watch these subscription metrics: Oldest unacked message age and Unacked messages. For scaling recommendations, see the scaling guide below.
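
In Cloud Monitoring, these correspond to the subscription metric types pubsub.googleapis.com/subscription/oldest_unacked_message_age and pubsub.googleapis.com/subscription/num_undelivered_messages. A minimal Metrics Explorer filter for the backlog, with a placeholder subscription name, could look like this:

metric.type = "pubsub.googleapis.com/subscription/num_undelivered_messages" AND
resource.type = "pubsub_subscription" AND
resource.labels.subscription_id = "<your-subscription-name>"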

Dynatrace Google Cloud metric integration supports up to 50 Google Cloud projects with the standard deployment. To monitor larger environments, you need to enable metrics scope. See Monitor multiple Google Cloud projects - Large environments.

Prerequisites

To deploy the integration, you need to make sure the following requirements are met on the machine where you are running the installation.

  • Linux OS only

  • Internet access

  • GKE Cluster access

  • Dynatrace environment access

    You need to configure the Dynatrace endpoint (environment, cluster or ActiveGate URL) to which the GKE cluster should send metrics and logs. Make sure that you have direct network access or, if there is a proxy or any other component present in between, that communication is not affected.

Tools

You can deploy the Dynatrace GCP integration in Google Cloud Shell or in bash.

If you use bash, you need to install the command-line tools used throughout this guide: the Google Cloud CLI (gcloud), kubectl, and Helm.

Google Cloud permissions

Running the deployment script requires a list of permissions. You need to create a custom role (see below) and use it to deploy dynatrace-gcp-monitor.

  1. Create a YAML file named dynatrace-gcp-monitor-helm-deployment-role.yaml with the following content:
title: Dynatrace GCP Monitor helm deployment role
description: Role for Dynatrace GCP Monitor helm and pubsub deployment
stage: GA
includedPermissions:
- container.clusters.get
- container.configMaps.create
- container.configMaps.delete
- container.configMaps.get
- container.configMaps.update
- container.deployments.create
- container.deployments.delete
- container.deployments.get
- container.deployments.update
- container.namespaces.create
- container.namespaces.get
- container.pods.get
- container.pods.list
- container.secrets.create
- container.secrets.delete
- container.secrets.get
- container.secrets.list
- container.secrets.update
- container.serviceAccounts.create
- container.serviceAccounts.delete
- container.serviceAccounts.get
- iam.roles.create
- iam.roles.list
- iam.roles.update
- iam.serviceAccounts.actAs
- iam.serviceAccounts.create
- iam.serviceAccounts.getIamPolicy
- iam.serviceAccounts.list
- iam.serviceAccounts.setIamPolicy
- pubsub.subscriptions.create
- pubsub.subscriptions.get
- pubsub.subscriptions.list
- pubsub.topics.attachSubscription
- pubsub.topics.create
- pubsub.topics.getIamPolicy
- pubsub.topics.list
- pubsub.topics.setIamPolicy
- pubsub.topics.update
- resourcemanager.projects.get
- resourcemanager.projects.getIamPolicy
- resourcemanager.projects.setIamPolicy
- serviceusage.services.enable
- serviceusage.services.get
- serviceusage.services.list
- serviceusage.services.use

Each group of permissions is used to handle the different resources included in the integration. Creation and access are for new resources, update is for reusing existing resources, and deletion is for uninstalling.

  • container.configMaps: for mapping secrets and other variables used by the containers.
  • container.deployments: for the Kubernetes Deployment within the cluster (which includes the pods, containers, and so on).
  • container.namespaces: for the Kubernetes namespace in which the resources are deployed.
  • container.pods: for the Kubernetes pods.
  • container.secrets: for the Kubernetes secrets in which the sensitive variables are stored.
  • container.serviceAccounts: for the Kubernetes service account used by the deployment.
  • iam.roles: for the roles that grant the permissions needed for data collection.
  • iam.serviceAccounts: for the IAM service account used by the integration.
  • resourcemanager.projects: for handling the project in which the integration is deployed.
  • serviceusage.services: for enabling and using the required service APIs.
  • pubsub.subscriptions: for the Pub/Sub subscription used to collect and ingest logs.
  • pubsub.topics: for the Pub/Sub topic used to collect and ingest logs.
  2. Run the command below, replacing <your_project_ID> with the project ID where you want to deploy the Dynatrace integration.
gcloud iam roles create dynatrace_monitor.helm_deployment --project=<your_project_ID> --file=dynatrace-gcp-monitor-helm-deployment-role.yaml

Be sure to add this role to your Google Cloud user. For details, see Grant or revoke a single role.
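
If you prefer to grant the role from the command line, the binding could look like the following sketch (replace the project ID and user email with your own values):

gcloud projects add-iam-policy-binding <your_project_ID> \
  --member="user:<your_user_email>" \
  --role="projects/<your_project_ID>/roles/dynatrace_monitor.helm_deployment"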

Google Cloud settings

The location where you deploy the integration determines whether you need to configure any additional settings.

Deploy on an existing GKE Autopilot cluster

If you deploy the integration on an existing GKE Autopilot cluster, you don't need any additional configuration.

Deploy on an existing GKE standard cluster

If you deploy the integration on an existing GKE standard cluster, you need to make sure Workload Identity is enabled on the cluster and its node pools, so that the Kubernetes service account created during deployment can act as the IAM service account.

Configure log export

  1. Run the following shell script in the Google Cloud project you've selected for deployment.

Be sure to replace <your-subscription-name> and <your-topic-name> with your own values.

wget https://raw.githubusercontent.com/dynatrace-oss/dynatrace-gcp-monitor/master/scripts/deploy-pubsub.sh
chmod +x deploy-pubsub.sh
./deploy-pubsub.sh --topic-name <your-topic-name> --subscription-name <your-subscription-name>
  2. Configure log export to send the desired logs to the Google Cloud Pub/Sub topic created above (see the example sink commands below).
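
For example, a project-level sink that routes Cloud Run logs to the topic, plus the grant that lets the sink publish to it, could look like the following sketch (the sink name and log filter are placeholders to adapt to the logs you want to forward):

# Create the sink with the Pub/Sub topic as its destination
gcloud logging sinks create <your-sink-name> \
  pubsub.googleapis.com/projects/<your_project_ID>/topics/<your-topic-name> \
  --log-filter='resource.type="cloud_run_revision"' --project=<your_project_ID>
# Look up the sink's writer identity (returned as serviceAccount:...)
gcloud logging sinks describe <your-sink-name> --project=<your_project_ID> --format='value(writerIdentity)'
# Allow that identity to publish to the topic
gcloud pubsub topics add-iam-policy-binding <your-topic-name> --project=<your_project_ID> \
  --member='<writer_identity_from_previous_command>' --role='roles/pubsub.publisher'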

Dynatrace permissions

You need to create a token with a set of permissions.

  1. Go to Access tokens.
  2. Select Generate new token.
  3. Enter a name for your token.
  4. Under Template, select GCP Services Monitoring.
  5. Select Generate.
  6. Copy the generated token to the clipboard. Store the token in a password manager for future use.

Alternatively, you can create the token and add permissions manually.

Create an API token and enable the following permissions:

  • API v1:
    • Read configuration
    • Write configuration
  • API v2:
    • Ingest metrics
    • Read extensions
    • Write extensions
    • Read extensions monitoring configuration
    • Write extensions monitoring configuration
    • Read extensions environment configuration
    • Write extensions environment configuration
    • Ingest logs
    • Manage metadata of Hub items
    • Read Hub related data
    • Install and update Hub items
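
Before deployment, you can quickly check that the endpoint and token work by sending a single test metric line to the ingest API (a sketch; the metric key is a throwaway example, and the URL and token are placeholders):

curl -sS -X POST "https://<your-environment-id>.live.dynatrace.com/api/v2/metrics/ingest" \
  -H "Authorization: Api-Token <your_API_token>" \
  -H "Content-Type: text/plain" \
  --data "gcp.setup.connectivity_check,source=setup 1"

A 202 response means metric ingestion works with this token.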

To monitor logs from multiple projects, you need to create log routing sinks in each source project, with the Pub/Sub topic in your main project (the project where you deployed the integration and created the Pub/Sub topic and subscription) as the destination. For more information, see Route logs to supported destinations.
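
In that setup, each source project gets its own sink whose destination is the topic in your main project, and the writer identity of each sink needs roles/pubsub.publisher on that topic, as shown in the Configure log export example above. A sketch for one source project (all names are placeholders):

gcloud logging sinks create <your-sink-name> \
  pubsub.googleapis.com/projects/<your_main_project_ID>/topics/<your-topic-name> \
  --log-filter='<your-log-filter>' --project=<your_source_project_ID>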

Log ingestion

  • If you are using Log Monitoring v1, enable the latest version of Dynatrace log monitoring.

  • Determine where log ingestion will be performed, according to your deployment. This distinction is important when configuring the parameters for this integration.

    • For SaaS deployments: SaaS log ingestion, where logs are ingested directly through the Cluster API. recommended

    • For Managed deployments: You can use an existing ActiveGate for log ingestion. For information on how to deploy it, see ActiveGate installation.

Because Google Cloud implements Cloud Functions 2nd gen on top of Cloud Run, logs from those functions will be linked to the underlying Cloud Run instances. Both the Cloud Functions and Cloud Run extensions have to be enabled.

To learn more, visit Google Cloud Functions version comparison.

Install

Complete the steps below to finish your setup.

Download the Helm deployment package in Google Cloud Shell

wget -q "https://github.com/dynatrace-oss/dynatrace-gcp-monitor/releases/latest/download/helm-deployment-package.tar"; tar -xvf helm-deployment-package.tar; chmod +x helm-deployment-package/deploy-helm.sh

Configure parameter values

  1. The Helm deployment package contains a values.yaml file with the necessary configuration for this deployment. Go to helm-deployment-package/dynatrace-gcp-monitor and edit the values.yaml file, setting the required and optional parameter values as follows. A minimal example file is shown at the end of this section.

    You might want to store this file somewhere for future updates, since it will be needed in case of redeployment. Also, keep in mind that its schema can change. In that case, use the new file and only copy over the parameter values.

    Parameter name
    Description
    Default value

    parallelProcesses

    optional Number of parallel processes to run the whole log monitoring loop

    1

    numberOfConcurrentLogForwardingLoops

    optional Number of workers pulling logs from pubsub concurrently and pushing them to Dynatrace

    5

    numberOfConcurrentMessagePullCoroutines

    optional Number of concurrent coroutines to pull messages from pub/sub

    10

    numberOfConcurrentPushCoroutines

    optional Number of concurrent coroutines to push messages to Dynatrace

    5

    gcpProjectId

    required The ID of the GCP project you've selected for deployment.

    Your current project ID

    deploymentType

    required Leave this set to all.

    all

    dynatraceAccessKey

    required The Dynatrace API token you created in Dynatrace permissions above.

    dynatraceUrl

    required For SaaS log/metric ingestion, it's your environment URL (https://<your-environment-id>.live.dynatrace.com).
    For Managed log/metric ingestion, it's your cluster URL (https://<your_cluster_IP_or_hostname>/e/<your_environment_ID>).
    For Managed log/metric ingestion with an existing ActiveGate, it's the URL of your ActiveGate (https://<your_activegate_IP_or_hostname>:9999/e/<your_environment_ID>).
    Note: To determine <your-environment-id>, see environment ID.

    logsSubscriptionId

    required The ID of your log Sink Pub/Sub subscription. For details, see Configure log export.

    dynatraceLogIngestUrl

    optional You can set it if you want to ingest logs separately from metrics.
    For SaaS log ingestion, it's your environment URL (https://<your_environment_ID>.live.dynatrace.com)
    For Managed log ingestion with an existing ActiveGate, it's the URL of your ActiveGate (https://<your_activegate_IP_or_hostname>:9999/e/<your_environment_ID>)
    Note: To determine <your-environment-id>, see environment ID.

    dynatraceAccessKeySecretName

    optional You can specify the name of a secret in GCP Secret Manager from which to fetch the access key, instead of using dynatraceAccessKey.

    dynatraceUrlSecretName

    optional You can specify the key to fetch the endpoint from GCP Secret Manager, instead of using dynatraceUrl.

    dynatraceLogIngestUrlSecretName

    optional You can specify the key to fetch the endpoint from GCP Secret Manager, instead of using dynatraceLogIngestUrl.

    requireValidCertificate

    optional If set to true, the integration validates the SSL certificate of your Dynatrace environment. For SaaS log ingestion, we recommend leaving the default value. For Managed log ingestion with a new ActiveGate, we recommend setting this value to false.

    true

    selfMonitoringEnabled

    optional Send custom metrics to GCP to quickly diagnose if dynatrace-gcp-monitor processes and sends metrics/logs to Dynatrace properly. For details, see Self-monitoring metrics for the Dynatrace GCP integration.

    false

    serviceAccount

    optional Name of the service account to be created.

    dockerImage

    optional Dynatrace GCP Monitor Docker image. We recommend using the default value, but you can adapt it if needed.

    dynatrace/dynatrace-gcp-monitor:v1-latest

    logIngestContentMaxLength

    optional The maximum content length of a log event. Should be the same as or lower than the setting on your Dynatrace environment.

    8192

    logIngestAttributeValueMaxLength

    optional The maximum length of the log event attribute value. If it exceeds the server limit, content will be truncated.

    250

    logIngestRequestMaxEvents

    optional The maximum number of log events in a single payload to the logs ingestion endpoint. If it exceeds the server limit, payload will be rejected with code 413.

    5000

    logIngestRequestMaxSize

    optional The maximum size in bytes of a single payload to the logs ingestion endpoint. If it exceeds the server limit, payload will be rejected with code 413.

    1048576

    logIngestEventMaxAgeSeconds

    optional Determines the maximum age of a forwarded log event. Should be the same as or lower than the setting on your Dynatrace environment.

    86400

    printMetricIngestInput

    optional If set to true, the GCP Monitor outputs the lines of metrics to stdout.

    false

    serviceUsageBooking

    optional Service usage booking is used for metrics and determines a caller-specified project for quota and billing purposes. If set to source, monitoring API calls are booked in the project where the Kubernetes container is running. If set to destination, monitoring API calls are booked in the project that is monitored. For details, see Monitor multiple GCP projects - Standard environments - Step 4.

    source

    useProxy

    optional Depending on the value you set for this flag, the GCP Monitor will use the following proxy settings: Dynatrace (set to DT_ONLY), GCP API (set to GCP_ONLY), or both (set to ALL).

    By default, proxy settings are not used.

    httpProxy

    optional The proxy HTTP address; use this parameter in conjunction with useProxy.

    httpsProxy

    optional The proxy HTTPS address; use this parameter in conjunction with useProxy.

    gcpServicesYaml

    optional Configuration file for GCP services.

    queryInterval

    optional Metrics polling interval in minutes. Allowed values: 1 - 6

    3

    vpcNetwork

    optional Existing VPC Network where the autopilot cluster will be deployed. Shared VPC is not supported.

    default

    useCustomSubnet

    optional Set to true only if you want to use a custom mode VPC network.
    If set to true, you'll need to pass the customSubnetName parameter.

    false

    customSubnetName

    required Only if useCustomSubnet is set to true.
    Set this value to the subnet name you want to deploy the Google Cloud Monitor in.

    ""

    scopingProjectSupportEnabled

    optional Set to true when metrics scope is configured, so metrics will be collected from all projects added to the metrics scope. For details, see Monitor multiple GCP projects - Large environments.

    false

    excludedProjects

    optional Comma-separated list of projects to be excluded from monitoring (for example, <project_A>,<project_B>)

    excludedMetricsAndDimensions

    optional YAML-formatted list of metrics and their dimensions to be excluded from monitoring.

    metricAutodiscovery

    optional If set to true, the GCP Monitor will run metric auto-discovery mode, expanding your options for selecting metrics to monitor. For more information, see Monitor GCP projects using auto-discovery.

    false

  2. Choose which services you want Dynatrace to monitor.

    By default, the Dynatrace Google Cloud integration starts monitoring a set of selected services. Go to Google Cloud supported services for a list of supported services.

For DDU consumption information, see Monitoring consumption.
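
Putting the required parameters together, a minimal values.yaml sketch could look like this (placeholders only; keep the structure and remaining defaults of the file shipped in the package):

gcpProjectId: "<your_project_ID>"
deploymentType: "all"
dynatraceAccessKey: "<your_API_token>"
dynatraceUrl: "https://<your-environment-id>.live.dynatrace.com"
logsSubscriptionId: "<your-subscription-name>"
requireValidCertificate: "true"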

Connect your Kubernetes cluster

To connect to your existing GKE standard cluster or existing GKE Autopilot cluster, run the command below, making sure to replace

  • <cluster> with your cluster name
  • <region> with the region where your cluster is running
  • <project> with the project ID where your cluster is running
gcloud container clusters get-credentials <cluster> --region <region> --project <project>

For details, see Configuring cluster access for kubectl.

Run the deployment script

The deployment script will create an IAM service account with the necessary roles and deploy dynatrace-gcp-monitor to your GKE cluster. The latest versions of Google Cloud extensions will be uploaded.

You have two options:

  • Run the deployment script without parameters if you want to use the default values provided (dynatrace-gcp-monitor-sa for the IAM service account name and dynatrace_monitor for the IAM role name prefix):
cd helm-deployment-package
./deploy-helm.sh
  • Run the deployment script with parameters if you want to set your own values (be sure to replace the placeholders with your desired values):
cd helm-deployment-package
./deploy-helm.sh [--role-name <role-to-be-created/updated>]

To keep the currently installed versions of extensions that are already present, and install the latest versions only of the selected extensions that are not yet present, run the command below instead.

cd helm-deployment-package
./deploy-helm.sh --without-extensions-upgrade

Verify installation

To check whether installation was successful

  1. Check if the container is running.

    After the installation, it may take a couple of minutes until the container is up and running.

    kubectl -n dynatrace get pods
  2. Check the container logs for errors or exceptions (see the kubectl example after this list).

  3. Check if dashboards are imported.

    Go to Dashboards or Dashboards Classic (latest Dynatrace) and filter by Tag for Google Cloud. A number of dashboards for Google Cloud Services should be available.
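
To inspect the container logs with kubectl, a command along these lines can be used (namespace and deployment name as used elsewhere in this guide):

kubectl -n dynatrace logs deployment/dynatrace-gcp-monitor --all-containers=true --tail=100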

Choose services for metrics monitoring

Services enabled by default

Monitoring of a default set of services is enabled during the deployment of Google Cloud Monitor.

More service integrations are available, but they need to be enabled manually. Go to Google Cloud supported services for a list of supported services. The next section describes how to manage them. For an alternative approach, consider leveraging auto-discovery to extend your metric coverage.

Manage enabled services

You can manage enabled services via Dynatrace Hub.

Filter for "gcp"—you'll find annotations in the results for items that are already available in your environment.

To enable a new service, select it in Hub and then install it.

You can also disable a service via Dynatrace Hub.

To see if the services need updating, open them in Hub and check the release notes. Updates can include new metrics, new assets like dashboards, or other changes.

All changes to enabled services are applied to Google Cloud Monitor within a few minutes.

Feature sets & available metrics

To see which metrics are included for a specific service, check Google Cloud supported services. By default, only the defaultMetrics feature set is enabled. To enable additional feature sets, uncomment them in the values.yaml file and redeploy the whole Google Cloud Monitor.

The current configuration of feature sets can be found in the cluster's ConfigMap named dynatrace-gcp-function-config.

Advanced scope management

To further refine the monitoring scope, you can use the filter_conditions field in the values.yaml file. This requires the Google Cloud Monitor to be redeployed. See Google Cloud Monitoring filters for syntax.

Example:

filter_conditions: resource.labels.location = "us-central1-c" AND resource.labels.namespace_name = "dynatrace"

Enable alerting

To activate alerting, you need to enable metric events for alerting in Dynatrace.

To enable metric events

  1. Go to Settings.
  2. Select Anomaly detection > Metric events.
  3. Filter for Google Cloud alerts and turn on the On/Off switch for the alerts you want to activate.

View metrics and logs

After deploying the integration, depending on your deployment type, you can:

  • See metrics from monitored services: go to Metrics and filter by gcp.
  • View and analyze Google Cloud logs: in Dynatrace, go to Logs or Logs & Events (latest Dynatrace) and, to look for Google Cloud logs, filter by cloud.provider: gcp.
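
In the latest Dynatrace, an equivalent query in a Notebook could look like this minimal DQL sketch:

fetch logs
| filter cloud.provider == "gcp"
| sort timestamp desc
| limit 20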

Change deployment settings

Change parameters from values.yaml

To load a new values.yaml file, you need to upgrade your Helm release.

To update your Helm release

  1. Find out the name of the Helm release you're using.

    helm ls -n dynatrace
  2. Run the command below, making sure to replace <your-helm-release> with the value from the previous step.

    helm upgrade <your-helm-release> dynatrace-gcp-monitor -n dynatrace

For details, see Helm upgrade.

Change deployment type

To change the deployment type (all, metrics, or logs)

  1. Find out the name of the Helm release you're using.

    helm ls -n dynatrace
  2. Uninstall the release.

    Be sure to replace <your-helm-release> with the release name from the previous output.

    helm uninstall <your-helm-release> -n dynatrace
  3. Edit deploymentType in values.yaml with the new value and save the file.

  4. Run the deployment command again. For details, see Run the deployment script.

Verification

To investigate potential deployment and connectivity issues

  1. Verify installation
  2. Enable self-monitoring optional
  3. Check the dynatrace_gcp_<date_time>.log log file created during the installation process.
  • This file will be created each time the installation script runs.
  • The debug information won't contain sensitive data such as the Dynatrace access key.
  • If you are contacting a Dynatrace product expert via live chat:
    • Make sure to provide the dynatrace_gcp_<date_time>.log log file described in the previous step.
    • Provide version information.
      • For issues during installation, check the version.txt file.
      • For issues during runtime, check container logs.

Scaling guide for logs

The default container with 1.25 vCPU and 1 Gi memory (with the default configuration) can handle 8 GB of log throughput per hour. Achieving more throughput requires allocating more resources to the container (scaling up), increasing the number of container replicas (scaling out), and adjusting configuration values so the allocated resources are used efficiently. All configuration variables can be found and changed in the dynatrace-gcp-monitor-config ConfigMap.
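
For example, to review or adjust these variables directly on the cluster (ConfigMap name as referenced above; a restart of the pods is typically needed for environment-variable changes to take effect):

kubectl -n dynatrace get configmap dynatrace-gcp-monitor-config -o yaml
kubectl -n dynatrace edit configmap dynatrace-gcp-monitor-config
kubectl -n dynatrace rollout restart deployment dynatrace-gcp-monitor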

The following table presents tested configurations and the throughput achieved with scaled-up and scaled-out containers:

Achieved throughput
Machine resources
Replica sets
Config variable values

~8MB/s => ~480MB/min

4vCPU 4Gi RAM

1

PARALLEL_PROCESSES=4,
NUMBER_OF_CONCURRENT_MESSAGE_PULL_COROUTINES = 30,
NUMBER_OF_CONCURRENT_PUSH_COROUTINES=20

~25MB/s => ~1.5GB/min => ~2TB/day

4vCPU 4Gi RAM

4

PARALLEL_PROCESSES=4,
NUMBER_OF_CONCURRENT_MESSAGE_PULL_COROUTINES = 30,
NUMBER_OF_CONCURRENT_PUSH_COROUTINES=20

~46MB/s => ~2.7GB/min => ~4TB/day

4vCPU 4Gi RAM

6

PARALLEL_PROCESSES=4,
NUMBER_OF_CONCURRENT_MESSAGE_PULL_COROUTINES = 30,
NUMBER_OF_CONCURRENT_PUSH_COROUTINES=20

Autoscaling guide for logs

Autoscaling works only for the logs deployment type, not for all.

We recommend manually scaling the container up to 4 vCPU and 4 Gi (as sketched below) and then enabling autoscaling.
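
One way to apply that scale-up to an existing deployment is with kubectl (a sketch, using the deployment name from this guide). Remember to also raise the related config variables, for example PARALLEL_PROCESSES, as shown in the table above.

kubectl -n dynatrace set resources deployment/dynatrace-gcp-monitor \
  --requests=cpu=4,memory=4Gi --limits=cpu=4,memory=4Gi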

GCP provides autoscaling of containers in both directions: horizontal and vertical. However, Dynatrace recommends only horizontal scaling.

If you have a 4vCPU 4Gi machine, you can enable autoscaling horizontally. However, we don't recommend scaling horizontally with the base resources of the container (1.25vCPU, 1Gi). It hasn't been proven to be efficient during testing. One 4vCPU machine does better than four 1vCPU machines. To enable autoscaling horizontally, use the horizontal autoscaling command:

kubectl autoscale deployment dynatrace-gcp-monitor --namespace dynatrace --cpu-percent=90 --min=1 --max=6

Autoscaling is recommended only when you have a minimum of 450 MB/min throughput and can provide a 4vCPU 4Gi RAM machine. Autoscaling is only scaling out, not scaling the machine up.

We don't recommend scaling vertically because every time a machine is scaled up, an environment variable needs to be changed to create more processes corresponding to machine cores.

Uninstall

  1. Find out the name of the Helm release you're using.
helm ls -n dynatrace
  2. Uninstall the release.

Be sure to replace <your-helm-release> with the release name from the previous output.

helm uninstall <your-helm-release> -n dynatrace

Alternatively, you can delete the namespace.

kubectl delete namespace dynatrace
  3. To remove all monitoring assets (such as dashboards and alerts) from Dynatrace, you need to remove all Google Cloud extensions.

You can find and delete relevant extensions via Dynatrace Hub.

Make sure to uninstall the following resources manually:
  • The initial Role created and attached to the Service Account that you used to deploy the integration.
  • The PubSub Topic.
  • The PubSub Subscription.
  • The Log Routing Sink.

Monitoring consumption

Metric ingestion

All cloud services consume DDUs. The amount of DDU consumption per service instance depends on the number of monitored metrics and their dimensions (each metric dimension results in the ingestion of 1 data point; 1 data point consumes 0.001 DDUs). For details, see Extending Dynatrace (Davis data units).
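
As an illustration with made-up numbers: a service instance reporting 10 metrics with 3 dimension combinations each, once per minute, ingests 30 data points per minute, which is 30 × 0.001 = 0.03 DDUs per minute, or about 43.2 DDUs per day.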

Log ingestion

DDU consumption applies to cloud Log Monitoring. See DDUs for Log Monitoring for details.