Set up the Dynatrace GCP metric integration on a GKE cluster
Dynatrace version 1.230+
As an alternative to the main deployment, which provides Google Cloud Platform monitoring for both metrics and logs, you can set up monitoring for metrics only. In this scenario, you'll run the deployment script in Google Cloud Shell. The instructions depend on where you want the deployment script to run:
- On a new GKE Autopilot cluster created automatically (recommended)
- On an existing GKE standard or GKE Autopilot cluster
During setup, GKE will run a metric forwarder container. After installation, you'll get metrics, dashboards, and alerts for your configured services in Dynatrace.
For other deployment options, see Alternative deployment scenarios.
This page describes how to install version 1.0 of the GCP integration on a GKE cluster.
- If you already have an earlier version installed, you need to migrate.
Limitations
Dynatrace GCP metric integration supports up to 50 GCP projects with the standard deployment. To monitor larger environments, you need to enable metrics scope. See Monitor multiple GCP projects - Large environments.
Prerequisites
To deploy the integration, you need to make sure the following requirements are met on the machine where you are running the installation.
- Linux OS only
- Internet access
- GKE cluster access
- Dynatrace environment access
You need to configure the Dynatrace endpoint (environment, cluster, or ActiveGate URL) to which the GKE Autopilot cluster should send metrics and logs. Make sure that you have direct network access or, if there is a proxy or any other component in between, that communication is not affected.
Tools
You can deploy the Dynatrace GCP integration in Google Cloud Shell or in bash. If you use bash, you need to install the command-line tools used in the steps below, such as the gcloud CLI, kubectl, and Helm.
GCP permissions
Running the deployment script requires a set of permissions. You need to create a custom role (see below) and use it to deploy dynatrace-gcp-monitor.
- Create a YAML file named dynatrace-gcp-monitor-helm-deployment-role.yaml with the following content:
```yaml
title: Dynatrace GCP Monitor helm deployment role
description: Role for Dynatrace GCP Monitor helm and pubsub deployment
stage: GA
includedPermissions:
  - container.clusters.get
  - container.configMaps.create
  - container.configMaps.delete
  - container.configMaps.get
  - container.configMaps.update
  - container.deployments.create
  - container.deployments.delete
  - container.deployments.get
  - container.deployments.update
  - container.namespaces.create
  - container.namespaces.get
  - container.pods.get
  - container.pods.list
  - container.secrets.create
  - container.secrets.delete
  - container.secrets.get
  - container.secrets.list
  - container.secrets.update
  - container.serviceAccounts.create
  - container.serviceAccounts.delete
  - container.serviceAccounts.get
  - iam.roles.create
  - iam.roles.list
  - iam.roles.update
  - iam.serviceAccounts.actAs
  - iam.serviceAccounts.create
  - iam.serviceAccounts.getIamPolicy
  - iam.serviceAccounts.list
  - iam.serviceAccounts.setIamPolicy
  - resourcemanager.projects.get
  - resourcemanager.projects.getIamPolicy
  - resourcemanager.projects.setIamPolicy
  - serviceusage.services.enable
  - serviceusage.services.get
  - serviceusage.services.list
```
Each group of permissions is used to handle the different resources included in the integration. Creation and access are for new resources, update is for reusing existing resources, and deletion is for uninstalling.
- container.configMaps: for mapping secrets and other variables used by the containers.
- container.deployments: for the Kubernetes deployment within the cluster (which includes the pods, containers, and so on).
- container.namespaces: for the Kubernetes namespace in which the resources are deployed.
- container.pods: for the Kubernetes pods.
- container.secrets: for the Kubernetes secrets in which the data-sensitive variables are stored.
- container.serviceAccounts: for the Kubernetes service account used within the cluster.
- iam.roles: for the permissions needed for metrics collection.
- iam.serviceAccounts: for the IAM service account used by the integration.
- resourcemanager.projects: for handling the project in which the integration is deployed.
- serviceusage.services: for enabling and checking the required service APIs.
- Run the command below, replacing <your_project_ID> with the project ID where you want to deploy the Dynatrace integration.

```bash
gcloud iam roles create dynatrace_monitor.helm_deployment --project=<your_project_ID> --file=dynatrace-gcp-monitor-helm-deployment-role.yaml
```
Be sure to add this role to your GCP user. For details, see Grant or revoke a single role.
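For example, a minimal sketch of granting that role to your user with gcloud (the user email is a placeholder; the role path uses the project-level custom role format):

```bash
# Grant the custom role created above to the user who will run the deployment script.
gcloud projects add-iam-policy-binding <your_project_ID> \
  --member="user:<your_user_email>" \
  --role="projects/<your_project_ID>/roles/dynatrace_monitor.helm_deployment"
```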
GCP settings
The location where you deploy the integration determines whether you need to change additional settings.
Deploy on a GKE Autopilot cluster
If you deploy the integration on an existing GKE Autopilot cluster, or on a new Autopilot cluster that will be created automatically by the deployment script, you don't need to change any additional settings.
Deploy on a GKE standard cluster
If you deploy the integration on an existing GKE standard cluster, you need to:
Dynatrace permissions
- Create an API token and enable the following permissions:
  - API v1:
    - Read configuration
    - Write configuration
  - API v2:
    - Ingest metrics
    - Read extensions
    - Write extensions
    - Read extension monitoring configurations
    - Write extension monitoring configurations
    - Read extension environment configurations
    - Write extension environment configurations
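Optionally, you can verify that the token and endpoint work for metric ingestion before deploying. A minimal sketch using the Dynatrace Metrics ingest API (the metric name and dimension are arbitrary test values; adjust the URL for Managed or ActiveGate setups as described under the dynatraceUrl parameter below):

```bash
# Send a single test data point; a 202 response indicates the token can ingest metrics.
curl -X POST "https://<your-environment-id>.live.dynatrace.com/api/v2/metrics/ingest" \
  -H "Authorization: Api-Token <your_dynatrace_api_token>" \
  -H "Content-Type: text/plain; charset=utf-8" \
  --data "gcp.setup.test.metric,purpose=token-check 1"
```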
Install
Complete the steps below to finish your setup.
1. Download the Helm deployment package in Google Cloud Shell
2. Configure parameter values
3. Connect your Kubernetes cluster
4. Run the deployment script
Download the Helm deployment package in Google Cloud Shell
```bash
wget -q "https://github.com/dynatrace-oss/dynatrace-gcp-monitor/releases/latest/download/helm-deployment-package.tar"; tar -xvf helm-deployment-package.tar; chmod +x helm-deployment-package/deploy-helm.sh
```
Configure parameter values
- The Helm deployment package contains a values.yaml file with the necessary configuration for this deployment. Go to helm-deployment-package/dynatrace-gcp-monitor and edit the values.yaml file, setting the required and optional parameter values as follows (a minimal example snippet is shown below, after the parameter table and service selection). You might want to store this file somewhere for future updates, since it will be needed in case of redeployments. Also, keep in mind that its schema can change; in that case, use the new file and only copy over the parameter values.
| Parameter name | Description | Default value |
|---|---|---|
| gcpProjectId required | The ID of the GCP project you've selected for deployment. | Your current project ID |
| deploymentType required | Set to 'metrics'. | all |
| dynatraceAccessKey required | Your Dynatrace API token with the required permissions. | |
| dynatraceUrl required | For SaaS metric ingestion, it's your environment URL (https://<your-environment-id>.live.dynatrace.com). For Managed metric ingestion, it's your cluster URL (https://<your_cluster_IP_or_hostname>/e/<your_environment_ID>). For Managed metric ingestion with an existing ActiveGate, it's the URL of your ActiveGate (https://<your_activegate_IP_or_hostname>:9999/e/<your_environment_ID>). Note: To determine <your-environment-id>, see environment ID. | |
| requireValidCertificate optional | If set to true, Dynatrace requires the SSL certificate of your Dynatrace environment. | true |
| selfMonitoringEnabled optional | Send custom metrics to GCP to quickly diagnose whether dynatrace-gcp-monitor processes and sends metrics to Dynatrace properly. For details, see Self-monitoring metrics for the Dynatrace GCP integration. | false |
| serviceAccount optional | Name of the service account to be created. | |
| dockerImage optional | Dynatrace GCP Monitor docker image. We recommend using the default value, but you can adapt it if needed. | dynatrace/dynatrace-gcp-monitor:v1-latest |
| printMetricIngestInput optional | If set to true, the GCP Monitor outputs the lines of metrics to stdout. | false |
| serviceUsageBooking optional | Service usage booking is used for metrics and determines a caller-specified project for quota and billing purposes. If set to source, monitoring API calls are booked in the project where the Kubernetes container is running. If set to destination, monitoring API calls are booked in the project that is monitored. For details, see Monitor multiple GCP projects - Standard environments - Step 4. | source |
| useProxy optional | Depending on the value you set for this flag, the GCP Monitor will use the following proxy settings: Dynatrace only (set to DT_ONLY), GCP API only (set to GCP_ONLY), or both (set to ALL). | By default, proxy settings are not used. |
| httpProxy optional | The proxy HTTP address; use this flag in conjunction with USE_PROXY. | |
| httpsProxy optional | The proxy HTTPS address; use this flag in conjunction with USE_PROXY. | |
| gcpServicesYaml optional | Configuration file for GCP services. | |
| queryInterval optional | Metrics polling interval in minutes. Allowed values: 1-6. | 3 |
| scopingProjectSupportEnabled optional | Set to true when metrics scope is configured, so metrics will be collected from all projects added to the metrics scope. For details, see Monitor multiple GCP projects - Large environments. | false |
| excludedProjects optional | Comma-separated list of projects to be excluded from monitoring (for example, <project_A>,<project_B>). | |

- Choose which services you want Dynatrace to monitor.
By default, the Dynatrace GCP integration starts monitoring a set of selected services. Go to Google Cloud Platform supported service metrics for a list of supported services.
For DDU consumption information, see Monitoring consumption.
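For orientation, a minimal metrics-only configuration might set just the required parameters, as in the sketch below. All values are placeholders taken from the parameter table above; the values.yaml shipped with your package version may contain additional keys, so edit that file rather than replacing it.

```yaml
# Minimal sketch of the required values.yaml parameters (placeholder values only).
gcpProjectId: "<your_project_ID>"
deploymentType: "metrics"
dynatraceAccessKey: "<your_dynatrace_api_token>"
dynatraceUrl: "https://<your-environment-id>.live.dynatrace.com"
requireValidCertificate: "true"
```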
Connect your Kubernetes cluster
- If you want the deployment script to create a new GKE Autopilot cluster, add --create-autopilot-cluster to the script. In this case, the connection to the cluster is set up automatically and you can proceed to step 4 (Run the deployment script).
- If you run the deployment script on an existing GKE standard or GKE Autopilot cluster, you can connect to your cluster from the GCP console or via terminal. Follow the instructions below; a terminal example is sketched after this list.
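If you connect via terminal, a typical sketch (cluster name, zone, and project ID are placeholders) uses gcloud to configure kubectl credentials:

```bash
# Point kubectl at the existing GKE cluster that will host dynatrace-gcp-monitor.
gcloud container clusters get-credentials <your_cluster_name> --zone <your_cluster_zone> --project <your_project_ID>
```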
Run the deployment script
- If you run the deployment script on an existing GKE standard or GKE Autopilot cluster, the deployment script will create an IAM service account with the necessary roles and deploy dynatrace-gcp-monitor to your Kubernetes cluster.
- If you run the deployment script with the --create-autopilot-cluster option, the deployment script will automatically create the new GKE Autopilot cluster and deploy dynatrace-gcp-monitor to it.
To run the deployment script, follow the instructions below.
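As a sketch, once values.yaml is configured, running the script from the download location looks roughly like this (deploy-helm.sh is the script extracted earlier; additional flags may apply to your setup):

```bash
cd helm-deployment-package
# Deploy to the cluster your current kubectl context points to:
./deploy-helm.sh
# Or let the script create a new GKE Autopilot cluster first:
./deploy-helm.sh --create-autopilot-cluster
```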
Verify installation
To check whether installation was successful
- Check if the container is running. After the installation, it may take a couple of minutes before the container is up and running.

```bash
kubectl -n dynatrace get pods
```
- Check the container logs for errors or exceptions. You have two options (a terminal-based sketch is shown after this list):
- Check if dashboards are imported. In the Dynatrace menu, go to Dashboards and filter by Tag for Google Cloud. A number of dashboards for Google Cloud services should be available.
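If you check the logs from a terminal, a minimal sketch (replace the pod name placeholder with a name from the kubectl get pods output above):

```bash
# <pod-name> comes from the 'kubectl -n dynatrace get pods' output.
kubectl -n dynatrace logs <pod-name> --tail=100
```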
Choose services for metrics monitoring
Services enabled by default
Monitoring of the following services will be enabled during the deployment of the GCP Monitor:
There are more service integrations available, but they need to be enabled. Go to Google Cloud Platform supported service metrics for a list of supported services. The next section describes how to manage them.
Manage enabled services
You can manage enabled services through your Dynatrace Hub (Dynatrace UI > Manage > Dynatrace Hub).
Filter for gcp in the hub. Tiles with the In environment label or the New version available label are already enabled for metrics monitoring.
To enable a new service, open it in the hub and select Add to environment.
To disable a service, open it in the hub, go to the Configuration tab, and remove all loaded versions (the ones with a trash bin icon). Make sure to remove all of them; if you have been updating a specific service, it will otherwise keep reverting to previous versions.
Services with the New version available label can be updated. Open them in the hub and check the release notes; updates can bring new metrics and new assets such as dashboards. To update a service, select the Update extension button in the release notes box.
All changes to enabled services are applied to the GCP Monitor within a few minutes.
Feature sets & available metrics
To see which metrics are included for a specific service, check Google Cloud Platform supported service metrics. By default, only the defaultMetrics feature set is enabled. To enable additional feature sets, uncomment them in the values.yaml file and redeploy the GCP Monitor.
The current configuration of feature sets can be found in the cluster's ConfigMap named dynatrace-gcp-function-config.
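For example, to inspect the current feature-set configuration stored in that ConfigMap (a read-only check; the ConfigMap name is as given above):

```bash
kubectl -n dynatrace get configmap dynatrace-gcp-function-config -o yaml
```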
Advanced scope management
To further refine the monitoring scope, you can use the filter_conditions field in the values.yaml file. This requires the GCP Monitor to be redeployed. See GCP Monitoring filters for syntax.
Example:
```yaml
filter_conditions:
  resource.labels.location = "us-central1-c" AND resource.labels.namespace_name = "dynatrace"
```
Enable alerting
To activate alerting, you need to enable metric events for alerting in Dynatrace.
To enable metric events
- In the Dynatrace menu, go to Settings.
- In Anomaly detection, select Metric events.
- Filter for GCP alerts and turn on the On/Off switch for the alerts you want to activate.
View metrics
After deploying the integration, you can see metrics from monitored services (in the Dynatrace menu, go to Metrics and filter by gcp).
Change deployment settings
Change parameters from values.yaml
To load a new values.yaml file, you need to upgrade your Helm release.
To update your Helm release
- Find out what Helm release version you're using.

```bash
helm ls -n dynatrace
```

- Run the command below, making sure to replace <your-helm-release> with the value from the previous step.

```bash
helm upgrade <your-helm-release> dynatrace-gcp-monitor -n dynatrace
```
For details, see Helm upgrade.
Change deployment type
To change the deployment type (all, metrics, or logs)
- Find out what Helm release version you're using.

```bash
helm ls -n dynatrace
```

- Uninstall the release. Be sure to replace <your-helm-release> with the release name from the previous output.

```bash
helm uninstall <your-helm-release> -n dynatrace
```

- Edit deploymentType in values.yaml with the new value and save the file.
- Run the deployment command again. For details, see Run the deployment script.
Verification
To investigate potential deployment and connectivity issues
- Verify installation
- Enable self-monitoring optional
- Check the dynatrace_gcp_<date_time>.log log file created during the installation process. This file will be created each time the installation script runs. The debug information won't contain sensitive data such as the Dynatrace access key.
- If you are contacting a Dynatrace product expert via live chat:
  - Make sure to provide the dynatrace_gcp_<date_time>.log log file described in the previous step.
  - Provide version information.
    - For issues during installation, check the version.txt file.
    - For issues during runtime, check container logs.
Uninstall
Find out what Helm release version you're using.

```bash
helm ls -n dynatrace
```

Uninstall the release. Be sure to replace <your-helm-release> with the release name from the previous output.

```bash
helm uninstall <your-helm-release> -n dynatrace
```
Alternatively, you can delete the namespace.

```bash
kubectl delete namespace dynatrace
```
To remove all monitoring assets (dashboards, alerts, etc) from Dynatrace, you need to remove all GCP extensions.
To remove an extension
- In the Dynatrace menu, go to Extensions and search for the GCP extensions.
- Select an extension you want to remove, and then select the trash icon in the Actions column to remove it.
Repeat the procedure until you remove all GCP extensions.
Make sure to also delete the custom role that you created and attached to the service account used to deploy the integration.
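A sketch for removing that role with gcloud (project ID and role ID as used in the GCP permissions section above):

```bash
# Delete the custom deployment role once the integration is uninstalled.
gcloud iam roles delete dynatrace_monitor.helm_deployment --project=<your_project_ID>
```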
Monitoring consumption
All cloud services consume DDUs. The amount of DDU consumption per service instance depends on the number of monitored metrics and their dimensions (each metric dimension results in the ingestion of 1 data point; 1 data point consumes 0.001 DDUs). For details, see Extending Dynatrace (Davis data units).
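For example, ingesting 1,000 data points consumes 1,000 × 0.001 = 1 DDU.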