Dynatrace Google Cloud integration leverages data collected from the Google Operations API to continuously monitor the health and performance of Google Cloud services. It combines all relevant data into dashboards and also enables alerting and event tracking.
After integration, Dynatrace automatically monitors a number of preset Google Cloud services and feature sets (metrics). You can add more services or feature sets to monitoring as needed. For details, see Add or remove services.
For a list of feature sets available for this service, see Metric table.
After deploying the integration, you can see metrics from monitored services in the Metrics browser, Data Explorer, and your dashboard tiles.
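When locating these metrics in the Metrics browser or Data Explorer, it helps to know how a GCP metric identifier maps to a Dynatrace metric key. The sketch below assumes the usual convention for ingested Google Cloud metrics: a `cloud.gcp.` prefix, with the slashes of the GCP identifier replaced by dots. Verify the exact key in your environment's Metrics browser.

```python
def dynatrace_metric_key(gcp_identifier: str) -> str:
    """Derive the (assumed) Dynatrace metric key for a GCP metric identifier.

    Assumption: Dynatrace ingests Google Cloud metrics under the
    "cloud.gcp." prefix and replaces the "/" separators with ".".
    """
    return "cloud.gcp." + gcp_identifier.replace("/", ".")

# Example: the "Number of online prediction errors" metric from the table below.
key = dynatrace_metric_key("aiplatform_googleapis_com/prediction/online/error_count")
print(key)  # cloud.gcp.aiplatform_googleapis_com.prediction.online.error_count
```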
The following feature sets are available for Google Cloud Vertex AI.
Feature set | Name | Unit | GCP metric identifier |
---|---|---|---|
vertexai_deployment_resource_pool/default_metrics | Accelerator duty cycle | Percent | aiplatform_googleapis_com/prediction/online/deployment_resource_pool/accelerator/duty_cycle |
vertexai_deployment_resource_pool/default_metrics | Accelerator memory usage | Byte | aiplatform_googleapis_com/prediction/online/deployment_resource_pool/accelerator/memory/bytes_used |
vertexai_deployment_resource_pool/default_metrics | CPU utilization | Percent | aiplatform_googleapis_com/prediction/online/deployment_resource_pool/cpu/utilization |
vertexai_deployment_resource_pool/default_metrics | Memory usage | Byte | aiplatform_googleapis_com/prediction/online/deployment_resource_pool/memory/bytes_used |
vertexai_deployment_resource_pool/default_metrics | Network bytes received | Byte | aiplatform_googleapis_com/prediction/online/deployment_resource_pool/network/received_bytes_count |
vertexai_deployment_resource_pool/default_metrics | Network bytes sent | Byte | aiplatform_googleapis_com/prediction/online/deployment_resource_pool/network/sent_bytes_count |
vertexai_deployment_resource_pool/default_metrics | Replica count | Count | aiplatform_googleapis_com/prediction/online/deployment_resource_pool/replicas |
vertexai_deployment_resource_pool/default_metrics | Replica target | Count | aiplatform_googleapis_com/prediction/online/deployment_resource_pool/target_replicas |
vertexai_endpoint/default_metrics | Accelerator duty cycle | Percent | aiplatform_googleapis_com/prediction/online/accelerator/duty_cycle |
vertexai_endpoint/default_metrics | Accelerator memory usage | Byte | aiplatform_googleapis_com/prediction/online/accelerator/memory/bytes_used |
vertexai_endpoint/default_metrics | CPU utilization | Percent | aiplatform_googleapis_com/prediction/online/cpu/utilization |
vertexai_endpoint/default_metrics | Number of online prediction errors | Count | aiplatform_googleapis_com/prediction/online/error_count |
vertexai_endpoint/default_metrics | Memory usage | Byte | aiplatform_googleapis_com/prediction/online/memory/bytes_used |
vertexai_endpoint/default_metrics | Network bytes received | Byte | aiplatform_googleapis_com/prediction/online/network/received_bytes_count |
vertexai_endpoint/default_metrics | Network bytes sent | Byte | aiplatform_googleapis_com/prediction/online/network/sent_bytes_count |
vertexai_endpoint/default_metrics | Number of online predictions | Count | aiplatform_googleapis_com/prediction/online/prediction_count |
vertexai_endpoint/default_metrics | Prediction latencies | MilliSecond | aiplatform_googleapis_com/prediction/online/prediction_latencies |
vertexai_endpoint/default_metrics | Private endpoint prediction latencies | MilliSecond | aiplatform_googleapis_com/prediction/online/private/prediction_latencies |
vertexai_endpoint/default_metrics | Private endpoint response count | Count | aiplatform_googleapis_com/prediction/online/private/response_count |
vertexai_endpoint/default_metrics | Replica count | Count | aiplatform_googleapis_com/prediction/online/replicas |
vertexai_endpoint/default_metrics | Response count | Count | aiplatform_googleapis_com/prediction/online/response_count |
vertexai_endpoint/default_metrics | Replica target | Count | aiplatform_googleapis_com/prediction/online/target_replicas |
vertexai_feature_online_store/feature_store | Request count | Count | aiplatform_googleapis_com/featureonlinestore/online_serving/request_count |
vertexai_feature_online_store/feature_store | Response bytes count | Byte | aiplatform_googleapis_com/featureonlinestore/online_serving/serving_bytes_count |
vertexai_feature_online_store/feature_store | Request latency | MilliSecond | aiplatform_googleapis_com/featureonlinestore/online_serving/serving_latencies |
vertexai_feature_online_store/feature_store | Running syncs | Count | aiplatform_googleapis_com/featureonlinestore/running_sync |
vertexai_feature_online_store/feature_store | Serving data ages | Second | aiplatform_googleapis_com/featureonlinestore/serving_data_ages |
vertexai_feature_online_store/feature_store | Serving data by synced time | Count | aiplatform_googleapis_com/featureonlinestore/serving_data_by_sync_time |
vertexai_feature_online_store/feature_store | CPU load | Percent | aiplatform_googleapis_com/featureonlinestore/storage/bigtable_cpu_load |
vertexai_feature_online_store/feature_store | CPU load (hottest node) | Percent | aiplatform_googleapis_com/featureonlinestore/storage/bigtable_cpu_load_hottest_node |
vertexai_feature_online_store/feature_store | Node count | Count | aiplatform_googleapis_com/featureonlinestore/storage/bigtable_nodes |
vertexai_feature_online_store/feature_store | Optimized node count | Count | aiplatform_googleapis_com/featureonlinestore/storage/optimized_nodes |
vertexai_feature_online_store/feature_store | Bytes stored | Byte | aiplatform_googleapis_com/featureonlinestore/storage/stored_bytes |
vertexai_feature_store/feature_store | CPU load | Percent | aiplatform_googleapis_com/featurestore/cpu_load |
vertexai_feature_store/feature_store | CPU load (hottest node) | Percent | aiplatform_googleapis_com/featurestore/cpu_load_hottest_node |
vertexai_feature_store/feature_store | Node count | Count | aiplatform_googleapis_com/featurestore/node_count |
vertexai_feature_store/feature_store | Entities updated on the Featurestore online storage | Count | aiplatform_googleapis_com/featurestore/online_entities_updated |
vertexai_feature_store/feature_store | Latencies | MilliSecond | aiplatform_googleapis_com/featurestore/online_serving/latencies |
vertexai_feature_store/feature_store | Request size | Byte | aiplatform_googleapis_com/featurestore/online_serving/request_bytes_count |
vertexai_feature_store/feature_store | Serving count | Count | aiplatform_googleapis_com/featurestore/online_serving/request_count |
vertexai_feature_store/feature_store | Response size | Byte | aiplatform_googleapis_com/featurestore/online_serving/response_size |
vertexai_feature_store/feature_store | Billable bytes | Byte | aiplatform_googleapis_com/featurestore/storage/billable_processed_bytes |
vertexai_feature_store/feature_store | Bytes stored | Byte | aiplatform_googleapis_com/featurestore/storage/stored_bytes |
vertexai_feature_store/feature_store | Offline storage write for streaming write | Count | aiplatform_googleapis_com/featurestore/streaming_write/offline_processed_count |
vertexai_feature_store/feature_store | Streaming write to offline storage delay time | Second | aiplatform_googleapis_com/featurestore/streaming_write/offline_write_delays |
vertexai_location/default_metrics | Executing PipelineJobs | Count | aiplatform_googleapis_com/executing_vertexai_pipeline_jobs |
vertexai_location/default_metrics | Executing PipelineTasks | Count | aiplatform_googleapis_com/executing_vertexai_pipeline_tasks |
vertexai_location/default_metrics | Generate content requests per minute per project per base model | Count | aiplatform_googleapis_com/generate_content_requests_per_minute_per_project_per_base_model |
vertexai_location/default_metrics | Online prediction dedicated requests per base model version | Count | aiplatform_googleapis_com/online_prediction_dedicated_requests_per_base_model_version |
vertexai_location/default_metrics | Online prediction dedicated tokens per minute per base model version | Count | aiplatform_googleapis_com/online_prediction_dedicated_tokens_per_base_model_version |
vertexai_location/default_metrics | Online prediction requests per base model | Count | aiplatform_googleapis_com/online_prediction_requests_per_base_model |
vertexai_location/default_metrics | Online prediction tokens per minute per base model | Count | aiplatform_googleapis_com/online_prediction_tokens_per_minute_per_base_model |
vertexai_location/default_metrics | Generate content requests per minute per project per base model quota exceeded error | Count | aiplatform_googleapis_com/quota/generate_content_requests_per_minute_per_project_per_base_model/exceeded |
vertexai_location/default_metrics | Generate content requests per minute per project per base model quota limit | Count | aiplatform_googleapis_com/quota/generate_content_requests_per_minute_per_project_per_base_model/limit |
vertexai_location/default_metrics | Generate content requests per minute per project per base model quota usage | Count | aiplatform_googleapis_com/quota/generate_content_requests_per_minute_per_project_per_base_model/usage |
vertexai_location/default_metrics | Online prediction dedicated requests per base model version quota exceeded error | Count | aiplatform_googleapis_com/quota/online_prediction_dedicated_requests_per_base_model_version/exceeded |
vertexai_location/default_metrics | Online prediction dedicated requests per base model version quota limit | Count | aiplatform_googleapis_com/quota/online_prediction_dedicated_requests_per_base_model_version/limit |
vertexai_location/default_metrics | Online prediction dedicated requests per base model version quota usage | Count | aiplatform_googleapis_com/quota/online_prediction_dedicated_requests_per_base_model_version/usage |
vertexai_location/default_metrics | Online prediction dedicated tokens per minute per base model version quota exceeded error | Count | aiplatform_googleapis_com/quota/online_prediction_dedicated_tokens_per_base_model_version/exceeded |
vertexai_location/default_metrics | Online prediction dedicated tokens per minute per base model version quota limit | Count | aiplatform_googleapis_com/quota/online_prediction_dedicated_tokens_per_base_model_version/limit |
vertexai_location/default_metrics | Online prediction dedicated tokens per minute per base model version quota usage | Count | aiplatform_googleapis_com/quota/online_prediction_dedicated_tokens_per_base_model_version/usage |
vertexai_location/default_metrics | Online prediction requests per base model quota exceeded | Count | aiplatform_googleapis_com/quota/online_prediction_requests_per_base_model/exceeded |
vertexai_location/default_metrics | Online prediction requests per base model quota limit | Count | aiplatform_googleapis_com/quota/online_prediction_requests_per_base_model/limit |
vertexai_location/default_metrics | Online prediction requests per base model quota usage | Count | aiplatform_googleapis_com/quota/online_prediction_requests_per_base_model/usage |
vertexai_location/default_metrics | Online prediction tokens per minute per base model quota exceeded | Count | aiplatform_googleapis_com/quota/online_prediction_tokens_per_minute_per_base_model/exceeded |
vertexai_location/default_metrics | Online prediction tokens per minute per base model quota limit | Count | aiplatform_googleapis_com/quota/online_prediction_tokens_per_minute_per_base_model/limit |
vertexai_location/default_metrics | Online prediction tokens per minute per base model quota usage | Count | aiplatform_googleapis_com/quota/online_prediction_tokens_per_minute_per_base_model/usage |
vertexai_pipeline_job/pipelines | PipelineJob duration | Second | aiplatform_googleapis_com/pipelinejob/duration |
vertexai_pipeline_job/pipelines | Completed PipelineTasks | Count | aiplatform_googleapis_com/pipelinejob/task_completed_count |
vertexai_index/vector_search | Datapoint count | Count | aiplatform_googleapis_com/matching_engine/stream_update/datapoint_count |
vertexai_index/vector_search | Stream update latencies | MilliSecond | aiplatform_googleapis_com/matching_engine/stream_update/latencies |
vertexai_index/vector_search | Request count | Count | aiplatform_googleapis_com/matching_engine/stream_update/request_count |
vertexai_index_endpoint/vector_search | CPU request utilization | Percent | aiplatform_googleapis_com/matching_engine/cpu/request_utilization |
vertexai_index_endpoint/vector_search | Current replicas | Count | aiplatform_googleapis_com/matching_engine/current_replicas |
vertexai_index_endpoint/vector_search | Current shards | Count | aiplatform_googleapis_com/matching_engine/current_shards |
vertexai_index_endpoint/vector_search | Memory usage | Byte | aiplatform_googleapis_com/matching_engine/memory/used_bytes |
vertexai_index_endpoint/vector_search | Request latency | MilliSecond | aiplatform_googleapis_com/matching_engine/query/latencies |
vertexai_index_endpoint/vector_search | Request count | Count | aiplatform_googleapis_com/matching_engine/query/request_count |
vertexai_publisher_model/default_metrics | Character count | Count | aiplatform_googleapis_com/publisher/online_serving/character_count |
vertexai_publisher_model/default_metrics | Characters | Count | aiplatform_googleapis_com/publisher/online_serving/characters |
vertexai_publisher_model/default_metrics | Character throughput | Count | aiplatform_googleapis_com/publisher/online_serving/consumed_throughput/count |
vertexai_publisher_model/default_metrics | First token latencies | MilliSecond | aiplatform_googleapis_com/publisher/online_serving/first_token_latencies |
vertexai_publisher_model/default_metrics | Model invocation count | Count | aiplatform_googleapis_com/publisher/online_serving/model_invocation_count |
vertexai_publisher_model/default_metrics | Model invocation latencies | MilliSecond | aiplatform_googleapis_com/publisher/online_serving/model_invocation_latencies |
vertexai_publisher_model/default_metrics | Token count | Count | aiplatform_googleapis_com/publisher/online_serving/token_count |
vertexai_publisher_model/default_metrics | Tokens | Count | aiplatform_googleapis_com/publisher/online_serving/tokens |
visionai_instance/vision_ai | Request count | Count | visionai_googleapis_com/platform/connected_service/request_count |
visionai_instance/vision_ai | Request latencies | MilliSecond | visionai_googleapis_com/platform/connected_service/request_latencies |
visionai_instance/vision_ai | Prediction count | Count | visionai_googleapis_com/platform/custom_model/predict_count |
visionai_instance/vision_ai | Prediction latencies | MilliSecond | visionai_googleapis_com/platform/custom_model/predict_latencies |
visionai_instance/vision_ai | Uptime | MilliSecond | visionai_googleapis_com/platform/instance/uptime |
visionai_stream/vision_ai | Received bytes | Byte | visionai_googleapis_com/stream/network/received_bytes_count |
visionai_stream/vision_ai | Received packets | Count | visionai_googleapis_com/stream/network/received_packets_count |
visionai_stream/vision_ai | Sent bytes | Byte | visionai_googleapis_com/stream/network/sent_bytes_count |
visionai_stream/vision_ai | Sent packets | Count | visionai_googleapis_com/stream/network/sent_packets_count |
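Beyond dashboards, any metric in the table can be retrieved programmatically through the Dynatrace Metrics API v2. The sketch below only builds the query URL; the environment ID placeholder and the metric key (derived from the GCP identifier under the assumed `cloud.gcp.` naming) are illustrative and should be checked against your own environment. An authenticated request additionally needs an API token with the `metrics.read` scope.

```python
from urllib.parse import urlencode

# Placeholder environment; replace {your-environment-id} with your own.
base = "https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/query"

params = {
    # Assumed metric key for "Number of online prediction errors";
    # :splitBy() aggregates across all dimensions.
    "metricSelector": "cloud.gcp.aiplatform_googleapis_com"
                      ".prediction.online.error_count:splitBy()",
    "from": "now-2h",  # relative timeframe for the last two hours
}

url = base + "?" + urlencode(params)
print(url)
```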