Expand visibility into your Snowflake environment to improve health and performance monitoring.
Improve health and performance monitoring of your Snowflake environment with metrics and Davis AI.
Snowflake monitoring is based on a remote monitoring approach implemented as a Dynatrace ActiveGate extension. The extension queries the Account Usage and Information Schema for key performance and health metrics and extends your visibility by allowing Davis AI to provide anomaly detection and root cause analysis.
Use this extension to:
The extension requires access to the INFORMATION_SCHEMA and ACCOUNT_USAGE schemas. The user must have the ACCOUNTADMIN role or a role granted by the ACCOUNTADMIN user.
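If you want to confirm that a role can read the required data before activating the extension, you can run a quick check against one of the ACCOUNT_USAGE views. This is only a sanity-check sketch, not part of the official setup:

```sql
-- Sanity check (not part of the official setup): confirms that a role can
-- read the ACCOUNT_USAGE schema used by the extension. Repeat it with the
-- monitoring role once the privileges described below have been granted.
use role ACCOUNTADMIN;
select count(*) from snowflake.account_usage.query_history;
```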
To activate the remote monitoring extension (version 1.255+):
Go to Manage > Dynatrace Hub.
Select Snowflake and select Add to environment to activate the extension.
Configure a user with the necessary permissions (see the sketch after these steps for one way to set up a dedicated monitoring user). To do so:
In Snowflake, run the following to create a custom role:
use role ACCOUNTADMIN;
grant imported privileges on database snowflake to role SYSADMIN;
use role SYSADMIN;
Add a new monitoring configuration.
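If you prefer a dedicated user for the extension instead of an existing one, the following sketch shows one way to create it as part of the user configuration step. The user name, password placeholder, and default warehouse are assumptions for illustration, not part of the official setup:

```sql
-- Hypothetical dedicated monitoring user; all names below are placeholders.
use role ACCOUNTADMIN;
create user if not exists DYNATRACE_MONITORING
  password = '<strong-password>'
  default_role = SYSADMIN
  default_warehouse = COMPUTE_WH
  must_change_password = false;
-- Let the user assume the role that received the imported privileges above.
grant role SYSADMIN to user DYNATRACE_MONITORING;
```

Use this user's credentials and host name in the monitoring configuration.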
In the monitoring configuration, provide your Snowflake host name (for example, zzyxxx-tc12345.snowflakecomputing.com). The execution of the data retrieval queries requires a warehouse to be active for both feature sets.
The snowflake.account.availability metric is an exception: it's retrieved every minute and doesn't require a warehouse.
The Snowflake extension queries for metrics every minute. In both Classic and DPS Licensing models, license consumption is based on metric datapoints. To estimate the amount of metric datapoints produced by an extension configuration per minute, use the following formula:
default: 9 x Snowflake Warehouse + 7 x Snowflake Account
The formula combines the number of metrics in each feature set with a multiplier based on the entity type that the metrics are split by.
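For example, assuming one monitored account with five warehouses, the default feature set produces about 9 x 5 + 7 x 1 = 52 metric datapoints per minute.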
For DPS license, you can calculate the approximate yearly datapoints consumption by using the following formula:
<metric datapoints per minute> x 60 minutes x 24 hours x 365 days
In the Classic licensing model, metric ingestion consumes Davis Data Units (DDUs) at a rate of 0.001 DDUs per metric datapoint.
Multiply the annual datapoints estimate above by 0.001 to estimate annual DDU usage.
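Continuing the example above, 52 datapoints per minute amounts to 52 x 60 x 24 x 365 = 27,331,200 datapoints per year, or roughly 27,331 DDUs in the Classic licensing model.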
This extension can also ingest logs if the queries feature set is selected. For more information, see Logs powered by Grail (DPS) or DDUs for Log Management and Analytics depending on your license model.
For more information about licensing costs, see Extending Dynatrace (Davis data units) or Metrics powered by Grail overview (DPS) depending on your license model.
The Queries feature set must be enabled to run the following DQL queries.
Example of a DQL query that lists longest running queries:
fetch logs
| filter matchesValue(dt.extension.name, "com.dynatrace.extension.sql-snowflake")
| filter matchesValue(event.group, "longest_queries")
| sort asDouble(execution_status) desc
Example of a DQL query that lists failed queries:
fetch logs
| filter matchesValue(dt.extension.name, "com.dynatrace.extension.sql-snowflake")
| filter matchesValue(event.group, "failed_queries")
| sort timestamp desc
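Building on the examples above, you can also aggregate the ingested query logs, for example to count failed queries per warehouse. The warehouse_name attribute in the sketch below is an assumption about the ingested record layout; adjust it to match the fields present in your environment:

```
fetch logs
| filter matchesValue(dt.extension.name, "com.dynatrace.extension.sql-snowflake")
| filter matchesValue(event.group, "failed_queries")
// warehouse_name is assumed to be an attribute of the ingested log records
| summarize failed = count(), by: { warehouse_name }
| sort failed desc
```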
When activating your extension using a monitoring configuration, you can limit monitoring to one of the feature sets. To work properly, the extension must collect at least one metric after activation.
In highly segmented networks, feature sets can reflect the segments of your environment. Then, when you create a monitoring configuration, you can select a feature set and a corresponding ActiveGate group that can connect to this particular segment.
All metrics that aren't categorized into any feature set belong to the default feature set and are always reported.
A metric inherits the feature set of a subgroup, which in turn inherits the feature set of a group. Also, the feature set defined on the metric level overrides the feature set defined on the subgroup level, which in turn overrides the feature set defined on the group level.
| Metric name | Metric key | Description |
|---|---|---|
| Snowflake availability | snowflake.account.availability | Whether Snowflake responds to queries or not. |
| Warehouse compute credits used | snowflake.account.warehouse.credits.compute | Number of credits used for the warehouse. |
| Warehouse cloud services credits used | snowflake.account.warehouse.credits.cloudServices | Number of credits used for cloud services for the warehouse. |
| Warehouse credits used | snowflake.account.warehouse.credits.total | Number of credits billed for the warehouse. |
| Account table storage bytes | snowflake.account.storage.table | Number of bytes of table storage used, including bytes for data currently in Time Travel. |
| Account stage storage bytes | snowflake.account.storage.stage | Number of bytes of stage storage used by files in all internal stages (named, table, and user). |
| Account fail-safe storage bytes | snowflake.account.storage.failsafe | Number of bytes of data in Fail-safe. |
| Warehouse compilation time | snowflake.account.warehouse.time.compilation | Query compilation time per warehouse. |
| Warehouse execution time | snowflake.account.warehouse.time.execution | Query execution time per warehouse. |
| Warehouse elapsed time | snowflake.account.warehouse.time.elapsed | Query total elapsed time per warehouse. |
| Warehouse queued provisioning time | snowflake.account.warehouse.time.queued.provisioning | Query queued provisioning time per warehouse. |
| Warehouse queued overload time | snowflake.account.warehouse.time.queued.overload | Query queued overload time per warehouse. |
| Warehouse blocked time | snowflake.account.warehouse.time.blocked | Query blocked time per warehouse. |
| Account compute credits | snowflake.account.credits.compute | Number of credits used by warehouses and serverless compute resources. |
| Account cloud services credits | snowflake.account.credits.cloudServices | Number of credits used for cloud services. |
| Account total credits used | snowflake.account.credits.total | Total number of credits used by the account. |