Improve the health and performance monitoring of your Microsoft SQL Servers.

Monitor Microsoft SQL Server health and performance with the Dynatrace extension.
Microsoft SQL Server database monitoring is based on a remote monitoring approach implemented as a Dynatrace ActiveGate extension. The extension queries MS SQL databases for key performance and health metrics, extending your visibility and allowing Davis AI to provide anomaly detection and problem analysis.
Each available feature set is supported on a corresponding set of SQL Server types. For details on the individual permissions that must be granted to the extension user for each feature set, refer to the Involved Views and Tables section and the granular permission details for each system view provided below.
Each feature set below lists the SQL Server types it is supported on and the views and tables involved. Available feature sets include:
Monitoring query performance stats
Monitoring TOP longest queries
Monitoring age of latest backup and individual backups per database
Monitoring backup files size per database
Monitoring individual Azure SQL Database backups
Monitoring database files stats
Monitoring largest database files on Azure SQL Database
Monitoring largest database files on other SQL Server types
Required permissions:
Depending on the enabled feature sets and the system views queried, the extension user needs:
VIEW SERVER STATE permission.
VIEW SERVER PERFORMANCE STATE permission.
VIEW DATABASE STATE permission on the database, or the ##MS_ServerStateReader## server role.
VIEW ANY DEFINITION, or CREATE DATABASE, or ALTER ANY DATABASE permission.
For ONLINE databases: VIEW ANY DATABASE (default permission for the public role).
For OFFLINE databases as well: ALTER ANY DATABASE on server level, or CREATE DATABASE permission in the master database.
The extension must be connected to the master database for all databases to be visible.
Important note: The extension is reported to work with other types of SQL Server, such as AWS RDS or SQL Server on Linux, but they are not officially supported.
Important note: Other types of replication and HA monitoring, including publisher/subscriber model, are not supported yet.
Any version of SQL Server with active extended support by Microsoft is supported by this extension. Please refer to the official Microsoft documentation about lifecycle dates for SQL Server.
Each ingested data point consumes 0.001 DDUs from your available quota. Each enabled feature set increases DDU consumption. The "default" feature set cannot be turned off.
DDU consumption for each metric (per hour) is calculated as follows:
number of unique associated entities * retrieval frequency per hour * 0.001 DDUs per data point
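This per-metric formula can be wrapped in a small helper. Below is a minimal sketch (the function names are illustrative, not part of the extension); the numbers mirror the worked example that follows:

```python
def ddus_per_hour(entities: int, retrievals_per_hour: int, ddu_per_point: float = 0.001) -> float:
    """DDUs consumed per hour for one metric:
    unique associated entities * retrieval frequency per hour * DDUs per data point."""
    return entities * retrievals_per_hour * ddu_per_point

def ddus_per_year(per_hour: float) -> float:
    """Extrapolate an hourly DDU rate to a full year (24 h * 365 d)."""
    return per_hour * 24 * 365

# 2 instances with 20 databases each = 40 unique databases,
# metric retrieved every minute = 60 retrievals per hour.
hourly = ddus_per_hour(entities=2 * 20, retrievals_per_hour=60)
print(hourly)                 # 2.4 DDUs per hour
print(ddus_per_year(hourly))  # 21024.0 DDUs per year
```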
Example:
Metric: sql-server.databases.backup.size
2 instances with 20 databases in each.
2 (SQL Server Instances) * 20 (SQL Server Databases in each) = 40 unique databases in total.
The metric is retrieved every minute, i.e. 60 times per hour.
40 * 60 * 0.001 = 2.4 DDUs per hour.
2.4 * 24 * 365 = 21,024 DDUs per year.
v2.7.0:
v2.0.0:
The instance dimension now only contains the name of the actual named instance, or MSSQLSERVER by default.
The hoursSinceBackup metric is removed and replaced by sql-server.databases.backup.age.
v1.2.0:
Collection of top queries ordered by total duration can be enabled using the Queries feature set.
Top queries are fetched by the extension every 5 minutes.
The DQL query below, when executed in Logs and Events, displays the top queries observed within the most recent 60 min timeframe:
fetch logs, from:now()-60m
| filter matchesValue(dt.extension.name, "com.dynatrace.extension.sql-server")
| filter matchesValue(event.group, "longest_queries")
| fields total_duration, avg_duration, content, server, instance, num_executions, query_plan
| sort asDouble(total_duration) desc
Description of fields:
total_duration represents the sum of the durations of all executions of this query over the given 5 min timeframe, in seconds.
avg_duration represents the average execution time of this query over the given 5 min timeframe, in seconds.
content contains the SQL text of the query.
On Managed tenants, log records can be retrieved by filtering logs using 2 attributes:
dt.extension.name: com.dynatrace.extension.sql-server
event.group: longest_queries

Collection of largest database files by size can be enabled using the Database files feature set.
Top database files by size are fetched by the extension every 5 minutes.
The DQL query below, when executed in Logs and Events, displays the largest database files by size, as observed within the most recent 5 min timeframe:
fetch logs, from:now()-5m
| filter matchesValue(dt.extension.name, "com.dynatrace.extension.sql-server")
| filter matchesValue(event.group, "largest_files")
| fields content, file_size, file_type_desc, file_state_desc, database, server, instance, file_used_space, file_empty_space
| sort asDouble(file_size) desc
Description of fields:
content represents the physical name of the file as handled by the host OS.
file_size is reported in KB.
file_used_space is reported in KB and represents the amount of space occupied by allocated pages within a specific file.
file_empty_space is reported in KB and represents the amount of space that is still empty within a specific file.
On Managed tenants, log records can be retrieved by filtering logs using 2 attributes:
dt.extension.name: com.dynatrace.extension.sql-server
event.group: largest_files

Monitoring of current jobs can be enabled using the Jobs feature set.
Current jobs are fetched by the extension every 5 minutes.
The DQL query below, when executed in Logs and Events, displays current jobs, as observed within the most recent 5 min timeframe:
fetch logs, from:now()-5m
| filter matchesValue(dt.extension.name, "com.dynatrace.extension.sql-server")
| filter matchesValue(event.group, "current_jobs")
| fields job_name, job_status, content, enabled, last_run_outcome, duration, instance, server, start_execution_date, stop_execution_date, job_category, category_name
| sort asDouble(duration) desc
Description of fields:
content represents the last execution outcome message.
job_status and last_run_outcome are identical, except for two situations: when job_status equals Idle, and when job_status equals In Progress.
duration represents the complete job duration in seconds after execution is finished.
category_id represents the category id of the job.
category_name represents the name assigned to the category id.
On Managed tenants, log records can be retrieved by filtering logs using 2 attributes:
dt.extension.name: com.dynatrace.extension.sql-server
event.group: current_jobs

Monitoring of failed jobs can be enabled using the Jobs feature set.
Failed jobs are fetched by the extension every 5 minutes.
The DQL query below, when executed in Logs and Events, displays failed jobs, as observed within the most recent 5 min timeframe:
fetch logs, from:now()-5m
| filter matchesValue(dt.extension.name, "com.dynatrace.extension.sql-server")
| filter matchesValue(event.group, "failed_jobs")
| fields job_name, step_name, outcome, content, duration, instance, server, sql_severity, retries_attempted, start_execution_date, stop_execution_date
| sort stop_execution_date desc
Description of fields:
content represents the message of the last executed step and usually contains the error.
outcome represents the final job status message as composed by SQL Server Agent.
duration represents the complete job duration in seconds after execution is finished.
On Managed tenants, log records can be retrieved by filtering logs using 2 attributes:
dt.extension.name: com.dynatrace.extension.sql-server
event.group: failed_jobs

The extension only runs SELECT queries to obtain monitoring data. The database is never modified or locked. Queries target sys.* system views and the msdb database (when applicable). User databases and objects are never affected.
Two configuration options, query interval and heavy query interval, have been added. If you want the 1-minute queries to run at intervals greater than 1 minute, e.g. every 10 minutes, enter the number 10 in the query interval box. The heavy query interval input box functions the same way, except it changes the frequency of the queries that run every 5 minutes.
Monitoring of locks and waits can be enabled using the Locks and waits feature set. If you are on SaaS, a new dashboard presents this data in an organized single pane of glass. If you are on Managed, the logs are ingested and you can view requests with locks or waits.
The two metrics below
sql-server.databases.file.usedSpace
sql-server.databases.file.emptySpace
are only reported for the database the extension is currently connected to. This is due to sys.allocation_units only containing information about used pages of the database that is currently used inside the connection.
Some information is only available when connected to the master database (a limitation of SQL Server itself).
Azure backups are monitored by querying the sys.db_database_backups view, which is currently available for all Azure SQL Database service tiers except Hyperscale.
To obtain information about every replica in a given availability group, connect the extension to the server instance that is hosting the primary replica. When connected to a server instance that is hosting a secondary replica of an availability group, the extension returns only local information for the availability group.
When connected to a secondary replica, the extension retrieves states of every secondary database on the server instance. On the primary replica, the extension returns data for each primary database and for the corresponding secondary database.
Depending on the action and higher-level states, database-state information may be unavailable or out of date. Furthermore, the values have only local relevance. See limitations of sys.dm_hadr_database_replica_states.
When a database is added to an availability group, the primary database is automatically joined to the group. Secondary databases must be manually prepared on each secondary replica before they can be joined to the availability group.
If the local server instance cannot communicate with the WSFC failover cluster, for example, because the cluster is down or quorum has been lost, only rows for local availability replicas are returned. These rows will contain only the columns of data that are cached locally in metadata.
Estimated data points per hour for each metric:
sql-server.memory.target: number of SQL Server Instances in environment * 60
sql-server.memory.physical: number of SQL Server Instances in environment * 60
sql-server.databases.state: number of SQL Server Databases in environment * 60
sql-server.uptime: number of SQL Server Instances in environment * 12
sql-server.databases.transactions.count: number of SQL Server Databases in environment * 60
sql-server.memory.total: number of SQL Server Instances in environment * 60
sql-server.cpu.kernelTime.count: number of SQL Server Instances in environment * 60
sql-server.general.userConnections: number of SQL Server Instances in environment * 60
sql-server.general.processesBlocked: number of SQL Server Instances in environment * 60
sql-server.general.logins.count: number of SQL Server Instances in environment * 60
sql-server.cpu.userTime.count: number of SQL Server Instances in environment * 60
sql-server.memory.virtual: number of SQL Server Instances in environment * 60
sql-server.host.cpus: number of SQL Server Hosts in environment * 12
sql-server.always-on.ag.secondaryRecoveryHealth: number of SQL Server Availability Groups in environment * 60
sql-server.always-on.ag.primaryRecoveryHealth: number of SQL Server Availability Groups in environment * 60
sql-server.always-on.ar.failoverMode: number of SQL Server Availability Replicas in environment * 60
sql-server.always-on.ag.synchronizationHealth: number of SQL Server Availability Groups in environment * 60
sql-server.always-on.ar.operationalState: number of SQL Server Availability Replicas in environment * 60
sql-server.always-on.ar.connectedState: number of SQL Server Availability Replicas in environment * 60
sql-server.always-on.db.filestreamSendRate: number of SQL Server Availability Databases in environment * 60
sql-server.always-on.db.state: number of SQL Server Availability Databases in environment * 60
sql-server.always-on.db.synchronizationHealth: number of SQL Server Availability Databases in environment * 60
sql-server.always-on.db.logSendQueueSize: number of SQL Server Availability Databases in environment * 60
sql-server.always-on.ar.role: number of SQL Server Availability Replicas in environment * 60
sql-server.always-on.db.synchronizationState: number of SQL Server Availability Databases in environment * 60
sql-server.always-on.db.redoRate: number of SQL Server Availability Databases in environment * 60
sql-server.always-on.db.redoQueueSize: number of SQL Server Availability Databases in environment * 60
sql-server.always-on.ar.synchronizationHealth: number of SQL Server Availability Replicas in environment * 60
sql-server.always-on.db.logSendRate: number of SQL Server Availability Databases in environment * 60
sql-server.always-on.ar.availabilityMode: number of SQL Server Availability Replicas in environment * 60
sql-server.always-on.ag.automatedBackupPreference: number of SQL Server Availability Groups in environment * 60
sql-server.always-on.ar.isLocal: number of SQL Server Availability Replicas in environment * 60
sql-server.always-on.ar.recoveryHealth: number of SQL Server Availability Replicas in environment * 60
sql-server.databases.backup.age: number of SQL Server Databases in environment * 60
sql-server.databases.backup.size: number of SQL Server Databases in environment * 60
sql-server.databases.file.emptySpace: number of SQL Server Databases in environment * 60
sql-server.databases.file.size: number of SQL Server Databases in environment * 60
sql-server.databases.file.usedSpace: number of SQL Server Databases in environment * 60
largest_files: up to 100 (number of files) * 12 * avg log size
sql-server.latches.waits.count: number of SQL Server Instances in environment * 60
sql-server.latches.averageWaitTime.count: number of SQL Server Instances in environment * 60
sql-server.locks.timeouts.count: number of SQL Server Instances in environment * 60
sql-server.locks.waits.count: number of SQL Server Instances in environment * 60
sql-server.locks.waitTime.count: number of SQL Server Instances in environment * 60
sql-server.locks.deadlocks.count: number of SQL Server Instances in environment * 60
sql-server.buffers.checkpointPages.count: number of SQL Server Instances in environment * 60
sql-server.memory.grantsOutstanding: number of SQL Server Instances in environment * 60
sql-server.memory.connection: number of SQL Server Instances in environment * 60
sql-server.buffers.pageWrites.count: number of SQL Server Instances in environment * 60
sql-server.buffers.pageLifeExpectancy: number of SQL Server Instances in environment * 60
sql-server.memory.grantsPending: number of SQL Server Instances in environment * 60
sql-server.buffers.cacheHitRatio: number of SQL Server Instances in environment * 60
sql-server.buffers.freeListStalls.count: number of SQL Server Instances in environment * 60
sql-server.buffers.pageReads.count: number of SQL Server Instances in environment * 60
sql-server.sql.recompilations.count: number of SQL Server Instances in environment * 60
sql-server.sql.compilations.count: number of SQL Server Instances in environment * 60
sql-server.sql.batchRequests.count: number of SQL Server Instances in environment * 60
longest_queries: up to 100 (number of queries) * 12 * avg log size
instance_locks_wait_time_type: number of SQL Server Instances in environment * 60
sql-server.replica.bytesSentToTransport.count: number of SQL Server Instances in environment * 60
sql-server.replica.sends.count: number of SQL Server Instances in environment * 60
sql-server.replica.sendsToTransport.count: number of SQL Server Instances in environment * 60
sql-server.replica.bytesReceived.count: number of SQL Server Instances in environment * 60
sql-server.replica.bytesSent.count: number of SQL Server Instances in environment * 60
sql-server.replica.resentMessages.count: number of SQL Server Instances in environment * 60
sql-server.replica.receives.count: number of SQL Server Instances in environment * 60
sql-server.sessions: number of SQL Server Instances in environment * 60
sql-server.databases.log.flushWaits.count: number of SQL Server Databases in environment * 60
sql-server.databases.log.filesUsedSize: number of SQL Server Databases in environment * 60
sql-server.databases.log.growths.count: number of SQL Server Databases in environment * 60
sql-server.databases.log.truncations.count: number of SQL Server Databases in environment * 60
sql-server.databases.log.shrinks.count: number of SQL Server Databases in environment * 60
sql-server.databases.log.filesSize: number of SQL Server Databases in environment * 60
sql-server.databases.log.percentUsed: number of SQL Server Databases in environment * 60
current_jobs: number of currently enabled jobs * 12 * avg log size
failed_jobs: top 100 failed jobs * 12 * avg log size
all_requests: avg number of active requests * 60 * avg log size
Note on current_jobs, failed_jobs, longest_queries, all_requests, and largest_files: these entries are based on log data. As every environment is different, the ingested data size needs to be estimated on the client side. Currently, 100 DDUs are consumed per GB of ingested log data. Please refer to the DDU consumption model for Log Management and Analytics in the documentation. If you are on Log Monitoring Classic, each log record (line, message, entry) deducts 0.005 DDUs from your available quota. Please refer to the DDUs for Log Monitoring Classic in the documentation.
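For the log-based entries, the client-side estimate described in the note above can be sketched as follows. The 100 DDUs per GB and 0.005 DDUs per record rates come from the consumption models mentioned in the text; the record counts and average record size below are hypothetical examples:

```python
def log_ddus_grail(records_per_hour: int, avg_record_bytes: int, ddus_per_gb: float = 100.0) -> float:
    """Log Management and Analytics: DDUs per hour, based on ingested volume."""
    gb_per_hour = records_per_hour * avg_record_bytes / (1024 ** 3)
    return gb_per_hour * ddus_per_gb

def log_ddus_classic(records_per_hour: int, ddu_per_record: float = 0.005) -> float:
    """Log Monitoring Classic: flat DDU cost per log record."""
    return records_per_hour * ddu_per_record

# Hypothetical example: longest_queries emits up to 100 records every 5 minutes
# (12 times per hour), each averaging 2 KB.
records = 100 * 12
print(round(log_ddus_grail(records, 2048), 4))  # 0.2289 DDUs per hour (volume-based)
print(log_ddus_classic(records))                # 6.0 DDUs per hour (per-record)
```

The two models can differ by orders of magnitude for small records, which is why the note recommends estimating with your own environment's record counts and sizes.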