This page showcases new features, changes, and bug fixes in Dynatrace SaaS version 1.325. It contains:
Application Observability | Log Analytics
The Log query usage and costs ready-made dashboard now makes it easier for users to understand and explore how logs are used across Dynatrace.
Application Security
Two new features are available in Security Posture Management:
- An Assessment view filter is now available on the Assessment results page.
- The Assessment results table now contains the full name of the compliance standard, allowing you to filter by it and focus on a specific standard.

Application Security | Vulnerabilities
Dynatrace is introducing support for two types of security events related to third-party libraries: vulnerability finding events and vulnerability scan events. These events are stored in the security.events table:
- VULNERABILITY_FINDING: Represents a single vulnerability identified in a specific process at a given time. For details, see Semantic Dictionary.
- VULNERABILITY_SCAN: Represents the analysis of detected packages within a specific process at a given time. For details, see Semantic Dictionary.

Prerequisite: Enable third-party vulnerability detection.
Support for these event types is being rolled out gradually across SaaS environments. Not all tenants will have access immediately following the release.
Example DQL query to retrieve vulnerability finding and scan events:

```dql
fetch security.events
| filter in(event.type, array("VULNERABILITY_SCAN", "VULNERABILITY_FINDING"))
| filter object.mapping.resource.type == "process"
```
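Building on the query above, a hedged sketch: grouping the same events by type to compare finding and scan volumes. The field names are taken from the query above; the summarize syntax follows standard DQL.

```dql
fetch security.events
| filter in(event.type, array("VULNERABILITY_SCAN", "VULNERABILITY_FINDING"))
| summarize count(), by: {event.type}
```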
Infrastructure Observability | Discovery & Coverage
In Discovery & Coverage, on the Network coverage page, the Configure scanning button has been replaced by a Configure menu with five options to provide complete network coverage.
Platform
OpenPipeline now supports ingesting up to 1 petabyte (PB) of data per day across all signal types, including spans, metrics, logs, and events.
This enhancement ensures that even the most data-intensive environments can rely on OpenPipeline for scalable, high-throughput ingestion and processing. It’s especially beneficial for customers operating large-scale observability pipelines.
Enterprise-grade ingestion and processing flexibility
The OpenPipeline architecture is built for enterprise-grade scale. It rapidly routes records to the correct pipelines and determines applicable processors, adapting dynamically to your workload demands. This ensures consistent performance even under massive data loads, helping teams avoid bottlenecks and latency regressions.
Self-service configuration with admin control
OpenPipeline empowers teams with self-service configuration to manage their own use cases, including custom alert settings and metric extraction. At the same time, fine-grained admin controls allow central teams to delegate rights and assign buckets, maintaining governance and compliance.
This dual model of flexibility and control allows scalable collaboration across teams, while preserving operational integrity.
Platform | Dashboards and Notebooks
In Dashboards and Notebooks, if a query result contains more than 50 table columns, only the first 50 columns are shown by default, and a message is displayed to let you know. To change this setting, select the Modify visibility link in the message.
Platform | Dashboards and Notebooks
The Add menu in Dashboards and Notebooks is now restructured so that the core types always appear at the top, followed by the various ways to explore your data and a snippet library.

Platform | Dashboards and Notebooks
The count aggregation function can now be used in Dashboards and Notebooks when adding a Reduce to single value command in an explore metric tile.

Platform | Davis
Davis events in Grail for service baselining now consistently include the dt.host_group.id attribute for improved event context and filtering capabilities.
Platform | Davis
You can add a WARNING eventType when ingesting events. It behaves similarly to the CUSTOM_INFO type.
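As an illustration, assuming ingested events are queryable in Grail's events table and the ingested eventType is surfaced in the event.type field (a hedged sketch, not a documented guarantee), a query like the following could retrieve recently ingested WARNING events:

```dql
fetch events
| filter event.type == "WARNING"
| sort timestamp desc
```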
Platform | Davis
Davis event records in Grail now contain Synthetic in the dt.davis.impact_level field if the dt.source_entity of the event is a synthetic monitoring entity.
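For example, a hedged DQL sketch filtering Davis event records by the new impact level. It assumes Davis events are stored in the events table with event.kind set to DAVIS_EVENT; the dt.davis.impact_level field is as described above.

```dql
fetch events
| filter event.kind == "DAVIS_EVENT"
| filter dt.davis.impact_level == "Synthetic"
```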
Platform | Grail
You can now use the MATCH operator to simplify permission statements that require wildcards. With this new operator, you can write statements like:
ALLOW storage:logs:read WHERE storage:dt.security_context MATCH ("crn-70100-*", "*-tech-*");
For details, see Permissions in Grail.
Software Delivery
OpenPipeline now supports the extraction of SDLC events from logs.
You can now automatically extract SDLC events from pipeline logs in tools like Jenkins, GitHub, and ArgoCD. This allows out-of-the-box pipeline observability by capturing metadata such as application name, pipeline run details, and version, supporting CI/CD analytics, and root cause analysis enriched with lifecycle metadata.
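Once extracted, such events could be explored with DQL. This sketch assumes SDLC events land in the events table with event.kind set to SDLC_EVENT and that provider and type fields follow the usual event.* naming; verify the exact field names in your environment.

```dql
fetch events
| filter event.kind == "SDLC_EVENT"
| fields timestamp, event.provider, event.type
```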
Software Delivery | Ecosystem
AWS Connector for Workflows now supports access-control lists (ACLs) on AWS connections. This allows you to configure and share AWS connections with fine granularity so that only specific users, groups, or service users can access them.
A second schema has been introduced to enhance security for AWS connections, and the existing one will be deprecated within the next six months.
For details, see AWS Connector.
To learn about changes to the Dynatrace API in this release, see Dynatrace API changelog version 1.325.