This page describes the new features, changes, and bug fixes in Dynatrace SaaS version 1.319.
Application Observability
Showing relevant data in context is one of the core strengths of Dynatrace. To deliver more insights and an improved experience, we now provide telemetry access in the Distributed Tracing, Kubernetes, Clouds, Infrastructure & Operations, and Services apps to every user free of charge.
You can view related logs attached to a distributed trace while investigating performance issues or doing other analysis in Distributed Tracing without incurring query charges. For advanced log analysis, use the Logs app.
Similarly, you now see related logs and important events in the context of a Service, Kubernetes object, Cloud service, Host, and Process at no cost in the Services, Kubernetes, Clouds, and Infrastructure & Operations apps. This allows you to examine issues by reviewing recent log lines. For more advanced analytics needs, use the Logs app.
Account Management | Subscriptions and Licensing
This release introduces a significantly improved process for purchasing Dynatrace Platform Subscriptions through private offers on the AWS Marketplace.
The new interface and workflow allow you to link your Dynatrace Platform Subscription with an existing Dynatrace account or generate a new Dynatrace account.
For details, see AWS Marketplace private offer.
Software Delivery
The new Upgrade Classic SLOs documentation describes the advantages of upgrading your SLO experience by switching to the Service-Level Objectives app, and shows you how to create new SLOs based on existing classic SLOs.
Infrastructure Observability | Kubernetes
Dashboards and metric events based on classic Kubernetes metrics can now be upgraded to Grail metrics: both upgrade processes migrate Kubernetes classic metrics to their Grail counterparts.
Note: Some metrics can't be migrated automatically. For those metrics, a message in the dashboard links to the Kubernetes metrics migration guide, which provides clear guidance on migrating your metrics.
Application Security
With this release, you can now add timeseries context to your log data in Security Investigator.
For example, suppose you have fetched error logs and would like to see the CPU consumption at the time of an error.
Right-click the event, select View performance metrics, select the dimension for which you would like to see the metrics (for example, Kubernetes container), and select the metric type (for example, CPU usage).
A metrics chart is created visualizing the container CPU usage around the time of the error.
Platform | DQL
The ~ string matching operator introduced with the search command is now supported across all DQL expressions, including the filter command.
The ~ operator performs a case-insensitive search for a string token. It is a simple and powerful addition to existing string-matching functions like matchesPhrase() and contains(), and it is particularly valuable for filtering.
For example, the ~ operator can now be used in the filter command:
fetch logs
| parse content, "IPV4:ip LD HTTPDATE:time ']' LD:text"
| filter text ~ "setup.php"
To learn more, see DQL operators.
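As a sketch of how the operator composes with other DQL commands (the "timeout" token and the grouping field below are common log attributes chosen for illustration, not taken from the release notes):

```
// the ~ operator performs a case-insensitive token search
// and can now be used anywhere a DQL expression is allowed
fetch logs
| filter content ~ "timeout"
| summarize count(), by:{dt.entity.host}
```

Unlike contains(), which matches an exact substring, ~ matches a string token regardless of case; see the DQL operators documentation for the precise matching semantics.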
Infrastructure Observability | Extensions
You can now select a Programmatic Access Token (PAT) instead of using a password for the Snowflake database extension.
Infrastructure Observability | Extensions
Credential vault integration with the SQL Server extension enables you to use NTLM and Kerberos credentials stored in the credential vault.
Platform | Settings
Starting with Dynatrace version 1.319, the OTLP metrics ingest attribute deny list will contain a handful of high-cardinality attributes by default.
Infrastructure Observability | Hosts
The host's additional system information is now gathered only once during OS Agent runtime.
Platform | Notebooks and Dashboards
The table visualization is now the default for all data types you query in Dashboards or Notebooks, such as logs, events, and entities. The only exception is time-series data, for which a line chart is displayed automatically. As before, you can manually select any of the wide range of available visualizations.
Platform | Notebooks and Dashboards
The table visualization now offers a Download result > CSV (raw) option next to Download result > CSV.
Platform | Notebooks and Dashboards
In Dashboards and Notebooks, the default truncation for chart legends is now "end" instead of "middle". You can adjust this via the Legend and tooltip settings in your dashboards and notebooks.
Application Security | Vulnerabilities
For COMPLIANCE_FINDING events, the cluster- prefix has been removed from the value of the k8s.cluster.uid field. This change improves consistency and simplifies cluster identification across different data sources.
This update is visible only when both of the following conditions are met:
Platform | Settings
You can now use a new condition in IAM statements to manage access to entity settings based on their security context attribute. For an example, see Permissions and access in the settings documentation.
The OpenPipeline ingest endpoints for events-related data types now respond with content-type: text/plain to improve usage from Notebooks and Workflows. These endpoints respond with HTTP 202 status and an empty body when data is accepted by OpenPipeline. The response formerly set the content-type header to application/octet-stream and the vary header to Origin, which caused issues in the HTTP client used by Dynatrace Workflows and Notebooks. With this release, we have slightly changed the behavior of the API so that those endpoints can be easily called from Workflows and Notebooks: the content-type is now set to text/plain, and there is no vary: Origin header at all. This change should not impact advanced HTTP clients, as the content-length of those responses was and still is always 0. (PPX-5678)
To learn about changes to the Dynatrace API in this release, see Dynatrace API changelog version 1.319.
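As a minimal sketch of what the new response contract means for a client, the helper below checks an ingest response against the documented behavior. The function and its header handling are illustrative assumptions, not part of the Dynatrace API.

```python
def is_accepted(status_code: int, headers: dict) -> bool:
    """Check an OpenPipeline events-ingest response against the 1.319 contract.

    Accepted payloads yield HTTP 202 with an empty text/plain body,
    no 'vary: Origin' header, and a content-length of 0.
    """
    lower = {k.lower(): v for k, v in headers.items()}
    return (
        status_code == 202
        and lower.get("content-type", "").startswith("text/plain")
        and lower.get("vary") != "Origin"
        and lower.get("content-length", "0") == "0"
    )

# Before 1.319, the same endpoints answered with application/octet-stream
# and vary: Origin, which tripped up the Workflows/Notebooks HTTP client.
print(is_accepted(202, {"Content-Type": "text/plain", "Content-Length": "0"}))  # True
```

A client written this way keeps working against advanced HTTP stacks as well, since the body was and remains empty.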