Things to know before upgrading to the latest Dynatrace

  • Latest Dynatrace
  • Explanation
  • 60-min read
  • Published Aug 12, 2025

The latest Dynatrace has been designed from the ground up to meet the scalability and manageability requirements of enterprises in modern cloud and AI-native environments.

  • Data scale: Businesses create petabytes of telemetry data each day. Modern observability solutions need to not only ingest and process this amount of data, but also query it for exploratory use cases, metric calculation, and custom alert evaluation, all in real time while joining multiple data types.

  • Organizational scale: It's no longer only ITOps engineers or SREs looking into Dynatrace. The more observability shifts left, the more teams have access to telemetry and use it to solve their day-to-day tasks, whether it's a business department correlating business KPIs with observability signals or developers fixing code issues in production. The latest Dynatrace supports rolling out to thousands of users and managing hundreds of teams while remaining maintainable.

As a result, we have introduced several new concepts and capabilities that are vital to understand before you consider upgrading from the previous Dynatrace.

Concepts at a glance

While management zones were widely used in classic Dynatrace to define access to data as well as to filter data on a global level, they have been replaced by three new concepts, each tailored to the respective requirements of enterprise environments:

  • Data partitioning: Organize data logically and address performance and retention requirements.
  • Data access control: Stay flexible and meet compliance and security demands by defining fine-grained access to data and Dynatrace platform capabilities based on a user's context.
  • Data segmentation: Filter huge data sets in real time without the need to define thousands of individual rules.

Data partitioning

Dynatrace uses buckets to partition and logically separate the data stored in Grail. Buckets are a foundational concept to address performance and compliance requirements.

A bucket is comparable to a folder in a file system. Use buckets to group telemetry data that logically belongs together because, for example, it comes from the same region or environment, or it shares the same sensitivity classification.

With data partitioning, you can:

  • Optimize query performance by narrowing data scopes.
  • Control retention by assigning different lifetimes to different buckets.
  • Apply different licensing models (for example, activate Retain with Included Queries for log buckets).
  • Separate data to meet compliance and audit requirements.
  • Align access control through partition-based visibility.

To learn about setting up buckets, see Bucket assignment.
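As a sketch of what a bucket definition can look like, the following payload illustrates the kind of configuration sent to the Grail storage management API. The bucket name, display name, and retention value here are illustrative assumptions, not prescribed values:

```json
{
  "bucketName": "logs_eu_prod",
  "displayName": "Production logs (EU region)",
  "table": "logs",
  "retentionDays": 90
}
```

Grouping, for example, EU production logs into their own bucket lets queries scan only that partition, and lets its retention period be tuned independently of other log data.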

Data access control

The newly introduced Identity and Access Management (IAM) allows you to manage fine-grained permissions for data and Dynatrace features, enforcing enterprise-grade governance across all your data to ensure you meet compliance, audit, and data protection requirements.

Dynatrace uses Attribute-Based Access Control (ABAC) to secure access to your data based on policies. Policies can be defined on attributes such as user role, project, region, data type, and more, and are applied at query time, ensuring access is always current, dynamic, and context-aware.
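For illustration, an ABAC policy statement in the Dynatrace IAM policy language might look like the following sketch; the security-context value `payments-team` is a hypothetical attribute value chosen for this example:

```
ALLOW storage:logs:read
  WHERE storage:dt.security_context = "payments-team";
```

Because the policy is evaluated at query time, users bound to it only ever see log records that carry this security context, even when they run broad queries.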

We provide mechanisms to add security-relevant attributes to monitored entities, or to attach them on ingest via OpenPipeline, ensuring that all telemetry data is appropriately tagged and access-controlled from the moment it enters Dynatrace.
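As a sketch, an OpenPipeline processing rule can attach such an attribute with a DQL `fieldsAdd` step at ingest; the attribute value below is an assumed example:

```
fieldsAdd dt.security_context = "payments-team"
```

Every record that passes through this pipeline stage then carries the attribute that access policies can evaluate at query time.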

With IAM, you can:

  • Ensure least-privilege access across teams, tools, and use cases.
  • Enforce compliance with enterprise security and audit requirements in dynamic, cloud native environments.
  • Centralize governance, while enabling users to filter data themselves.
  • Create fine-grained rules for large enterprise environments without the need to manage an exploding number of policies.

To learn about IAM, see Identity and access management (IAM) and Access control use cases.

Data segmentation

Segments are dynamic, multidimensional filters that make it easy to apply a user's context (such as the organization they belong to or the team they work on) or logical, business-aligned groupings (such as hyperscaler regions, specific applications, or environments) to data stored in Dynatrace.

Segments can be created and managed centrally by admins or individually by users and teams. Once set, a segment applies the current working context, combining multiple dimensions and applying filters consistently across every data type. Segments are currently supported by more than 15 apps, with more to come later this year.

Because segments are applied at runtime, they remain fully governed by existing IAM policies, so users can only view the data they're authorized to access.

With segments, you can:

  • Get contextualized, role-specific data views across Dynatrace apps.
  • Separate data filtering from defining data access, ensuring flexibility and scalability.
  • Use variables (for example, ENV, TEAM) to apply dynamic filtering that adapts to changing infrastructure.
  • Empower individual teams to create tailored data filters to suit their needs.
  • Utilize centralized and individual segment sharing for improved collaboration and onboarding.
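To illustrate, a segment's include condition is essentially a filter expression that can reference variables. A sketch for scoping Kubernetes data to an environment might look like this, where the field name and the `$Env` variable are assumptions for illustration:

```
matchesValue(k8s.namespace.name, $Env)
```

When a user picks a value for `$Env` (for example, `prod` or `staging`), the same segment adapts its filtering across all apps and data types that support it, with no per-environment rule needed.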

Frequently Asked Questions

How can I define segments to restrict access to certain data?

Segments themselves don't restrict access; they filter and contextualize the data that users are authorized to see. They provide dynamic, role-based views within existing access boundaries, letting teams explore the data relevant to them while staying fully governed by existing access policies.

Data access restrictions are enforced through Attribute-Based Access Control (ABAC), which uses policies to define access to data and Dynatrace functionality.

What's the difference between classic Dynatrace management zones and the new data access/filtering concepts?
  • In the previous Dynatrace, management zones combined access control and data filtering into a single mechanism.

  • The latest Dynatrace separates these concerns into three more scalable concepts:

    • Data partitioning uses buckets for physical and logical data organization, retention, and compliance.

    • Data access enforces fine-grained, attribute-based access controls.

    • Data segmentation uses segments to enable dynamic, multidimensional filtering, creating context-aware views for users without altering the underlying access permissions.

When do I need to migrate my existing management zones?

There is currently no end-of-life date for management zones. However, to benefit from the advantages introduced with the latest Dynatrace, you'll need to adopt the new concepts and define data partitioning, access, and segmentation before you can ingest and query data or use Dynatrace apps.

How should I start structuring my data in the latest Dynatrace SaaS?
  1. Plan your buckets for data partitioning, grouping data based on logical and compliance needs (for example, by region, environment, or sensitivity) to optimize for retention, cost, and performance requirements.
  2. Implement access controls to enforce who can see what, based on roles or attributes.
  3. Use segments to create business-aligned, dynamic views, empowering teams to explore relevant data in a self-service, governed way.
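Once buckets, policies, and segments are in place, the pieces come together at query time. The following DQL sketch reads only one bucket and counts errors per host; the bucket name and field values are illustrative assumptions, and any records the user isn't entitled to see are additionally filtered out by ABAC policies when the query runs:

```
fetch logs, from: now() - 24h
| filter dt.system.bucket == "logs_eu_prod"
| summarize errors = countIf(loglevel == "ERROR"), by: { dt.entity.host }
```

Scoping the query to a single bucket keeps the scan small, while access control and segmentation are applied on top without any extra query logic.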

We will share more practical guidance and examples over the coming weeks and months, ensuring that you have everything you need to plan and implement your upgrade to the latest Dynatrace.