Data flow

Metrics Preview

Metrics in OpenPipeline are currently in Preview and only accessible to selected customers. If you would like to share feedback or ideas, please join our dedicated Community user group, or reach out to your Customer Success Manager.

With OpenPipeline, you can ingest data into the Dynatrace platform from a wide variety of formats and providers through ingest sources. Data is then routed to pipelines for processing and stored in Grail buckets.

How does OpenPipeline work?

Key terms

Pipeline

Collection of processing instructions to structure, separate, and store data.

Data type

Data types, such as logs and events, provide observability insights into the health, performance, and behavior of your system, enabling teams to detect, diagnose, and resolve problems. Each data type offers a different perspective because of its unique characteristics. OpenPipeline provides a unified solution to configure ingestion and processing while ensuring flexibility in configuration options depending on the data type.

The following table lists the data types supported by OpenPipeline and the Dynatrace version from which each data type is supported.

Data type | Dynatrace version
Logs | 1.295
Metrics (API only) | Preview
Topology | Future
Spans | 1.304 (API only)
Events—Generic events | 1.295
Events—SDLC events | 1.299 (API only)
Events—Security events | 1.296
Business events | 1.295
System events (1) | 1.302

(1) System events supported by OpenPipeline are limited to App Lifecycle Notifications (event.kind == "AUDIT_EVENT" AND event.provider == "APP_REGISTRY") and Workflow Execution events (event.kind == "WORKFLOW_EVENT" AND event.provider == "AUTOMATION_ENGINE").
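
For example, you can list exactly these system events in a notebook by reusing the conditions above in a DQL query (a minimal sketch; it returns no records if your environment hasn't produced such events yet):

  fetch dt.system.events
  | filter (event.kind == "AUDIT_EVENT" and event.provider == "APP_REGISTRY")
      or (event.kind == "WORKFLOW_EVENT" and event.provider == "AUTOMATION_ENGINE")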

Ingest sources

Data reaches the Dynatrace platform via different ingest sources, such as API endpoints, OneAgent, and extensions, which collect data from data providers. In OpenPipeline, an ingest source is defined by a name and a path. You can use:

  • Built-in ingest sources
  • Custom ingest sources for events, which support pre-processing and static routing.

Once records reach your Dynatrace SaaS environment via an ingest source, you can route them to a pipeline.

The following table lists the ingest sources for each data type supported by OpenPipeline.

Data type | Ingest source | Path
Logs | OneAgent | - or oneagent
Logs | Extensions | - or extension
Logs | Log ingest API | /api/v2/logs/ingest
Logs | OTLP ingest API | /api/v2/otlp/v1/logs
Metrics | OneAgent | - or oneagent
Metrics | OpenTelemetry metrics ingest API | /api/v2/otlp/v1/metrics
Metrics | Metrics API - POST ingest data points | /api/v2/metrics/ingest
Spans | OneAgent | - or oneagent
Spans | OpenTelemetry | /api/v2/otlp/v1/traces
Events—Generic events | OpenPipeline Ingest API - POST Built-in generic events | /platform/ingest/v1/events
Events—Generic events | OpenPipeline Ingest API - POST Custom generic event endpoint | /platform/ingest/custom/events/<custom-endpoint-name>
Events—SDLC events | OpenPipeline Ingest API - POST Built-in SDLC events | /platform/ingest/v1/events.SDLC
Events—SDLC events | OpenPipeline Ingest API - POST Custom SDLC event endpoint | /platform/ingest/custom/events.SDLC/<custom-endpoint-name>
Events—Security events | OpenPipeline Ingest API - POST Built-in security events | /platform/ingest/v1/events.security
Events—Security events | OpenPipeline Ingest API - POST Custom security event endpoint | /platform/ingest/custom/events.security/<custom-endpoint-name>
Business events | OneAgent | - or oneagent
Business events | RUM Agent | - or rumagent
Business events | Business Events API | /api/v2/bizevents/ingest
Business events | Data Extraction | data_extraction
System events | Internally generated | - or system_events
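
If you want to check which ingest sources and pipelines your log records actually went through, the following DQL sketch groups logs by the OpenPipeline metadata attached during processing. It assumes that the dt.openpipeline.source and dt.openpipeline.pipelines fields are populated on log records in your environment; field availability can differ by data type and version:

  fetch logs
  | filter isNotNull(dt.openpipeline.pipelines)
  | summarize count(), by: { dt.openpipeline.source, dt.openpipeline.pipelines }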

Use cases

  • Configure multiple pipelines for the same data type, adopting processing instructions specific to the ingest source.

Best practices

  • To get started with OpenPipeline ingestion via API, reference Ingestion APIs.
  • To learn the path and type of the system events processed in your environment:
    1. Go to Notebooks.
    2. Create a new notebook containing the following query:
      fetch dt.system.events
      | filter isNotNull(dt.openpipeline.pipelines)
    3. Select Run.
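
  • To also see which kinds and providers of system events OpenPipeline processes, you can extend the query with a summarize step, for example:
      fetch dt.system.events
      | filter isNotNull(dt.openpipeline.pipelines)
      | summarize count(), by: { event.kind, event.provider }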

Pre-processing

Optional data processing that occurs after ingestion and before routing. By setting pre-processing, you can transform raw data into structured formats as soon as it reaches your Dynatrace SaaS environment. Pre-processed data is then routed to a pipeline and is available for further processing before storage. Note that pre-processing is available only for custom ingest sources.

Use cases

  • Apply a unified structure to different providers' data formats.

Best practices

Set up pre-processing to avoid creating complex matching conditions based on provider-specific data formats. This will help you streamline maintenance for routing and processing, for example, when you start ingesting data from a new provider.
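
For example, pre-processing on a custom ingest source can use DQL-style processor commands to map a provider-specific field onto your unified structure before routing. This is a minimal sketch under the assumption that the provider sends a field named src_system (both field names here are hypothetical):

  // copy the provider-specific field into the unified field, then drop the original
  fieldsAdd event.provider = src_system
  | fieldsRemove src_system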

Routing

After data is ingested (and optionally pre-processed), it's routed to pipelines. Routing depends on:

  • The data type

    Pipelines are specific to a data type. Different data types are routed to different pipelines.

  • The ingest source

    You can configure routing for each ingest source. Multiple ingest sources can be routed to the same pipeline if they are of the same data type.

  • The routing option

    • Static routing: data is routed to a specific pipeline, which remains fixed unless manually updated. Note that static routing is available only for custom ingest sources.

    • Dynamic routing: data is routed based on a matching condition. The matching condition is a DQL query that defines the data set you want to route (see the example after the table below).

    If a record matches a condition but you've already configured static routing for its ingest source, the match is skipped and the record is routed directly to the pipeline specified by the static routing.

    The following table summarizes which routing options are supported for available data types.

    Data type | Supported routing option
    Logs | Dynamic
    Metrics | Dynamic
    Spans | Dynamic
    Events—Generic events | Dynamic or static
    Events—SDLC events | Dynamic or static
    Events—Security events | Dynamic or static
    Business events | Dynamic
    System events | Dynamic
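
    For example, a dynamic routing rule for SDLC events could use a matching condition like the one below; records that match it are routed to the pipeline selected in that rule. The field name event.type and both values are hypothetical:

      event.provider == "jenkins" and event.type == "deployment"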

Use cases

  • Route data of an ingest source to a dedicated pipeline.

Best practices

When multiple routing options are available, choose according to the size of the data set. For example, large data sets benefit more from dynamic routing.

Processing

OpenPipeline processing occurs in pipelines containing instructions on how to structure, separate, and store your data. To learn more, see Processing.
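
As a minimal, hedged illustration of what such instructions can look like, a processing rule can apply DQL-style processor commands to every record that matches the rule before the record is stored (both field names below are hypothetical):

  fieldsAdd data.annotation = "enriched by OpenPipeline"  // hypothetical enrichment field
  | fieldsRemove internal.debug_payload                   // hypothetical noisy field dropped before storage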

Storage

The Dynatrace Grail database provides a single, unified storage solution for all your data types. OpenPipeline's target storage is Grail buckets. You can use built-in buckets and, if available for the data type, create new buckets with custom retention periods. Each bucket is assigned to a Grail table that you can query with DQL. Assign permissions to user groups or individual users to give them access to specific buckets and tables.

By default, OpenPipeline routes data to the data type's built-in pipeline, whose target storage is the data type's built-in Grail bucket. You can configure storage assignment:

  • For a custom ingest source, by directly defining its target storage.
  • For a pipeline, based on processing matching conditions.
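
For example, a pipeline's storage stage could use a matching condition like the one below to assign matching records to a custom bucket, and you can afterwards check where records landed with a DQL query. The namespace value, the custom bucket, and the dt.system.bucket field are assumptions for illustration:

  // matching condition in the pipeline's storage stage (hypothetical value)
  k8s.namespace.name == "payments-prod"

  // verify in a notebook where matching records were stored (assumes the dt.system.bucket field)
  fetch logs
  | filter k8s.namespace.name == "payments-prod"
  | fields timestamp, dt.system.bucket
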
Exceptions for system events

Storage and retention for system events are not configurable.