OpenPipeline limits

OpenPipeline limits are enforced

  • At different times, including ingest time, processing, and pipeline configuration.
  • At different levels, such as per configuration scope or per pipeline.

Record-type limits

Each record type, such as logs, events, or spans, has a corresponding configuration scope in OpenPipeline. When a record-type limit is more restrictive than the general OpenPipeline limit, the record-type limit takes precedence. For limits specific to each record type, see the documentation for that record type.

Ingestion

These limits are enforced at ingest time and vary by signal type.

Record maximum timestamp

If the timestamp is more than 10 minutes in the future, it's adjusted to the ingest server time plus 10 minutes.

Record minimum timestamp

The following table defines the earliest accepted timestamp by signal group. Records outside the accepted range are dropped before processing begins.

| Item | Earliest timestamp |
|---|---|
| Logs, events, business events, system events | The ingest time minus 24 hours |
| Metrics, extracted metrics, and Davis events | The ingest time minus 1 hour |
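The acceptance rules above can be sketched as follows. This is an illustrative helper, not part of any OpenPipeline API; the function name and signal-group keys are assumptions.

```python
from datetime import datetime, timedelta, timezone

# Illustrative constants distilled from the documented rules.
MAX_FUTURE = timedelta(minutes=10)
MIN_AGE = {
    "logs": timedelta(hours=24),    # logs, events, business events, system events
    "metrics": timedelta(hours=1),  # metrics, extracted metrics, Davis events
}

def accept_timestamp(ts: datetime, signal_group: str, ingest_time: datetime):
    """Return the stored timestamp, or None if the record is dropped."""
    if ts < ingest_time - MIN_AGE[signal_group]:
        return None                      # too old: dropped before processing
    if ts > ingest_time + MAX_FUTURE:
        return ingest_time + MAX_FUTURE  # clamped to ingest time + 10 minutes
    return ts

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
print(accept_timestamp(now - timedelta(hours=2), "metrics", now))  # None: dropped
```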

Request payload size

The maximum request payload size is 10 MB per configuration scope.

OpenPipeline Ingest API

Limits that apply when data is ingested via the OpenPipeline Ingest API endpoints:

Timestamp value

Numerical and string timestamp values are supported. OpenPipeline parses the timestamp as follows.

  • Numerical values
    • Up to 100_000_000_000 are parsed as SECONDS.
    • Up to 100_000_000_000_000 are parsed as MILLISECONDS.
    • Up to 9_999_999_999_999_999 are parsed as MICROSECONDS.
  • String values are parsed as one of the following:
    • UNIX epoch milliseconds or seconds
    • RFC3339 formats
    • RFC3164 formats
  • Values that cannot be parsed cause the timestamp to be overwritten with the ingest time.

If the record doesn't have a timestamp field, the timestamp field is set to the ingest time.
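The numeric thresholds above can be expressed as a small sketch. Whether the boundaries are inclusive isn't stated in the source; this sketch assumes "up to" means inclusive.

```python
def infer_epoch_unit(value: int) -> str:
    """Illustrative mapping of a numeric timestamp to its inferred unit."""
    if value <= 100_000_000_000:
        return "SECONDS"
    if value <= 100_000_000_000_000:
        return "MILLISECONDS"
    if value <= 9_999_999_999_999_999:
        return "MICROSECONDS"
    return "UNPARSEABLE"  # falls back to the ingest time

print(infer_epoch_unit(1_700_000_000))      # SECONDS
print(infer_epoch_unit(1_700_000_000_000))  # MILLISECONDS
```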

OpenPipeline configuration

OpenPipeline configuration limits apply at different levels, such as

  • Per configuration scope
  • Per pipeline
  • Per pipeline group

Configuration scope

The following table defines per-configuration-scope limits.

| Item | Maximum limit |
|---|---|
| Pipelines number ¹ | 100 |
| Total pipeline objects size | 70 MB |
| Routes number | 3,000 |
| Total routing object size | 10 MB |
| Ingest sources number | 100 |
| Total ingest source objects size | 30 MB |

¹ For all pipeline types, including custom pipelines used as standalone or as member pipelines, built-in pipelines, and composition pipelines.

Pipeline

The following table defines per-pipeline limits.

| Item | Maximum limit |
|---|---|
| Processors number | 1,000 |
| Processors number in a composition pipeline | 100 |

Pipeline group

The following table defines per-pipeline-group limits.

| Item | Maximum limit |
|---|---|
| Pipeline slots number | 10 |
| Member pipelines number | 1,000 |

Pipeline role

The pipeline role is permanent. Converting roles—from member to composition, or composition to member—isn't supported.

Allowed characters in the endpoint path

The endpoint path is a unique name starting with a literal that defines the endpoint. It's case-insensitive and supports alphanumeric characters and the dot (.). For example: Endpoint.1.

The endpoint path doesn't support:

  • A dot (.) as the last character
  • Whitespace
  • Consecutive dots (..)
  • Null or empty input
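The restrictions above can be captured in a single pattern. This validator is an assumption distilled from the documented rules, not an official specification.

```python
import re

# Only alphanumerics and dots; non-empty; last character not a dot;
# the lookahead rejects consecutive dots anywhere in the path.
ENDPOINT_PATH = re.compile(r"^(?!.*\.\.)[A-Za-z0-9.]*[A-Za-z0-9]$")

def is_valid_endpoint_path(path: str) -> bool:
    """Illustrative check against the documented endpoint-path restrictions."""
    return bool(path) and ENDPOINT_PATH.fullmatch(path) is not None

print(is_valid_endpoint_path("Endpoint.1"))   # True
print(is_valid_endpoint_path("Endpoint.1.")) # False: trailing dot
```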

Processor matching condition

The maximum length of the processor matching condition is 1,500 characters.

DQL processor

The maximum length of the DQL processor script is 8,192 characters.

Smartscape node processor

The Smartscape ID calculation supports only strings: each ID component must be of type string.

Pre-process records to convert the values you need to the string data type.

Restricted fields

OpenPipeline restricts the use of certain fields at configuration time. Some restrictions apply across all configuration scopes; others apply only within the configuration scope of a specific signal type.

All configuration scopes

Some fields are view-only, and others are available only in stages after the Processing stage.

  • The following fields are view-only; editing them via OpenPipeline is not supported.

    • dt.ingest.*
    • dt.openpipeline.*
    • dt.retain.*
    • dt.system.*
  • The following fields are added after the Processing stage, when Dynatrace runs its entity detection. You can use them only in stages after the Processing stage, not in pre-processing, routing, or the Processing stage itself.

    • dt.entity.aws_lambda_function
    • dt.entity.cloud_application
    • dt.entity.cloud_application_instance
    • dt.entity.cloud_application_names
    • dt.entity.custom_device
    • dt.entity.<genericEntityType>
    • dt.entity.kubernetes_cluster
    • dt.entity.kubernetes_node
    • dt.entity.kubernetes_service
    • dt.entity.service
    • dt.env_vars.dt_tags
    • dt.kubernetes.cluster.id
    • dt.kubernetes.cluster.name
    • dt.loadtest.custom_entity.enriched_custom_device_name
    • dt.process.name ¹
    • dt.source_entity
    • k8s.cluster.name ²

    ¹ dt.process.name is available only in classic pipelines. To obtain equivalent results before the Processing stage, use dt.process_group.detected_name instead.

    ² Requires OneAgent version 1.309 and Dynatrace Operator version 1.4.2+. The field is available before the Processing stage if the OneAgent Log module is running in standalone mode.

Metric configuration scope

These field restrictions apply to the metric configuration scope and are enforced in addition to any native limits of the metrics signal type.

  • Fields excluded from dynamic route matching conditions and from the Processing stage

    • dt.entity.*
  • Fields excluded from the Processing stage

    • dt.system.monitoring_source
    • metric.key
    • metric.type
    • timestamp
    • value

Span configuration scope

These field restrictions apply in the spans configuration scope and are enforced in addition to any native limits of the spans signal type.

  • Fields excluded from dynamic route matching conditions and from the Processing stage

    • dt.entity.service
    • endpoint.name
    • failure_detection.*
    • request.is_failed
    • request.is_root_span
    • service_mesh.is_proxy
    • service_mesh.is_failed
    • supportability.*
  • Fields excluded from the Processing stage

    • dt.ingest.size
    • dt.retain.size
    • duration
    • end_time
    • span.id
    • start_time
    • trace.id

Processing

These limits are enforced when records are processed in OpenPipeline and apply across all configuration scopes. Exceeding a processing limit can cause a record to be dropped or have further processing stopped.

Processing memory exhaustion

Processing memory is limited. Each change to a record—for example, parsing a field—decreases the available processing memory. Once the available processing memory is exhausted, the record is dropped. This is reported in the metric dt.sfm.openpipeline.not_stored.records with the dimension reason set to buffer_overflow.

Size of record after processing

The maximum size of a record after processing is 16 MB.

Size of extracted log attributes

Log attributes can be up to 32 KB in size. When log attributes are added to the event template, the size of each attribute is truncated to 4,096 characters.
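The template truncation rule above amounts to cutting each attribute value at 4,096 characters. A minimal sketch, with an illustrative helper name:

```python
# Assumed limit from the documentation: attribute values added to the
# event template are truncated to 4,096 characters.
TEMPLATE_ATTR_LIMIT = 4096

def truncate_for_template(attributes: dict) -> dict:
    """Illustrative truncation of log attribute values for the event template."""
    return {key: value[:TEMPLATE_ATTR_LIMIT] for key, value in attributes.items()}

attrs = {"payload": "x" * 10_000, "level": "INFO"}
print(len(truncate_for_template(attrs)["payload"]))  # 4096
```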

Number of extractions for a single record

You can extract data from a single record in a maximum of five different pipelines (dt.openpipeline.pipelines). Once the threshold is exceeded, data extraction is no longer performed on the record. The record continues to be processed and persisted.

Schema validation for logs

This processing-time validation applies to all logs processed in OpenPipeline and determines whether a processed record is stored. A processed log record is persisted only if all the following field conditions are satisfied. If the schema is not valid, the log is dropped.

| Field | Exists | Accepted types | Value constraints |
|---|---|---|---|
| timestamp | Yes | String, Numerical | Within the ingestion range |
| content | Yes | String | Not evaluated |
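The schema check above can be sketched as a predicate over the two required fields. The field names come from the table; the helper itself and its in-range flag are illustrative.

```python
def is_storable_log(record: dict, timestamp_in_range: bool) -> bool:
    """Illustrative check: a log record is persisted only if it passes all rows."""
    # Both fields must exist.
    if "timestamp" not in record or "content" not in record:
        return False
    # timestamp must be a string or numerical value.
    if not isinstance(record["timestamp"], (str, int, float)):
        return False
    # content must be a string; its value is not evaluated.
    if not isinstance(record["content"], str):
        return False
    # timestamp must fall within the accepted ingestion range.
    return timestamp_in_range

print(is_storable_log({"timestamp": 1_700_000_000, "content": "ok"}, True))  # True
```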
