OpenPipeline limits are enforced
Each record type, such as logs, events, or spans, has a corresponding configuration scope in OpenPipeline. When a record-type limit is more restrictive than the corresponding OpenPipeline limit, the record-type limit takes precedence. For limits specific to each record type, see:
These limits are enforced at ingest time and vary by signal type.
If the timestamp is more than 10 minutes in the future, it's adjusted to the ingest server time plus 10 minutes.
The following table defines the earliest accepted timestamp by signal group. Records outside the accepted range are dropped before processing begins.
| Item | Earliest timestamp |
|---|---|
| Logs, events, business events, system events | Ingest time minus 24 hours |
| Metrics, extracted metrics, and Davis events | Ingest time minus 1 hour |
The maximum request payload size is 10 MB per configuration scope.
Limits that apply when data is ingested via the OpenPipeline Ingest API endpoints:
Numerical and string timestamp values are supported. OpenPipeline parses the timestamp as follows.

Numerical values:
- Values up to 100_000_000_000 are parsed as seconds.
- Values up to 100_000_000_000_000 are parsed as milliseconds.
- Values up to 9_999_999_999_999_999 are parsed as microseconds.

String values:
- UNIX epoch milliseconds or seconds
- RFC3339 formats
- RFC3164 formats

If the timestamp can't be parsed, the timestamp field is overwritten with the ingest time. If the record doesn't have a timestamp field, the field timestamp is set to the ingest time.
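The magnitude thresholds above can be sketched as a small heuristic. This is an illustration only; the exact boundary handling and the fallback for larger values are assumptions, not confirmed behavior.

```python
def parse_numeric_timestamp(value: int) -> tuple[int, str]:
    """Infer the unit of a numeric epoch timestamp from its magnitude,
    mirroring the thresholds quoted above, and return epoch nanoseconds.
    The '<' comparisons and the nanoseconds fallback are assumptions."""
    if value < 100_000_000_000:          # parsed as seconds
        return value * 1_000_000_000, "seconds"
    if value < 100_000_000_000_000:      # parsed as milliseconds
        return value * 1_000_000, "milliseconds"
    if value < 9_999_999_999_999_999:    # parsed as microseconds
        return value * 1_000, "microseconds"
    return value, "nanoseconds"          # assumption: larger values as nanoseconds

print(parse_numeric_timestamp(1_700_000_000))      # interpreted as seconds
print(parse_numeric_timestamp(1_700_000_000_000))  # interpreted as milliseconds
```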
OpenPipeline configuration limits apply at different levels: the configuration scope, the pipeline, and the pipeline group.
The following table defines per-configuration-scope limits.
| Item | Maximum limit |
|---|---|
| Pipelines number 1 | 100 |
| Total pipeline objects size | 70 MB |
| Routes number | 3,000 |
| Total routing object size | 10 MB |
| Ingest sources number | 100 |
| Total ingest source objects size | 30 MB |
1 Applies to all pipeline types, including custom pipelines (used standalone or as member pipelines), built-in pipelines, and composition pipelines.
The following table defines per-pipeline limits.
| Item | Maximum limit |
|---|---|
| Processors number | 1,000 |
| Processors number in a composition pipeline | 100 |
The following table defines per-pipeline-group limits.
| Item | Maximum limit |
|---|---|
| Pipeline slots number | 10 |
| Member pipelines number | 1,000 |
The pipeline role is permanent. Converting roles—from member to composition, or composition to member—isn't supported.
The endpoint path is a unique name starting with a literal that defines the endpoint. It's case-insensitive and supports alphanumeric characters and dot (.). For example: Endpoint.1.
Endpoint path doesn't support:
- Dot (.) as the last character
- Consecutive dots (..)
- Null or empty input

The maximum length of the processor matching condition is 1,500 characters.
The maximum length of the DQL processor script is 8,192 characters.
The Smartscape ID calculation supports strings only: the ID components must be of type string.
Pre-process records to convert the values you need to the string data type.
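A minimal illustration of such pre-processing. The field names are hypothetical, and in OpenPipeline itself this conversion would be done by a processor before the ID calculation; the Python below only demonstrates the type requirement.

```python
def to_id_components(record: dict, keys: list[str]) -> list[str]:
    """Coerce the fields used as Smartscape ID components to strings,
    since the ID calculation accepts string components only.
    (Illustrative helper; field names are hypothetical.)"""
    return [str(record[k]) for k in keys]

record = {"host.id": 12345, "app": "checkout"}
print(to_id_components(record, ["host.id", "app"]))  # ['12345', 'checkout']
```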
OpenPipeline restricts the use of certain fields at configuration time. Some restrictions apply across all configuration scopes; others apply only within the configuration scope of a specific signal type.
Some fields are view-only, and others are available only in stages after the Processing stage.
The following fields are view-only; editing them via OpenPipeline is not supported.
- dt.ingest.*
- dt.openpipeline.*
- dt.retain.*
- dt.system.*

The following fields are added after the Processing stage, when Dynatrace runs its entity detection. You can use them only in stages after the Processing stage, not in pre-processing, routing, or the Processing stage itself.
- dt.entity.aws_lambda_function
- dt.entity.cloud_application
- dt.entity.cloud_application_instance
- dt.entity.cloud_application_names
- dt.entity.custom_device
- dt.entity.<genericEntityType>
- dt.entity.kubernetes_cluster
- dt.entity.kubernetes_node
- dt.entity.kubernetes_service
- dt.entity.service
- dt.env_vars.dt_tags
- dt.kubernetes.cluster.id
- dt.kubernetes.cluster.name
- dt.loadtest.custom_entity.enriched_custom_device_name
- dt.process.name 1
- dt.source_entity
- k8s.cluster.name 2

1 dt.process.name is available only in classic pipelines. To obtain equivalent results before the Processing stage, use dt.process_group.detected_name instead.
2 Requires OneAgent version 1.309 and Dynatrace Operator version 1.4.2+. The field is available before the Processing stage if the OneAgent Log module is running in standalone mode.
These field restrictions apply to the metric configuration scope and are enforced in addition to any native limits of the metrics signal type.
Fields excluded from dynamic route matching conditions and in the Processing stage
- dt.entity.*

Fields excluded from the Processing stage
- dt.system.monitoring_source
- metric.key
- metric.type
- timestamp
- value

These field restrictions apply to the spans configuration scope and are enforced in addition to any native limits of the spans signal type.
Fields excluded from dynamic route matching conditions and in the Processing stage
- dt.entity.service
- endpoint.name
- failure_detection.*
- request.is_failed
- request.is_root_span
- service_mesh.is_proxy
- service_mesh.is_failed
- supportability.*

Fields excluded from the Processing stage
- dt.ingest.size
- dt.retain.size
- duration
- end_time
- span.id
- start_time
- trace.id

These limits are enforced when records are processed in OpenPipeline and apply across all configuration scopes. Exceeding a processing limit can cause a record to be dropped or have further processing stopped.
Processing memory is limited. Each change to a record—for example, parsing a field—decreases the available processing memory. Once the available processing memory is exhausted, the record is dropped. This is reported in metric dt.sfm.openpipeline.not_stored.records with dimension reason set to buffer_overflow.
The maximum size of a record after processing is 16 MB.
Log attributes can be up to 32 KB in size. When log attributes are added to the event template, the size of each attribute is truncated to 4,096 characters.
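The template truncation can be sketched as follows; the constants and names are illustrative, not product code.

```python
MAX_ATTRIBUTE_BYTES = 32 * 1024     # log attribute size limit (32 KB)
TEMPLATE_ATTRIBUTE_CHARS = 4_096    # per-attribute cap in the event template

def truncate_for_template(attributes: dict[str, str]) -> dict[str, str]:
    """Mimic the truncation described above: each attribute value is cut
    to 4,096 characters when placed in the event template. (A sketch only;
    the product may truncate differently, e.g. with an ellipsis marker.)"""
    return {k: v[:TEMPLATE_ATTRIBUTE_CHARS] for k, v in attributes.items()}

attrs = {"payload": "x" * 10_000}
print(len(truncate_for_template(attrs)["payload"]))  # 4096
```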
You can extract data from a single record in a maximum of five different pipelines (tracked in the dt.openpipeline.pipelines field). Once the threshold is exceeded, data extraction is no longer performed on that record. The record continues to be processed and persisted.
This processing-time validation applies to all logs processed in OpenPipeline and determines whether a processed record is stored. A processed log record is persisted only if all the following field conditions are satisfied. If the schema is not valid, the log is dropped.
| Field | Exists | Accepted Types | Value Constraints |
|---|---|---|---|
| | Yes | | Within the ingestion range |
| | Yes | | Not evaluated |
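A sketch of this persist-or-drop check. The required field names are left unspecified in the table above, so the ones used here (timestamp and content) are hypothetical placeholders chosen only to make the example concrete.

```python
from datetime import datetime, timedelta, timezone

def is_storable_log(record: dict, ingest_time: datetime) -> bool:
    """Sketch of the schema validation described above. Assumptions:
    a 'timestamp' field must exist and fall within the ingestion range,
    and a second required field ('content' here) must exist but its
    value is not evaluated. Both field names are hypothetical."""
    ts = record.get("timestamp")
    if not isinstance(ts, datetime):
        return False
    earliest = ingest_time - timedelta(hours=24)  # logs: ingest time minus 24 h
    latest = ingest_time + timedelta(minutes=10)  # future cap from the ingest rules
    if not (earliest <= ts <= latest):
        return False
    return "content" in record                    # existence only; value not evaluated

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(is_storable_log({"timestamp": now, "content": "hello"}, now))  # True
```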