Ingests OpenTelemetry logs into Dynatrace. Use this endpoint as a target for OpenTelemetry exporters. For more information, see Dynatrace OTLP API endpoints.
The request consumes an application/x-protobuf payload.
| Method | Environment | Endpoint |
|---|---|---|
| POST | SaaS | https://{your-environment-id}.live.dynatrace.com/api/v2/otlp/v1/logs |
| POST | Environment ActiveGate, Cluster ActiveGate | https://{your-activegate-domain}:9999/e/{your-environment-id}/api/v2/otlp/v1/logs |
To execute this request, you need an access token with logs.ingest scope.
To learn how to obtain and use it, see Tokens and authentication.
When using log processing with the custom processing pipeline (OpenPipeline), ingest supports all JSON data types for attribute values. This requires SaaS version 1.295+ when using the SaaS API endpoint or ActiveGate version 1.295+ when using the ActiveGate API endpoint. In all other cases, all ingested values are converted to the string type.
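As a minimal sketch of how a client might target this endpoint, the following builds the SaaS ingest URL and the required headers for an OTLP/HTTP export. The environment ID and token values are placeholders; the actual protobuf payload would come from an OpenTelemetry exporter.

```python
def build_request(environment_id: str, token: str) -> tuple[str, dict]:
    """Return the SaaS ingest URL and headers for an OTLP/HTTP logs export."""
    url = f"https://{environment_id}.live.dynatrace.com/api/v2/otlp/v1/logs"
    headers = {
        # The payload must be a binary-protobuf ExportLogsServiceRequest.
        "Content-Type": "application/x-protobuf",
        # The token must carry the logs.ingest scope.
        "Authorization": f"Api-Token {token}",
    }
    return url, headers

# Placeholder values for illustration only.
url, headers = build_request("abc12345", "dt0c01.SAMPLE")
```

In practice you would pass the URL as the exporter endpoint and the `Authorization` header to your OpenTelemetry exporter configuration rather than posting by hand.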
| Parameter | Type | Description | In | Required |
|---|---|---|---|---|
| structure | string | (Optional) (SaaS only) Data model used for structuring the input into log records. The element can hold these values: raw, flattened. For more details, refer to the documentation. | query | Optional |
| X-Dynatrace-Options | string | (Optional) Contains ampersand-separated Dynatrace-specific parameters. Query parameter takes precedence over the header value. For more details, refer to the documentation. | header | Optional |
| body | ExportLogsServiceRequest | An ExportLogsServiceRequest message in binary protobuf format. | body | Required |
ExportLogsServiceRequest object
A standard ExportLogsServiceRequest protobuf request, defined in the official OpenTelemetry specification as the input type for the LogsService/Export RPC.
Each input log record has the following properties:
These fields are mapped to a single Dynatrace log record containing three special attributes (timestamp, loglevel, and content) as well as a map of other attributes. These four properties are set based on keys present in the input log record as follows:

timestamp:
- Set based on the Timestamp field of the input log record. See the differences between data models below for more details.
- Log events older than the Log Age limit are discarded. Timestamps more than 10 minutes ahead of the current time are replaced with the current time. See the Limitations section below for details.
- The default value is the current timestamp.

loglevel:
- Set based on the SeverityText field (1st priority) or SeverityNumber field (2nd priority) of the input log record. See the differences between data models below for more details.
- The default value is NONE.

content:
- Set based on the Body field of the input log record.
- If the Body field is not a string type, the value is stringified; complex types are stringified as JSON strings. For the kvlist_value type, see the differences between data models below for more details.

attributes:
- Contains all other attributes from the input record, collected from the Resource, InstrumentationScope, and Attributes sections.
- All attributes should preferably map to semantic attributes so that Dynatrace can interpret them correctly. See the list of Supported semantic attribute keys below, and refer to the Semantic Dictionary documentation page for more details.

See the sections below for additional details on attribute processing and limitations.
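The mapping above can be sketched as follows. Field names mirror the OTLP JSON encoding; this is an illustrative sketch of the described rules, not the ingest implementation.

```python
import json

def to_dynatrace_record(otlp_record: dict, now_ms: int) -> dict:
    """Map an OTLP log record to the four Dynatrace properties."""
    # timestamp: from the Timestamp field, defaulting to the current time.
    timestamp = otlp_record.get("timeUnixNano") or now_ms
    # loglevel: SeverityText first, SeverityNumber second, NONE otherwise.
    loglevel = (otlp_record.get("severityText")
                or otlp_record.get("severityNumber")
                or "NONE")
    # content: the Body field, stringified if it is not already a string.
    body = otlp_record.get("body", "")
    content = body if isinstance(body, str) else json.dumps(body)
    # attributes: everything else (resource/scope/record attributes).
    attributes = dict(otlp_record.get("attributes", {}))
    return {"timestamp": timestamp, "loglevel": loglevel,
            "content": content, "attributes": attributes}
```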
Attribute Processing
Attribute processing differs depending on tenant and environment type:
- Logs on Grail with OpenPipeline custom processing (Dynatrace SaaS version 1.295+, Environment ActiveGate version 1.295+): all JSON data types (string, number, boolean, null) are supported. All attributes can be used in queries. Keys are case-sensitive.
- Logs on Grail with OpenPipeline routed to Classic Pipeline: all attribute keys are lowercased and all attribute values are stringified. All attributes can be used in queries.
- Log Monitoring Classic: all attribute keys are lowercased and all attribute values are stringified. Custom attributes and semantic attributes can generally be used in queries.
Attribute processing also depends on the data model used for input processing. The effective data model for a specific request depends on the structure parameter or the default tenant data model, which is determined by tenant configuration. More details can be found in the data models documentation.
Data Model: Raw
This data model is relevant only for SaaS tenants.
Attributes with complex (JSON) values are converted to JSON strings. For example:
Input log record attribute in OTLP JSON encoding:
KeyValue {key: "test", value: {kvlist_value: {values: [{key: "attribute", value: {kvlist_value: {values: [{ key: "one", value: { string_value: "value 1" } }, { key: "two", value: { string_value: "value 2" } }]}}}]}}}
Result in Dynatrace record:
"test": "{\"attribute\":{\"one\":\"value 1\",\"two\":\"value 2\"}}"
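A sketch of the Raw-model conversion described above: complex (non-scalar) attribute values are serialized as JSON strings, while scalar values pass through unchanged. The compact separators are an illustrative choice, not a guarantee about the exact serialized form.

```python
import json

def raw_model_value(value):
    """Raw data model sketch: stringify complex attribute values as JSON."""
    if isinstance(value, (dict, list)):
        return json.dumps(value, separators=(",", ":"))
    return value

result = raw_model_value({"attribute": {"one": "value 1", "two": "value 2"}})
```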
Content-related behavior: If the Body field is of kvlist_value type (a list of key-value pairs), the structure is stringified as JSON string and put into the content attribute.
Data Model: Flattened
For Managed, this is the only supported data model.
Complex attribute values are flattened. The following guidelines outline the process:
Attributes with complex values are flattened, i.e., replaced with keys concatenated using a dot (.) until a simple value is reached in the hierarchy. For example:
Input log record attribute in OTLP JSON encoding:
KeyValue {key: "test", value: {kvlist_value: {values: [{key: "attribute", value: {kvlist_value: {values: [{ key: "one", value: { string_value: "value 1" } }, { key: "two", value: { string_value: "value 2" } }]}}}]}}}
Result in Dynatrace record:
"test.attribute.one" = "value 1"
"test.attribute.two" = "value 2"
Flattening proceeds up to the maximum nesting level specified by the Nested objects limit. Structures nested deeper than this are replaced with the string value <truncated due to nesting limit>. See the Limitations section below for details.
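The flattening and truncation rules above can be sketched as a simple recursion. The `max_depth=10` default here is an illustrative stand-in, not the actual Nested objects limit, and the depth-counting convention is an assumption.

```python
def flatten(obj: dict, max_depth: int = 10, prefix: str = "") -> dict:
    """Flattened data model sketch: dot-join nested keys; replace
    structures deeper than max_depth with a truncation marker."""
    out = {}
    for key, value in obj.items():
        full_key = f"{prefix}{key}"
        if isinstance(value, dict):
            if max_depth <= 1:
                # Structure nested deeper than the limit.
                out[full_key] = "<truncated due to nesting limit>"
            else:
                out.update(flatten(value, max_depth - 1, full_key + "."))
        else:
            out[full_key] = value
    return out
```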
Name conflicts are resolved as follows:
In case of a name conflict, where a key would be overwritten, the conflicting key is prefixed with "overwritten".
If further conflicts arise, an index is appended, starting with 1. Resource attributes take priority over scope attributes, and scope attributes take priority over log record attributes.
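The conflict-resolution rule can be sketched as below. The exact prefix format ("overwritten." and "overwritten1.") is an assumption for illustration; the documentation above only specifies the "overwritten" prefix and the index starting at 1.

```python
def resolve_conflict(record: dict, key: str, value) -> None:
    """Conflict-resolution sketch: keep a key that would be overwritten
    under an "overwritten"-prefixed name; index further conflicts from 1.
    The concrete prefix format here is assumed, not documented."""
    if key not in record:
        record[key] = value
        return
    candidate = f"overwritten.{key}"
    index = 0
    while candidate in record:
        index += 1
        candidate = f"overwritten{index}.{key}"
    record[candidate] = value
```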
Content-related behavior:
If the Body field is of kvlist_value type (a list of key-value pairs), the structure is processed in the same way as log record attributes, including flattening and conflict resolution.
Attributes found in Body may also be used for setting the timestamp, loglevel, and content attributes of the log record, as described below.
If the timestamp cannot be set based on the Timestamp field, the first of the following keys found in Body is used:
timestamp
@timestamp
_timestamp
eventtime
date
published_date
syslog.timestamp
Supported timestamp formats: UTC milliseconds, RFC3339, and RFC3164.
The default value is the current timestamp. If the timestamp value does not include a timezone, UTC is assumed.
If the loglevel cannot be set based on the Severity field, the first of the following keys found in Body is used:
loglevel
status
severity
level
syslog.severity
The content is set based on the first of the following keys found in Body:
content
message
payload
body
log
If no content attribute is found among supported content keys, the content attribute is set to an empty string.
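The fallback-key lookups above share one pattern: scan the Body structure for the first matching key, falling back to a default. A sketch, with the key lists copied from the rules above:

```python
TIMESTAMP_KEYS = ["timestamp", "@timestamp", "_timestamp", "eventtime",
                  "date", "published_date", "syslog.timestamp"]
LOGLEVEL_KEYS = ["loglevel", "status", "severity", "level", "syslog.severity"]
CONTENT_KEYS = ["content", "message", "payload", "body", "log"]

def first_match(body: dict, keys: list, default):
    """Return the value of the first listed key present in body."""
    for key in keys:
        if key in body:
            return body[key]
    return default

body = {"level": "ERROR", "message": "disk full"}
loglevel = first_match(body, LOGLEVEL_KEYS, "NONE")
content = first_match(body, CONTENT_KEYS, "")  # empty string if none match
```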
Data Model Independent Behavior
Array attribute values are converted to arrays of a uniform type. The target type is chosen according to the following rules:
Complex values (such as arrays or objects) are mapped to JSON string values.
If any value in the array is a string, or if any value must be converted to a string (e.g., an object or array), the target type of the entire array is string.
If all values in the source array are numeric, the target array type is numeric.
Null values are considered compatible with any type.
Byte arrays are converted to base64-encoded strings.
TraceID and SpanID attributes are mapped to the trace_id and span_id fields, and their values are converted to hexadecimal representation (e.g., 0xCAFEBABE).
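The array-conversion and ID-conversion rules above can be sketched as follows. The `0x` prefix mirrors the example given; the JSON serialization details are illustrative assumptions.

```python
import base64
import json

def convert_value(v):
    """Per-element sketch: complex values become JSON strings,
    byte arrays become base64-encoded strings."""
    if isinstance(v, (dict, list)):
        return json.dumps(v)
    if isinstance(v, bytes):
        return base64.b64encode(v).decode()
    return v

def convert_array(values: list) -> list:
    """Array sketch: if any element ends up a string, the whole array
    becomes strings; all-numeric arrays stay numeric; None is kept."""
    converted = [convert_value(v) for v in values]
    if any(isinstance(v, str) for v in converted):
        return [v if v is None else str(v) for v in converted]
    return converted

def id_to_hex(raw: bytes) -> str:
    """TraceID/SpanID sketch: render the raw bytes in hexadecimal."""
    return "0x" + raw.hex().upper()
```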
Automatic attribute: the dt.auth.origin attribute is automatically added to every log record ingested via the API. It holds the public part of the API token that the log source used to authenticate with the generic log ingest API.
Limitations
Please refer to the following documentation pages:
Supported semantic attribute keys:
audit.action
audit.identity
audit.result
aws.account.id
aws.arn
aws.log_group
aws.log_stream
aws.region
aws.resource.id
aws.resource.type
aws.service
azure.location
azure.resource.group
azure.resource.id
azure.resource.name
azure.resource.type
azure.subscription
cloud.account.id
cloud.availability_zone
cloud.provider
cloud.region
container.image.name
container.image.tag
container.name
db.cassandra.keyspace
db.connection_string
db.hbase.namespace
db.jdbc.driver_classname
db.mongodb.collection
db.mssql.instance_name
db.name
db.operation
db.redis.database_index
db.statement
db.system
db.user
device.address
dt.active_gate.group.name
dt.active_gate.id
dt.code.filepath
dt.code.func
dt.code.lineno
dt.code.ns
dt.ctg.calltype
dt.ctg.extendmode
dt.ctg.gatewayurl
dt.ctg.program
dt.ctg.rc
dt.ctg.requesttype
dt.ctg.serverid
dt.ctg.termid
dt.ctg.transid
dt.ctg.userid
dt.entity.cloud_application
dt.entity.cloud_application_instance
dt.entity.cloud_application_namespace
dt.entity.container_group
dt.entity.container_group_instance
dt.entity.custom_device
dt.entity.host
dt.entity.host_group
dt.entity.kubernetes_cluster
dt.entity.kubernetes_node
dt.entity.process_group
dt.entity.process_group_instance
dt.entity.service
dt.event.group_label
dt.event.key
dt.events.root_cause_relevant
dt.exception.messages
dt.exception.serialized_stacktraces
dt.exception.types
dt.extension.config.id
dt.extension.ds
dt.extension.name
dt.extension.status
dt.host.ip
dt.host.smfid
dt.host.snaid
dt.host_group.id
dt.http.application_id
dt.http.context_root
dt.ingest.debug_messages
dt.ingest.warnings
dt.kubernetes.cluster.id
dt.kubernetes.cluster.name
dt.kubernetes.config.id
dt.kubernetes.event.involved_object.kind
dt.kubernetes.event.involved_object.name
dt.kubernetes.event.reason
dt.kubernetes.node.name
dt.kubernetes.node.system_uuid
dt.kubernetes.topmost_controller.kind
dt.kubernetes.workload.kind
dt.kubernetes.workload.name
dt.network_zone.id
dt.openpipeline.source
dt.os.description
dt.os.type
dt.process.commandline
dt.process.executable
dt.process.name
dt.security_context
dt.source_entity
dt.source_entity_name
dt.source_entity_type
event.unique_identifier
faas.id
faas.instance
faas.name
faas.version
gcp.instance.id
gcp.instance.name
gcp.project.id
gcp.region
gcp.resource.type
geo.city_name
geo.country_name
geo.name
geo.region_name
host.hostname
host.id
host.image.id
host.image.name
host.image.version
host.name
host.type
http.client_ip
http.flavor
http.host
http.method
http.route
http.scheme
http.server_name
http.status_code
http.status_text
http.target
http.url
journald.unit
k8s.cluster.name
k8s.cluster.uid
k8s.container.name
k8s.cronjob.name
k8s.cronjob.uid
k8s.daemonset.name
k8s.daemonset.uid
k8s.deployment.name
k8s.deployment.uid
k8s.job.name
k8s.job.uid
k8s.namespace.name
k8s.node.name
k8s.pod.name
k8s.pod.uid
k8s.replicaset.name
k8s.replicaset.uid
k8s.service.name
k8s.statefulset.name
k8s.statefulset.uid
k8s.workload.kind
k8s.workload.name
log.source
log.source.origin
net.host.ip
net.host.name
net.host.port
net.peer.ip
net.peer.name
net.peer.port
net.transport
otel.scope.name
process.technology
service.instance.id
service.name
service.namespace
service.version
snmp.trap_oid
span_id
trace_id
winlog.eventid
winlog.keywords
winlog.level
winlog.opcode
winlog.provider
winlog.task
winlog.username
| Code | Type | Description |
|---|---|---|
| 200 | - | The request has been successfully accepted or partially accepted (i.e. when the server accepts only parts of the data and rejects the rest). |
| 400 | - | The request could not be processed. This may happen if the message is malformed. |
| 413 | - | The OTLP message exceeded the payload size limit. Retryable with exponential backoff strategy. |
| 500 | - | The request could not be processed due to an internal server error. |
| 502 | - | Failed. Bad Gateway. This may happen if an intermediate system (e.g., ActiveGate or a proxy) encounters an issue while forwarding the request. Retryable with exponential backoff strategy. |
| 503 | - | The service is currently unavailable. Retryable with exponential backoff strategy. |
| 504 | - | Failed. Gateway Timeout. This may occur due to an issue in the underlying infrastructure causing a delay in processing the request. Retryable with exponential backoff strategy. |
| 4XX | Error | Client side error. |
| 5XX | Error | Server side error. |
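Several of the status codes above are marked as retryable with an exponential backoff strategy. A minimal client-side sketch, where `send` is any callable returning an HTTP status code (the retryable-code set is taken from the table):

```python
import time

RETRYABLE = {413, 502, 503, 504}

def send_with_backoff(send, payload, max_attempts=5, base_delay=1.0):
    """Retry sketch: resend on retryable status codes, doubling the
    delay after each failed attempt (exponential backoff)."""
    delay = base_delay
    for attempt in range(max_attempts):
        status = send(payload)
        if status not in RETRYABLE:
            return status
        if attempt < max_attempts - 1:
            time.sleep(delay)
            delay *= 2
    return status  # last retryable status after exhausting attempts
```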