Log Monitoring API - POST ingest logs
Pushes custom logs to Dynatrace.
This endpoint is available in your SaaS environment. Alternatively, you can expose it on an Environment ActiveGate with the Log analytics collector module enabled. This module is enabled by default on all ActiveGates.
The request consumes one of the following payload types:
- text/plain: limited to a single log event.
- application/json: supports multiple log events in a single payload.

Be sure to set the correct Content-Type header and encode the payload in UTF-8, for example: application/json; charset=utf-8.
POST

| Deployment | Endpoint |
| --- | --- |
| Managed, Dynatrace for Government | https://{your-domain}/e/{your-environment-id}/api/v2/logs/ingest |
| Environment and Cluster ActiveGate (default port 9999) | https://{your-activegate-domain}:9999/e/{your-environment-id}/api/v2/logs/ingest |
Authentication
To execute this request, you need an access token with the logs.ingest scope. To learn how to obtain and use it, see Tokens and authentication.
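For illustration, here is a minimal Python sketch of a single-event text/plain ingest call using only the standard library. The URL placeholders and the API_TOKEN value are stand-ins for your own environment, not real values:

```python
# Minimal sketch: send one plain-text log event to the ingest endpoint.
# URL placeholders and the token are illustrative -- substitute your own.
import urllib.request

URL = "https://{your-domain}/e/{your-environment-id}/api/v2/logs/ingest"
API_TOKEN = "dt0c01.XXXX"  # access token with the logs.ingest scope

request = urllib.request.Request(
    URL,
    data="Exception: Custom error log sent via Generic Log Ingest".encode("utf-8"),
    headers={
        "Content-Type": "text/plain; charset=utf-8",
        "Authorization": f"Api-Token {API_TOKEN}",
    },
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status)  # 204 on full success
```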
Parameters
When using log processing with the custom processing pipeline (OpenPipeline), ingest supports all JSON data types for attribute values. This requires SaaS version 1.295+ when using the SaaS API endpoint or ActiveGate version 1.295+ when using the ActiveGate API endpoint. In all other cases, all ingested values are converted to the string type.
The body of the request. Contains one or more log events to be ingested.
The endpoint accepts one of the following payload types, defined by the Content-Type header:
- text/plain: supports only one log event.
- application/json: supports multiple log events in a single payload.
Request body objects
The LogMessageJson object

The log message in JSON format. Use one object representing a single event, or an array of objects representing multiple events.
The object might contain the following types of keys (the possible key values are listed below):
- Timestamp:
  - The earliest allowed timestamp for a log event is the current time minus 24 hours. If a log event contains an earlier timestamp, the event is dropped.
  - A log event timestamp can be at most 10 minutes in the future. If a log event contains a later timestamp, it is overridden by the current time on the server.
  - The following formats are supported: UTC milliseconds, RFC3339, and RFC3164. If the timestamp is missing, the current timestamp is used. If the timestamp format is unsupported, the current timestamp is used and the original value is stored in the unparsed_timestamp attribute (for Log Monitoring Classic, this attribute isn't indexed).
- Severity. If not set, NONE is used.
- Content. If the content key is not set, the whole JSON is parsed as the content.
- Attributes:
  - Logs on Grail with OpenPipeline custom processing (Dynatrace SaaS version 1.295+, Environment ActiveGate version 1.295+): all JSON data types (string, number, boolean, null) are supported. All attributes are indexed and can be used in queries. Keys are case-sensitive.
  - Logs on Grail without OpenPipeline custom processing: only values of the string type are supported. All attributes are indexed and can be used in queries. Keys are case-insensitive (lowercased).
  - Log Monitoring Classic: only values of the string type are supported. Semantic attributes are indexed and can be used in queries; unsupported keys are not indexed and can't be used in indexing and aggregations. Keys are case-insensitive (lowercased).
- Semantic attributes are displayed in aggregations (facets) in the Log and Events Viewer. Refer to the Semantic Dictionary documentation for more details.
- Automatic attribute. The dt.auth.origin attribute is automatically added to every log record ingested via the API. It contains the public part of the access token that the log source uses to authenticate against the generic log ingest API.
Attributes structure
Complex objects are flattened to make them easier to handle and represent. The following guidelines outline the process; a sketch implementing these rules follows the list:
- The keys are concatenated using a dot (.) until a simple value is reached in the hierarchy. For example:
  Base JSON: {"test": { "attribute": {"one": "value 1", "two": "value 2"}}}
  Result: {"test.attribute.one": "value 1", "test.attribute.two": "value 2"}
- When an array is encountered, a multi-value attribute is created at that level. If there are non-simple values within the array, their JSON-stringified form is kept.
- Name conflicts are resolved as follows:
  - When a key would be overwritten, the later occurrence is prefixed with overwrittenN, where the index N starts at 1. For example:
    Base JSON: {"host.name": "abc", "host": { "name": "xyz"}}
    Result: {"host.name": "abc", "overwritten1.host.name": "xyz"}
  - Each further conflict on the same key increments the index:
    Base JSON: {"service.instance.id": "abc", "service": { "instance.id": "xyz", "instance": { "id": "123"}}}
    Result: {"service.instance.id": "abc", "overwritten1.service.instance.id": "xyz", "overwritten2.service.instance.id": "123"}
Limitations
The object value can be a single constant or an array of constants. The length of a value is limited; any content exceeding the limit is trimmed. Default limits:
- Attributes: up to 50 attributes.
- Content: 65,536 bytes for Grail tenants; 8,192 bytes for other tenants.
- Semantic attributes: 250 bytes per value, up to 32 attribute values.
- Payload: the maximum size of a single request is 5 MB.
- Events: up to 50,000 log events per payload.
Supported timestamp keys:
- @timestamp
- _timestamp
- date
- eventtime
- published_date
- syslog.timestamp
- timestamp
Supported content keys:
- body
- content
- message
- payload
Supported severity keys:
- level
- loglevel
- severity
- status
- syslog.severity
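For example, a log event that uses the aliases @timestamp, loglevel, and message (all listed above) is interpreted the same way as one using timestamp, severity, and content; the values here are illustrative:

{"@timestamp": "2022-01-17T22:12:31.000Z", "loglevel": "error", "message": "Exception: Custom error log sent via Generic Log Ingest"}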
Supported semantic attribute keys:
- audit.action
- audit.identity
- audit.result
- aws.account.id
- aws.arn
- aws.log_group
- aws.log_stream
- aws.region
- aws.resource.id
- aws.resource.type
- aws.service
- azure.location
- azure.resource.group
- azure.resource.id
- azure.resource.name
- azure.resource.type
- azure.subscription
- cloud.account.id
- cloud.availability_zone
- cloud.provider
- cloud.region
- container.image.name
- container.image.tag
- container.name
- db.cassandra.keyspace
- db.connection_string
- db.hbase.namespace
- db.jdbc.driver_classname
- db.mongodb.collection
- db.mssql.instance_name
- db.name
- db.operation
- db.redis.database_index
- db.statement
- db.system
- db.user
- device.address
- dt.active_gate.group.name
- dt.active_gate.id
- dt.code.filepath
- dt.code.func
- dt.code.lineno
- dt.code.ns
- dt.ctg.calltype
- dt.ctg.extendmode
- dt.ctg.gatewayurl
- dt.ctg.program
- dt.ctg.rc
- dt.ctg.requesttype
- dt.ctg.serverid
- dt.ctg.termid
- dt.ctg.transid
- dt.ctg.userid
- dt.entity.cloud_application
- dt.entity.cloud_application_instance
- dt.entity.cloud_application_namespace
- dt.entity.container_group
- dt.entity.container_group_instance
- dt.entity.custom_device
- dt.entity.host
- dt.entity.host_group
- dt.entity.kubernetes_cluster
- dt.entity.kubernetes_node
- dt.entity.process_group
- dt.entity.process_group_instance
- dt.entity.service
- dt.event.group_label
- dt.event.key
- dt.events.root_cause_relevant
- dt.exception.messages
- dt.exception.serialized_stacktraces
- dt.exception.types
- dt.extension.config.id
- dt.extension.ds
- dt.extension.name
- dt.extension.status
- dt.host.ip
- dt.host.smfid
- dt.host.snaid
- dt.host_group.id
- dt.http.application_id
- dt.http.context_root
- dt.ingest.debug_messages
- dt.ingest.warnings
- dt.kubernetes.cluster.id
- dt.kubernetes.cluster.name
- dt.kubernetes.config.id
- dt.kubernetes.event.involved_object.kind
- dt.kubernetes.event.involved_object.name
- dt.kubernetes.event.reason
- dt.kubernetes.node.name
- dt.kubernetes.node.system_uuid
- dt.kubernetes.topmost_controller.kind
- dt.kubernetes.workload.kind
- dt.kubernetes.workload.name
- dt.network_zone.id
- dt.os.description
- dt.os.type
- dt.process.commandline
- dt.process.executable
- dt.process.name
- dt.security_context
- dt.source_entity
- dt.source_entity_name
- dt.source_entity_type
- event.unique_identifier
- faas.id
- faas.instance
- faas.name
- faas.version
- gcp.instance.id
- gcp.instance.name
- gcp.project.id
- gcp.region
- gcp.resource.type
- geo.city_name
- geo.country_name
- geo.name
- geo.region_name
- host.hostname
- host.id
- host.image.id
- host.image.name
- host.image.version
- host.name
- host.type
- http.client_ip
- http.flavor
- http.host
- http.method
- http.route
- http.scheme
- http.server_name
- http.status_code
- http.status_text
- http.target
- http.url
- k8s.cluster.name
- k8s.container.name
- k8s.cronjob.name
- k8s.cronjob.uid
- k8s.daemonset.name
- k8s.daemonset.uid
- k8s.deployment.name
- k8s.deployment.uid
- k8s.job.name
- k8s.job.uid
- k8s.namespace.name
- k8s.pod.name
- k8s.pod.uid
- k8s.replicaset.name
- k8s.replicaset.uid
- k8s.statefulset.name
- k8s.statefulset.uid
- log.source
- log.source.origin
- net.host.ip
- net.host.name
- net.host.port
- net.peer.ip
- net.peer.name
- net.peer.port
- net.transport
- process.technology
- service.instance.id
- service.name
- service.namespace
- service.version
- snmp.trap_oid
- span_id
- trace_id
- winlog.eventid
- winlog.level
- winlog.opcode
- winlog.provider
- winlog.task
Request body JSON model
This is a model of the request body, showing the possible elements. Adjust it before using it in an actual request.
[{"content": "Exception: Custom error log sent via Generic Log Ingest","log.source": "/var/log/syslog","timestamp": "2022-01-17T22:12:31.0000","severity": "error","custom.attribute": "attribute value"},{"content": "Exception: Custom error log sent via Generic Log Ingest","log.source": "/var/log/syslog","timestamp": "2022-01-17T22:12:35.0000"},{"content": "Exception: Custom error log sent via Generic Log Ingest","log.source": "/var/log/syslog"},{"content": "Exception: Custom error log sent via Generic Log Ingest"}]
Response
Response codes

| Code | Description |
| --- | --- |
| 200 | Only a part of the input events was ingested due to event invalidity. For details, check the response body. |
| 204 | Success. The response doesn't have a body. |
| 402 | Failed. This is due either to the status of your licensing agreement or because you've exhausted your DPS license. |
| 404 | Failed. The requested resource doesn't exist. This may happen when no ActiveGate with the Log Analytics Collector module enabled is available. |
| 413 | Failed. The request payload size is too big. This may happen when the payload byte size exceeds the limit or when the ingested payload is a JSON array whose size exceeds the limit. |
| 429 | Failed. Too many requests. This may happen when ActiveGate is unable to process more requests at the moment or when log ingest is disabled. |
| 501 | Failed. The server either does not recognize the request method or lacks the ability to fulfil the request. In Log Monitoring Classic, this may happen when indexed log storage is not enabled. |
| 503 | Failed. The server is currently unable to handle the request. This may happen when ActiveGate is overloaded. |
Response body objects
The SuccessEnvelope object

| Element | Type |
| --- | --- |
| details | Success |

The Success object

| Element | Type | Description |
| --- | --- | --- |
| code | integer | The HTTP status code |
| message | string | Detailed message |
Response body JSON model
{"details": {"code": 1,"message": "string"}}