Dynatrace Operator version 1.6+
Enable Dynatrace telemetry endpoints in Kubernetes for cluster-local data ingest.
The data ingest token needs the scopes openTelemetryTrace.ingest, logs.ingest, and metrics.ingest and must be provided via the dataIngestToken field in the same secret as the API token.
The following two steps explain how to set up and use telemetry ingest endpoints.
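For reference, a minimal sketch of such a secret, assuming the common convention that the secret is named after the DynaKube and lives in the dynatrace namespace (token values are placeholders), could look like this:
apiVersion: v1
kind: Secret
metadata:
  name: dynakube        # assumed to match the DynaKube name
  namespace: dynatrace
type: Opaque
stringData:
  apiToken: <your-API-token>
  dataIngestToken: <your-data-ingest-token>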
To enable telemetry ingest endpoints, specify a list of desired protocols in the DynaKube field .spec.telemetryIngest.protocols. For details about the supported values, see our DynaKube reference documentation.
With an ActiveGate running in the Kubernetes cluster, the OpenTelemetry Collector will be configured to route all ingested data through the in-cluster ActiveGate instead of connecting directly to a public ActiveGate. Additionally, the capabilities required for telemetry ingest will automatically be enabled.
If no in-cluster ActiveGate is deployed (i.e., .spec.activeGate is not specified), the OpenTelemetry Collector will be configured to communicate directly with your Dynatrace tenant.
apiVersion: dynatrace.com/v1beta4
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  activeGate:
    capabilities:
      - kubernetes-monitoring
    replicas: 1
    resources:
      requests:
        cpu: 500m
        memory: 1.5Gi
      limits:
        cpu: 1000m
        memory: 1.5Gi
  telemetryIngest:
    protocols:
      - jaeger
      - zipkin
      - otlp
      - statsd
  templates:
    otelCollector:
      imageRef:
        repository: public.ecr.aws/dynatrace/dynatrace-otel-collector
        tag: <tag>
The OTel Collector image is sourced from our supported public registries; make sure the specified tag exists. Alternatively, you can use your private registry.
Once the DynaKube is applied, the Dynatrace Operator will deploy the Dynatrace OpenTelemetry Collector with the default image (configurable using .spec.templates.otelCollector.imageRef) and a Kubernetes service named <dynakube-name>-telemetry-ingest.dynatrace (configurable using .spec.telemetryIngest.serviceName) for telemetry ingest. The port number to use depends on the protocol your application supports. For the respective port numbers, see the reference below.
The following snippet shows how you can configure an application that is instrumented with the OpenTelemetry SDK by using an environment variable:
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: http://dynakube-telemetry-ingest.dynatrace.svc:4317
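For context, this env entry belongs in the container spec of your workload; a minimal sketch of a Deployment (application name and image are placeholders) could look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest   # placeholder image
          env:
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: http://dynakube-telemetry-ingest.dynatrace.svc:4317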
The following ports are open for telemetry data ingestion:
By default, the ingest endpoints operate in HTTP mode. If you want to encrypt the telemetry traffic by using HTTPS, you can reference a Kubernetes TLS secret via .spec.telemetryIngest.tlsRefName. The ingest endpoints will then be configured to use the referenced certificates and listen for HTTPS.
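For example, a standard Kubernetes TLS secret could be created and referenced; a minimal sketch, assuming a hypothetical secret named telemetry-ingest-tls containing your own certificate and key, could look like this:
apiVersion: v1
kind: Secret
metadata:
  name: telemetry-ingest-tls   # hypothetical name
  namespace: dynatrace
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
The DynaKube would then reference it via .spec.telemetryIngest.tlsRefName: telemetry-ingest-tls.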
The following snippet shows how you can configure an application that is instrumented with the OpenTelemetry SDK by using an environment variable:
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: https://dynakube-telemetry-ingest.dynatrace.svc:4318
By default, the service name for telemetry ingest is <dynakube-name>-telemetry-ingest.dynatrace. The service name can be customized by setting .spec.telemetryIngest.serviceName. The provided value is used as the service name, but the service remains in the namespace of the DynaKube, which is also where the Dynatrace OpenTelemetry Collector is deployed.
Be aware that having multiple DynaKubes with the same service name will cause service name collisions.
The endpoints are available at http://my-telemetry-service.dynatrace:<port>.
apiVersion: dynatrace.com/v1beta4
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  telemetryIngest:
    serviceName: my-telemetry-service
    protocols:
      - jaeger
      - zipkin
      - otlp
      - statsd
  templates:
    otelCollector:
      imageRef:
        repository: public.ecr.aws/dynatrace/dynatrace-otel-collector
        tag: <tag>
The OTel Collector image is sourced from our supported public registries; make sure the specified tag exists. Alternatively, you can use your private registry.
Any proxy specified in .spec.proxy will be propagated to the OpenTelemetry Collector via the environment variables HTTP_PROXY and HTTPS_PROXY. If an in-cluster ActiveGate is used, the URL of the in-cluster ActiveGate will automatically be added to the NO_PROXY environment variable to avoid unnecessary communication loops.
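A minimal sketch of the proxy configuration, assuming a plain proxy URL set via the value field of .spec.proxy (the proxy address is a hypothetical placeholder), could look like this:
apiVersion: dynatrace.com/v1beta4
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  proxy:
    value: http://proxy.example.com:3128   # hypothetical proxy URL
  telemetryIngest:
    protocols:
      - otlp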
If you need to use certificates for proxy communication, they can be specified in .spec.trustedCAs. System CAs from the OpenTelemetry Collector container image are loaded together with the CAs in trustedCAs. The system CAs contain the certificates required for communication with public ActiveGates.
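A minimal sketch of such a configuration, assuming trustedCAs references a ConfigMap in the dynatrace namespace that stores the PEM bundle under the certs key (see the DynaKube reference for the exact format), could look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-trusted-cas        # hypothetical name
  namespace: dynatrace
data:
  certs: |
    -----BEGIN CERTIFICATE-----
    <your CA certificate>
    -----END CERTIFICATE-----
The DynaKube would then reference it via .spec.trustedCAs: my-trusted-cas.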
When telemetry ingest is used with an in-cluster ActiveGate, ingested data is buffered on a PersistentVolume on the ActiveGate until the data has been transferred successfully. For this purpose, a PersistentVolumeClaim is mounted to the ActiveGate. The following example illustrates the default PVC configured for the ActiveGate by the Operator if no custom PVC is specified:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <ActiveGate-name>
  namespace: dynatrace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Please ensure a default storage class is defined. Otherwise, the PersistentVolumeClaim of the ActiveGate cannot be provisioned.
A custom PersistentVolumeClaim can be configured in .spec.activeGate.volumeClaimTemplate.
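For example, a DynaKube excerpt with a custom claim, assuming the field accepts a standard PersistentVolumeClaim spec and using a hypothetical storage class name, could look like this:
spec:
  activeGate:
    capabilities:
      - kubernetes-monitoring
    volumeClaimTemplate:
      storageClassName: <your-storage-class>   # hypothetical storage class
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi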
For test purposes, a PVC can be replaced by local ephemeral storage using .spec.activeGate.useEphemeralVolume.
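A sketch for such a test setup could look like this:
spec:
  activeGate:
    capabilities:
      - kubernetes-monitoring
    useEphemeralVolume: true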
Using .spec.activeGate.useEphemeralVolume is not recommended for production environments.
If an ActiveGate is shut down (for example, in scale-in scenarios), it needs some time to flush its buffers by sending all the buffered data to Dynatrace. In large environments, this can take a while, and Kubernetes could terminate the ActiveGate too early. To give the ActiveGate pod more time to shut down gracefully, you can increase the termination grace period via .spec.activeGate.terminationGracePeriodSeconds.
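For example, a DynaKube excerpt that gives the ActiveGate pod ten minutes to shut down (the value is an illustrative assumption; tune it to your environment) could look like this:
spec:
  activeGate:
    capabilities:
      - kubernetes-monitoring
    terminationGracePeriodSeconds: 600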