Emerging AI regulations such as the European Union Artificial Intelligence Act call for a comprehensive strategy combining organizational and AI model oversight, covering everything from model training to AI/user interactions.
When running your AI models through Amazon Bedrock, Dynatrace helps you to comply with regulatory record-keeping requirements.
In this tutorial, we first configure your model training and deployment observability. Afterward, we configure your application to observe user inference requests.
The general steps are as follows:
See below for the details of each step.
In this step, we create a Dynatrace token and we configure OpenPipeline to retain the data for 5+ years.
To create a Dynatrace token, add the following scopes:

- `bizevents.ingest`
- `metrics.ingest`
- `logs.ingest`
- `events.ingest`
- `openTelemetryTrace.ingest`
- `metrics.read`
- `settings.write`

You can only access your token once, upon creation. You can't reveal it afterward.
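To illustrate how the token is used, the `bizevents.ingest` scope lets you send business events with an `Api-Token` authorization header. A minimal sketch using only the standard library; the environment URL and token are placeholders, and the actual send is commented out:

```python
import json
import urllib.request

# Placeholders: substitute your environment URL and the token you just created.
DT_ENV = "https://<YOUR_ENV>.live.dynatrace.com"
DT_TOKEN = "<YOUR_DT_API_TOKEN>"

# An illustrative AI auditing event; field names beyond event.type are examples.
event = {"event.type": "gen_ai.auditing", "event.provider": "amazon.bedrock"}

request = urllib.request.Request(
    f"{DT_ENV}/api/v2/bizevents/ingest",
    data=json.dumps(event).encode("utf-8"),
    headers={
        "Authorization": f"Api-Token {DT_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request)  # send once real values are in place
```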
The default retention period for BizEvents is 35 days. Depending on the regulations, this might not be enough.
To change the retention period, you can create a custom Grail bucket.
Create a bucket named `gen_ai_events` for the `bizevents` table, with a retention period of 1,825 days (which is about 5 years).
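If you script the bucket creation instead of using the UI, the definition can be sketched as a JSON payload. The field names below are assumptions based on the Grail bucket-management API and should be verified against the current API reference:

```python
import json

# Assumed payload shape for a custom Grail bucket; verify field names
# against the Dynatrace bucket-management API reference before use.
bucket_definition = {
    "bucketName": "gen_ai_events",
    "displayName": "AI auditing events",
    "table": "bizevents",
    "retentionDays": 1825,  # about 5 years
}
payload = json.dumps(bucket_definition)
print(payload)
```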
When the bucket is available, we can configure OpenPipeline to redirect AI-relevant events to storage there.
Create a pipeline named `AI Data Governance`. In its storage stage, add a rule with the matching condition `true`, so that all records processed by the pipeline are stored in the `gen_ai_events` bucket.
Next, we route the ingestion of AI events to the pipeline.
Add a dynamic route named `AI Event Routing` that targets the pipeline, with the matching condition `matchesValue(event.type,"gen_ai.auditing")`.
Finally, to make it the first route evaluated, drag it up to the first row of the table.
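To illustrate what this route matches, the condition behaves like a simple equality check on the event type. The sketch below mirrors it in Python; `matches_route` is a hypothetical helper for illustration, not part of any API:

```python
def matches_route(event: dict) -> bool:
    """Mirrors the OpenPipeline matcher matchesValue(event.type, "gen_ai.auditing")."""
    return event.get("event.type") == "gen_ai.auditing"

print(matches_route({"event.type": "gen_ai.auditing"}))  # routed to the pipeline
print(matches_route({"event.type": "order.created"}))    # falls through to other routes
```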
Amazon Bedrock emits events for every configuration action executed, such as when you deploy a new model or when the fine-tuning of your model finishes.
We can set up a rule to forward these events to Dynatrace. Please refer to our integration with Amazon EventBridge using BizEvents to configure the rule.
The only change is in the `InputTemplate` field, where the property `"type"` should be set to `gen_ai.auditing`. This change is required to match the value that OpenPipeline uses to redirect the events to our Grail bucket.
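For example, an EventBridge input transformer along these lines would stamp the required type onto each forwarded event. Only the `"type"` property is prescribed above; the `InputPathsMap` keys and the remaining template fields are illustrative, so follow the linked integration guide for the full template:

```python
# Illustrative EventBridge input transformer. EventBridge substitutes
# <placeholder> tokens in InputTemplate with values from InputPathsMap.
input_paths_map = {"detail": "$.detail", "source": "$.source", "time": "$.time"}
input_template = (
    '{"type": "gen_ai.auditing", '
    '"source": <source>, "timestamp": <time>, "data": <detail>}'
)
print(input_template)
```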
We can leverage OpenTelemetry auto-instrumentation to collect traces and metrics from your AI workloads, in particular through our fork of OpenLLMetry.
The libraries utilized in this sample use case are currently under development and are in an alpha state. They may contain bugs or undergo significant changes. Use at your own risk. We highly value your feedback to improve these libraries. Please report any issues, bugs, or suggestions on our GitHub issues page.
To install, use the following command:
pip install -i https://test.pypi.org/simple/ dynatrace-openllmetry-sdk==0.0.1a4
Afterward, add the following code at the beginning of your main file:
```python
from traceloop.sdk import Traceloop

headers = {"Authorization": "Api-Token <YOUR_DT_API_TOKEN>"}
Traceloop.init(
    app_name="<your-service>",
    api_endpoint="https://<YOUR_ENV>.live.dynatrace.com/api/v2/otlp",
    headers=headers,
)
```
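Once initialized, Bedrock calls made through the AWS SDK are picked up by the auto-instrumentation. A hypothetical invocation sketch follows; the model ID and prompt are placeholders, and the boto3 call is commented out so the snippet stays self-contained:

```python
import json

# Illustrative request body for an Anthropic model on Bedrock; values are examples.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize our audit policy."}],
})

# With Traceloop.init() already called, this invocation would be traced:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-haiku-20240307-v1:0", body=body
# )
```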
And that's it!
Now you can: