OpenPipeline processing allows you to normalize span and metric data to prevent high-cardinality issues that can make aggregations and analysis unusable.
The following use cases show how to reduce cardinality in three different views in Services:
The Outbound calls tab displays aggregated metrics for external calls made by your service. High cardinality occurs when URLs contain unique identifiers in the path, such as /users/12345 or /orders/abc-def-123, which leads to the creation of many distinct values.
Processing rules help you transform these into normalized patterns, such as /users/* or /orders/*, optimizing your outbound call data.
Go to Settings > Process and contextualize > OpenPipeline > Spans.
Go to the Pipelines tab. Select Pipeline and enter a name (for example, Outbound call normalization) to create a new pipeline.
Go to Processing > Processor > DQL and configure a new processing rule for reducing the cardinality of the URL.
Enter the following new DQL processor to normalize URLs:
Matching condition:
span.kind == "client" and isNotNull(url.full)
DQL processor definition:
fieldsAdd url.full.orig = url.full
| fieldsAdd path_normalized = replacePattern(url.path, "UUIDSTRING", "[UUID]")
| fieldsAdd path_normalized = replacePattern(path_normalized, "[/]LONG", "/[Number]")
| fieldsAdd port = if(isNotNull(server.port), concat(":", server.port), else: null)
| fieldsAdd url.full = concat(url.scheme, "://", server.address, port, path_normalized)
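To see what this processor does to a URL, here is a rough Python sketch of the same transformation. The regex patterns are assumptions about what the UUIDSTRING and LONG matchers capture, and normalize_url is a hypothetical helper for illustration, not part of DQL:

```python
import re

def normalize_url(scheme, host, port, path):
    """Mimic the DQL processor: mask UUIDs and long numbers in the
    path, then rebuild the full URL from its parts."""
    # Mask UUID-shaped path segments (assumed equivalent of the UUIDSTRING matcher)
    path = re.sub(
        r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}",
        "[UUID]", path)
    # Mask numeric path segments (assumed equivalent of the /LONG matcher)
    path = re.sub(r"/\d+", "/[Number]", path)
    port_part = f":{port}" if port is not None else ""
    return f"{scheme}://{host}{port_part}{path}"

print(normalize_url("https", "api.example.com", 8080, "/users/12345"))
# https://api.example.com:8080/users/[Number]
```

Both /users/12345 and /users/67890 collapse to /users/[Number], so the Outbound calls tab aggregates them as one entry.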
Now that the processor is defined and saved, enable it by connecting it to OpenPipeline via a new dynamic route so that your pipeline receives span data.
On the Spans page, go to the Dynamic routing tab.
Select Dynamic route.
Define the dynamic route with the following matching condition:
span.kind == "client" and isNotNull(url.full)
Select Save.
Redis statements often include unique identifiers or values, for example, GET user:12345, GET user:12346, and GET user:12347 or SET order:123, SET order:124, and SET order:125. This high cardinality results in thousands of distinct entries shown in the Database queries view in Services.
Unlike parameterized SQL databases, where OneAgent or OpenTelemetry automatically handles normalization, Redis commands require manual cardinality handling via an OpenPipeline pipeline. In this section, we'll use a processing rule to transform these commands into normalized patterns such as GET or SET, making your Redis query data more actionable.
First, let's create a pipeline that contains a processing rule for reducing the cardinality of your Redis statements. As Services uses the db.query.text attribute, you'll adjust this attribute to normalize the Redis statements displayed in the Database queries view.
To create a pipeline
Name: DB statement normalization
Matching condition:
db.system == "redis" and (isNotNull(db.statement) or isNotNull(db.query.text))
db.system == "redis": Match all Redis database systems.
(isNotNull(db.statement) or isNotNull(db.query.text)): Require that either the old db.statement or the new db.query.text attribute is used in your Redis database spans.
db.statement is still common for some OpenTelemetry instrumentations.
DQL processor definition:
fieldsAdd db.query.text = coalesce(db.query.text, db.statement)
| fieldsAdd db.query.text.orig = db.query.text
| fieldsAdd blankPos = indexOf(db.query.text.orig, " ")
| fieldsAdd db.query.text = if (blankPos > 0, substring(db.query.text, from: 0, to: blankPos), else: "*")
fieldsAdd db.query.text = coalesce(db.query.text, db.statement): Make sure that the matching condition works for both the old db.statement and the new db.query.text attribute.
Many OpenTelemetry instrumentations still use db.statement instead of db.query.text. This line ensures that both fields are respected, as coalesce() returns the first non-null argument.
fieldsAdd db.query.text.orig = db.query.text: Store the original value of the db.query.text attribute.
While the low cardinality of db.query.text is important in Services, preserving the original value of the db.query.text attribute is beneficial and can be leveraged for further drill-downs.
fieldsAdd blankPos = indexOf(db.query.text.orig, " ") and fieldsAdd db.query.text = if (blankPos > 0, substring(db.query.text, from: 0, to: blankPos), else: "*"): Simplify the db.query.text attribute by extracting the new value from the start of the text up to the first blank space.
A Redis command is the part of the statement up to the first blank space, so the processor finds that blank space and keeps only the text before it as db.query.text.
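As an illustration only, the blank-space logic can be sketched in Python. Here normalize_redis_statement is a hypothetical helper mirroring the indexOf/substring combination:

```python
def normalize_redis_statement(text):
    """Keep only the text up to the first blank space (the Redis command),
    as the DQL if/substring combination does; '*' when there is no blank."""
    blank_pos = text.find(" ")  # equivalent of indexOf(..., " ")
    return text[:blank_pos] if blank_pos > 0 else "*"

print(normalize_redis_statement("GET user:12345"))  # GET
print(normalize_redis_statement("PING"))            # *
```

Note that, like the DQL processor, a statement without a blank space falls back to "*".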
You should see the DB statement normalization pipeline in the list of pipelines.
If you have other databases with high-cardinality database statements, add additional DQL processors to your DB statement normalization pipeline. Configure the processors according to your situation. For additional examples, see Explore other DQL processors.
Now that you have created a new pipeline and defined a cardinality-reducing DQL processor, activate this processor by connecting it to OpenPipeline via a dynamic route. This way, your pipeline can receive the span data.
To create a dynamic route
db.system == "redis" and (isNotNull(db.statement) or isNotNull(db.query.text))
You should see the DB statement normalization dynamic route in the list of dynamic routes.
As mentioned before, you can add additional DQL processors when you encounter high-cardinality database queries.
Check the sections below for three additional examples of the DQL processor. When you add a new processor, set Matching condition to db.system == "redis" and (isNotNull(db.statement) or isNotNull(db.query.text)), and set DQL processor definition to the value provided in one of the examples below.
Before introducing a new DQL processor, you can use Notebooks to check how a DQL processor transforms your data. Use the following DQL query to count the original number of Redis statements (before cardinality reduction).
DQL query in Notebooks:
fetch spans
| filter db.system == "redis" and (isNotNull(db.statement) or isNotNull(db.query.text))
| fieldsAdd db.query.text = coalesce(db.query.text, db.operation.name)
| summarize count(), by: { db.query.text }
| sort `count()` desc
This DQL processor replaces db.query.text with the first 15 characters + *. This option is good for quick cardinality reduction regardless of the content format.
DQL processor definition:
| fieldsAdd db.query.text = coalesce(db.query.text, db.operation.name)
| fieldsAdd db.query.text.orig = db.query.text
| fieldsAdd db.query.text = concat(substring(db.query.text, from: 0, to: 15), "*")
DQL query in Notebooks:
fetch spans
| filter db.system == "redis" and (isNotNull(db.statement) or isNotNull(db.query.text))
| fieldsAdd db.query.text = coalesce(db.query.text, db.operation.name)
| fieldsAdd db.query.text.orig = db.query.text
| fieldsAdd db.query.text = concat(substring(db.query.text, from: 0, to: 15), "*")
| summarize count(), by: { db.query.text }
| sort `count()` desc
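For illustration, the truncation option above can be sketched in Python. Here truncate_statement is a hypothetical helper; like the DQL processor, it appends * unconditionally:

```python
def truncate_statement(text, limit=15):
    """First `limit` characters plus '*', matching the substring/concat
    combination in the DQL processor definition."""
    return text[:limit] + "*"

print(truncate_statement("SET order:12345 some-value"))  # SET order:12345*
```

This reduces cardinality regardless of content format, at the cost of keeping part of the variable data when it falls within the first 15 characters.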
This DQL processor extracts the first word in db.query.text by cutting at the first space. This option is useful for keeping only the Redis commands, for example, GET or SET.
DQL processor definition:
| fieldsAdd db.query.text = coalesce(db.query.text, db.operation.name)
| fieldsAdd db.query.text.orig = db.query.text
| fieldsAdd blankPos = indexOf(db.query.text.orig, " ")
| fieldsAdd db.query.text = if(blankPos > 0, substring(db.query.text, from: 0, to: blankPos), else: "*")
DQL query in Notebooks:
fetch spans
| filter db.system == "redis" and (isNotNull(db.statement) or isNotNull(db.query.text))
| fieldsAdd db.query.text = coalesce(db.query.text, db.operation.name)
| fieldsAdd db.query.text.orig = db.query.text
| fieldsAdd blankPos = indexOf(db.query.text.orig, " ")
| fieldsAdd db.query.text = if(blankPos > 0, substring(db.query.text, from: 0, to: blankPos), else: "*")
| summarize count(), by: { db.query.text }
| sort `count()` desc
This DQL processor sets db.query.text to the value of db.operation.name. Use this option when you want to see a Redis operation instead of a database query text.
DQL processor definition:
| fieldsAdd db.query.text = coalesce(db.query.text, db.operation.name)
| fieldsAdd db.query.text.orig = db.query.text
| fieldsAdd db.query.text = if(isNotNull(db.operation.name), db.operation.name, else: "*")
DQL query in Notebooks:
fetch spans
| filter db.system == "redis" and (isNotNull(db.statement) or isNotNull(db.query.text))
| fieldsAdd db.query.text = coalesce(db.query.text, db.operation.name)
| fieldsAdd db.query.text.orig = db.query.text
| fieldsAdd db.query.text = if(isNotNull(db.operation.name), db.operation.name, else: "*")
| summarize count(), by: { db.query.text }
| sort `count()` desc
After you've created the DB statement normalization pipeline and activated it by creating a dynamic route, you should see that the cardinality of your Redis statements has been significantly reduced when you check the Database queries view in Services. For example, statements like GET as:1:swuq:abc-677d3b, SET as:1:rl:wf:d1d42f, or DECRBY as:1:paec:wis70158 now appear as GET, SET, and DECRBY. Such normalized Redis queries allow for easier and more effective analysis.
Services includes a Message processing view that aggregates metrics for messaging operations. High cardinality occurs when temporary queues are created with unique identifiers in their names (such as amq.gen-6dggtCpu, async-job-2jrmsi5y, or orders-reply-2n68vy4g), generating thousands of distinct queue names that make aggregations unusable.
Most instrumentations keep the cardinality of messaging.destination.name low by using non-standard fields like messaging.temp.queue.hash for high-cardinality data or by setting messaging.destination.temporary. However, when instrumentation doesn't follow these practices, OpenPipeline processing rules can normalize temporary queue names into patterns or flag them as temporary.
Before implementing normalization rules, query your spans to identify messaging systems with high percentages of unique destination names.
Go to Notebooks and create a new notebook.
Select New section > DQL.
Copy and paste the following query into the edit box and select Run.
fetch spans
| filter isNotNull(messaging.system) and isNotNull(messaging.destination.name)
| summarize count=count(), distinctCount=countDistinct(messaging.destination.name), by:{messaging.system, messaging.destination.temporary}
| fieldsAdd cardinality_ratio = toDouble(distinctCount) / toDouble(count)
Examine the results for high cardinality ratios.
Systems showing high cardinality ratios (above 0.5) without messaging.destination.temporary set indicate queues that need normalization or a temporary flag.
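To make the ratio concrete, here is a small Python sketch that computes the same count, countDistinct, and cardinality_ratio over a sample of span attributes. The sample data and the cardinality_ratios helper are invented for illustration:

```python
from collections import defaultdict

def cardinality_ratios(spans):
    """Group spans by messaging system and compute distinct-name / total,
    like the summarize/fieldsAdd combination in the DQL query above."""
    groups = defaultdict(lambda: {"count": 0, "names": set()})
    for span in spans:
        entry = groups[span["messaging.system"]]
        entry["count"] += 1
        entry["names"].add(span["messaging.destination.name"])
    return {system: len(e["names"]) / e["count"] for system, e in groups.items()}

# Hypothetical sample: unique temporary queues vs. one stable topic
spans = (
    [{"messaging.system": "rabbitmq", "messaging.destination.name": f"amq.gen-{i}"} for i in range(100)]
    + [{"messaging.system": "kafka", "messaging.destination.name": "orders"} for _ in range(100)]
)
print(cardinality_ratios(spans))  # rabbitmq -> 1.0 (high), kafka -> 0.01 (low)
```

In this sample, the rabbitmq destinations would be candidates for normalization, while the kafka topic needs no handling.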
You can use OpenPipeline processing rules to normalize temporary queue names into patterns or flag them as temporary.
To create a rule
Go to Settings and select Process and contextualize > OpenPipeline > Spans.
Go to the Pipelines tab and create a new pipeline by selecting Pipeline and entering a name (for example, Queue handling).
Choose whether to normalize temporary queue names into patterns or flag them as temporary.
On the Processing tab, select Processor and choose DQL.
To add/override the temporary queue flag, define the following new DQL processor:
Name: Temporary queue selector (or any name you like)
Matching condition: The following matches all messaging spans that were detected as not temporary but whose destination matches the pattern odaRequestQueue*, which we want to override as temporary.
messaging.destination.temporary == false and matchesPhrase(messaging.destination.name, "odaRequestQueue*")
DQL processor definition: The only action to perform is to overwrite the existing value from false to true.
fieldsAdd messaging.destination.temporary = true
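As a rough Python illustration of this processor, apply_temp_flag is a hypothetical helper, and the matchesPhrase pattern is approximated with a glob match:

```python
from fnmatch import fnmatch

def apply_temp_flag(span):
    """Flag destinations matching the odaRequestQueue* pattern as temporary,
    mirroring the matching condition and fieldsAdd override above."""
    if (span.get("messaging.destination.temporary") is False
            and fnmatch(span.get("messaging.destination.name", ""), "odaRequestQueue*")):
        span["messaging.destination.temporary"] = True
    return span

span = {"messaging.destination.name": "odaRequestQueue42",
        "messaging.destination.temporary": False}
print(apply_temp_flag(span)["messaging.destination.temporary"])  # True
```

Spans that don't match the pattern keep their original flag, so only the targeted destinations are reclassified.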
Select Save.
Now that the processor is defined and saved, enable it by connecting it to OpenPipeline via a new dynamic route so that your pipeline receives span data.
Still on the Spans page, go to the Dynamic routing tab.
Select Dynamic route.
Define the dynamic route with a matching condition for destinations starting with odaRequestQueue:
matchesPhrase(messaging.destination.name, "odaRequestQueue*")
Select Save.
After applying these rules, queues with high cardinality will either have messaging.destination.temporary set to true or have normalized names, significantly reducing metric cardinality in the Message processing view in Services. To verify this, see How to identify high cardinality above.