Kong AI Gateway

  • Concept
  • 2-min read
  • Published Jan 19, 2025

The Kong AI Gateway is a set of features built on top of Kong Gateway, designed to help developers and organizations adopt AI capabilities quickly and securely. It provides a normalized API layer that allows clients to consume multiple AI services from the same client code base.

Explore the sample dashboard on the Dynatrace Playground.

Enable monitoring

Ensure that the Kong Prometheus plugin is enabled and exposes AI LLM metrics.
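As a minimal sketch, the plugin can be enabled in Kong's declarative configuration. The ai_metrics flag shown here is an assumption to verify against your Kong Gateway version:

```yaml
# kong.yml — declarative configuration sketch.
# The ai_metrics flag is an assumption; confirm it is supported
# by your Kong Gateway version before relying on it.
_format_version: "3.0"
plugins:
  - name: prometheus
    config:
      ai_metrics: true   # expose the ai_llm_* metric families
```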

Follow the Set up Dynatrace on Kubernetes guide to monitor your cluster.

Afterwards, add the following annotations to your Kong Deployments:

  • metrics.dynatrace.com/scrape: "true"
  • metrics.dynatrace.com/port: "8100"
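Applied to a manifest, the annotations might look like the following sketch. The Deployment name, container details, and the placement of the annotations on the pod template are illustrative assumptions; adapt them to your own deployment:

```yaml
# Illustrative fragment only — names and images are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-gateway                      # placeholder name
spec:
  template:
    metadata:
      annotations:
        metrics.dynatrace.com/scrape: "true"
        metrics.dynatrace.com/port: "8100"  # Kong status/metrics port
    spec:
      containers:
        - name: proxy
          image: kong:3.6                 # placeholder image/tag
```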

Spans

The following attributes are available for GenAI Spans.

| Attribute | Type | Description |
|-----------|------|-------------|
| gen_ai.completion.0.content | string | The full response received from the GenAI model. |
| gen_ai.completion.0.content_filter_results | string | The filter results of the response received from the GenAI model. |
| gen_ai.completion.0.finish_reason | string | The reason the GenAI model stopped producing tokens. |
| gen_ai.completion.0.role | string | The role used by the GenAI model. |
| gen_ai.openai.api_base | string | GenAI server address. |
| gen_ai.openai.api_version | string | GenAI API version. |
| gen_ai.openai.system_fingerprint | string | The fingerprint of the response generated by the GenAI model. |
| gen_ai.prompt.0.content | string | The full prompt sent to the GenAI model. |
| gen_ai.prompt.0.role | string | The role setting for the GenAI request. |
| gen_ai.prompt.prompt_filter_results | string | The filter results of the prompt sent to the GenAI model. |
| gen_ai.request.max_tokens | integer | The maximum number of tokens the model generates for a request. |
| gen_ai.request.model | string | The name of the GenAI model a request is being made to. |
| gen_ai.request.temperature | double | The temperature setting for the GenAI request. |
| gen_ai.request.top_p | double | The top_p sampling setting for the GenAI request. |
| gen_ai.response.model | string | The name of the model that generated the response. |
| gen_ai.system | string | The GenAI product as identified by the client or server instrumentation. |
| gen_ai.usage.completion_tokens | integer | The number of tokens used in the GenAI response (completion). |
| gen_ai.usage.prompt_tokens | integer | The number of tokens used in the GenAI input (prompt). |
| llm.request.type | string | The type of the operation being performed. |

Metrics

Once monitoring is enabled as described above, the following metrics are available:

| Metric | Type | Unit | Description |
|--------|------|------|-------------|
| ai_llm_requests_total | counter | integer | AI requests total per ai_provider in Kong |
| ai_llm_cost_total | counter | integer | AI requests cost per ai_provider/cache in Kong |
| ai_llm_provider_latency_ms_bucket | histogram | ms | AI latencies per ai_provider in Kong |
| ai_llm_tokens_total | counter | integer | AI tokens total per ai_provider/cache in Kong |
| ai_cache_fetch_latency | histogram | ms | AI cache latencies per ai_provider/database in Kong |
| ai_cache_embeddings_latency | histogram | ms | AI cache embedding latencies per ai_provider/database in Kong |
| ai_llm_provider_latency | histogram | ms | AI provider latencies per ai_provider/database in Kong |
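If a Prometheus server also scrapes the same endpoint, these series can be aggregated with PromQL. As a hypothetical example, a recording rule computing the per-provider request rate might look like this (the ai_provider label name is inferred from the metric descriptions above, not confirmed):

```yaml
# prometheus-rules.yml — illustrative only; the ai_provider label
# name is inferred from the metric descriptions above.
groups:
  - name: kong-ai
    rules:
      - record: kong:ai_llm_requests:rate5m
        expr: sum by (ai_provider) (rate(ai_llm_requests_total[5m]))
```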

Additionally, the following metrics are reported.

| Metric | Type | Unit | Description |
|--------|------|------|-------------|
| gen_ai.client.generation.choices | counter | none | The number of choices returned by a chat completions call. |
| gen_ai.client.operation.duration | histogram | s | The GenAI operation duration. |
| gen_ai.client.token.usage | histogram | none | The number of input and output tokens used. |
| llm.openai.embeddings.vector_size | counter | none | The size of the returned vector. |