OpenInference OTEL Tracing
This documentation provides a guide on using OpenInference OTEL tracing decorators and methods for instrumenting functions, chains, agents, and tools using OpenTelemetry. These tools can be combined with, or used in place of, OpenTelemetry instrumentation code, and they are designed to simplify the instrumentation process.

Installation
Ensure you have Phoenix OTEL installed:
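For Python, matching the version requirement below:

```shell
pip install "arize-phoenix-otel>=0.16.0"
```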
Version
arize-phoenix-otel>=0.16.0 is required for the examples on this page. Starting in 0.16.0, phoenix.otel re-exports the OpenInference context managers (using_session, using_user, using_metadata, using_tags, using_attributes, using_prompt_template, suppress_tracing) and semantic conventions (SpanAttributes, OpenInferenceSpanKindValues, OpenInferenceMimeTypeValues) directly, so there is no need to also install openinference-instrumentation or openinference-semantic-conventions. On older versions, import the same symbols from openinference.instrumentation and openinference.semconv.trace instead.

Setting Up Tracing
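A minimal Python setup (the project name is illustrative):

```python
from phoenix.otel import register

# register() configures an OpenTelemetry tracer provider that exports
# spans to Phoenix; "my-app" is an illustrative project name
tracer_provider = register(project_name="my-app")
tracer = tracer_provider.get_tracer(__name__)
```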
Using Helpers
Your tracer object can now be used in two primary ways:

1. Tracing a function
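For example, using the chain decorator (the function body is illustrative):

```python
@tracer.chain
def my_func(input: str) -> str:
    # the entire function runs inside a CHAIN span
    return "output"
```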
Input and output attributes are set automatically based on my_func’s parameters and return value. The status attribute will also be set automatically.
2. As a with clause to trace specific code blocks
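For example, a minimal sketch (spans produced by this tracer expose set_input and set_output helpers):

```python
from opentelemetry.trace import Status, StatusCode

with tracer.start_as_current_span(
    "my-span-name",
    openinference_span_kind="chain",
) as span:
    span.set_input("input")       # recorded as the span's input attribute
    span.set_output("output")     # recorded as the span's output attribute
    span.set_status(Status(StatusCode.OK))
```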
OpenInference Span Kinds
OpenInference Span Kinds denote the possible types of spans you might capture, and they will be rendered differently in the Phoenix UI. The openinference.span.kind attribute is required for all OpenInference spans and identifies the type of operation being traced. The span kind provides a hint to the tracing backend as to how the trace should be assembled. Valid values include:
| Span Kind | Description |
|---|---|
| LLM | A span that represents a call to a Large Language Model (LLM). For example, an LLM span could be used to represent a call to OpenAI or Llama for chat completions or text generation. |
| EMBEDDING | A span that represents a call to an LLM or embedding service for generating embeddings. For example, an Embedding span could be used to represent a call to OpenAI to get an ada embedding for retrieval. |
| CHAIN | A span that represents a starting point or a link between different LLM application steps. For example, a Chain span could be used to represent the beginning of a request to an LLM application or the glue code that passes context from a retriever to an LLM call. |
| RETRIEVER | A span that represents a data retrieval step. For example, a Retriever span could be used to represent a call to a vector store or a database to fetch documents or information. |
| RERANKER | A span that represents the reranking of a set of input documents. For example, a cross-encoder may be used to compute the input documents’ relevance scores with respect to a user query, and the top K documents with the highest scores are then returned by the Reranker. |
| TOOL | A span that represents a call to an external tool such as a calculator, weather API, or any function execution that is invoked by an LLM or agent. |
| AGENT | A span that encompasses calls to LLMs and Tools. An agent describes a reasoning block that acts on tools using the guidance of an LLM. |
| GUARDRAIL | A span that represents calls to a component to protect against jailbreak user input prompts by taking action to modify or reject an LLM’s response if it contains undesirable content. For example, a Guardrail span could involve checking if an LLM’s output response contains inappropriate language, via a custom or external guardrail library, and then amending the LLM response to remove references to the inappropriate language. |
| EVALUATOR | A span that represents a call to a function or process performing an evaluation of the language model’s outputs. Examples include assessing the relevance, correctness, or helpfulness of the language model’s answers. |
Chains
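A chain step can be traced with the decorator shown earlier; a minimal sketch (the function body is illustrative):

```python
@tracer.chain
def build_prompt(user_query: str) -> str:
    # glue code between application steps runs inside a CHAIN span
    return f"Answer the question: {user_query}"
```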
Agents
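A sketch of an agent span (the body is illustrative; LLM and tool spans created inside it appear as children of the AGENT span):

```python
@tracer.agent
def run_agent(user_query: str) -> str:
    # a real agent would call an LLM and tools here
    return "answer"
```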
Tools
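A sketch of a tool span (the function is illustrative):

```python
@tracer.tool
def get_current_weather(city: str) -> str:
    """Return the current weather for the given city."""
    return "sunny"  # illustrative result
```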
LLMs
Like other span kinds, LLM spans can be instrumented either via a context manager or via a decorator pattern. It’s also possible to directly patch client methods. While this guide uses the OpenAI Python client for illustration, in practice you should use the OpenInference auto-instrumentors for OpenAI whenever possible and fall back to manual instrumentation of LLM spans only when necessary. To run the snippets in this section, set your OPENAI_API_KEY environment variable.
Context Manager
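A minimal sketch with the OpenAI client (the model name is illustrative; errors are recorded on the span):

```python
from openai import OpenAI
from opentelemetry.trace import Status, StatusCode

openai_client = OpenAI()
messages = [{"role": "user", "content": "Hello, world!"}]

with tracer.start_as_current_span("llm_span", openinference_span_kind="llm") as span:
    span.set_input(messages)
    try:
        response = openai_client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=messages,
        )
    except Exception as error:
        span.record_exception(error)
        span.set_status(Status(StatusCode.ERROR))
        raise
    span.set_output(response)
    span.set_status(Status(StatusCode.OK))
```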
Decorator
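A sketch using the llm decorator (reusing openai_client from above; the function body is illustrative):

```python
@tracer.llm
def invoke_llm(messages: list[dict]) -> str:
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=messages,
    )
    return response.choices[0].message.content or ""
```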
Method Patch
It’s also possible to directly patch methods on a client. This is useful if you want to transparently use the client in your application with instrumentation logic localized in one place; a sketch of this pattern follows the list below.

Several helper functions for constructing OpenInference span attributes are not re-exported by phoenix.otel; install openinference-instrumentation to access them:
- get_llm_attributes
- get_input_attributes
- get_output_attributes
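A minimal sketch of the basic patch (reusing the tracer and openai_client from earlier snippets):

```python
from opentelemetry.trace import Status, StatusCode

# keep a reference to the unpatched method
original_create = openai_client.chat.completions.create

def patched_create(*args, **kwargs):
    with tracer.start_as_current_span(
        "llm_span", openinference_span_kind="llm"
    ) as span:
        span.set_input(kwargs.get("messages"))
        response = original_create(*args, **kwargs)
        span.set_output(response)
        span.set_status(Status(StatusCode.OK))
        return response

# every call through the client now records an LLM span
openai_client.chat.completions.create = patched_create
```

The subsections below show how the helper functions fit each of the three instrumentation patterns.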
Context Manager
When using a context manager to create LLM spans, these functions can be used to wrangle inputs and outputs, as in the sketch below.
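A minimal sketch, assuming get_llm_attributes accepts input_messages and output_messages keyword arguments (parameter names may differ across versions):

```python
from openinference.instrumentation import Message, get_llm_attributes
from opentelemetry.trace import Status, StatusCode

messages = [{"role": "user", "content": "Hello, world!"}]

with tracer.start_as_current_span("llm_span", openinference_span_kind="llm") as span:
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=messages,
    )
    span.set_attributes(
        get_llm_attributes(
            # keyword argument names here are assumptions
            input_messages=[Message(role="user", content="Hello, world!")],
            output_messages=[
                Message(
                    role="assistant",
                    content=response.choices[0].message.content or "",
                )
            ],
        )
    )
    span.set_status(Status(StatusCode.OK))
```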
Decorator

When using the tracer.llm decorator, these functions are passed via the process_input and process_output parameters and should satisfy the following (a sketch follows the list):
- The input signature of process_input should exactly match the input signature of the decorated function.
- process_output takes a single argument, the output of the decorated function. This argument accepts the returned value when the decorated function is a sync or async function, or a list of yielded values when the decorated function is a sync or async generator function.
- Both process_input and process_output should output a dictionary mapping attribute names to values.
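A minimal sketch under those rules, assuming get_input_attributes and get_output_attributes each take a single value to serialize:

```python
from openinference.instrumentation import get_input_attributes, get_output_attributes

def process_input(messages: list[dict]) -> dict:
    # signature mirrors invoke_llm's signature exactly
    return get_input_attributes({"messages": messages})

def process_output(response) -> dict:
    # receives invoke_llm's return value
    return get_output_attributes(response.choices[0].message.content or "")

@tracer.llm(process_input=process_input, process_output=process_output)
def invoke_llm(messages: list[dict]):
    return openai_client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=messages,
    )
```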
Note that when the decorated function is a sync or async generator function, process_output should accept a single argument: a list of the values yielded by the decorated function.

Method Patch
As before, it’s possible to directly patch the method on the client. Just ensure that the input signatures of process_input and the patched method match.
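A sketch that wraps the patched method with the tracer.llm decorator, reusing process_output from above (this wrapping pattern is an assumption consistent with the parameters described earlier):

```python
from openinference.instrumentation import get_input_attributes

original_create = openai_client.chat.completions.create

def process_input(*args, **kwargs) -> dict:
    # must match the patched method's signature
    return get_input_attributes({"messages": kwargs.get("messages")})

openai_client.chat.completions.create = tracer.llm(
    process_input=process_input,
    process_output=process_output,
)(original_create)
```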
Additional Features

The OpenInference tracer shown above respects the context managers for suppressing tracing and adding metadata.

Suppress Tracing
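For example, reusing my_func from above:

```python
from phoenix.otel import suppress_tracing

with suppress_tracing():
    # no spans are emitted for anything called in this block
    my_func("input")
```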
Using Context Attributes
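For example, using the using_attributes context manager re-exported by phoenix.otel (the attribute values are illustrative):

```python
from phoenix.otel import using_attributes

with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
    metadata={"run": "demo"},
    tags=["tag-1", "tag-2"],
):
    # spans created here carry the session, user, metadata, and tag attributes
    my_func("input")
```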
Adding Images to your Traces
OpenInference includes message types that can be useful in composing text and image or other file inputs and outputs:
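A minimal sketch, assuming the Message, TextMessageContent, ImageMessageContent, and Image types and the get_llm_attributes helper from openinference-instrumentation (the image URL is illustrative):

```python
from openinference.instrumentation import (
    Image,
    ImageMessageContent,
    Message,
    TextMessageContent,
    get_llm_attributes,
)

image = Image(url="https://example.com/image.jpg")
contents = [
    TextMessageContent(type="text", text="Describe the weather in this image"),
    ImageMessageContent(type="image", image=image),
]
message = Message(role="user", contents=contents)

with tracer.start_as_current_span("llm_span", openinference_span_kind="llm") as span:
    # assumes get_llm_attributes accepts an input_messages keyword argument
    span.set_attributes(get_llm_attributes(input_messages=[message]))
```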

