Python Configuration
The Phoenix evals Python package uses an adapter pattern to wrap underlying client SDKs and provide a unified interface. Each adapter forwards parameters directly to the underlying client, so you can use the same configuration options as the native SDK.

- Client configuration parameters (e.g., `api_key`, `base_url`, `api_version`) are passed as `**kwargs` when creating the `LLM` instance. These configure the client itself.
- Model invocation parameters (e.g., `temperature`, `max_tokens`, `top_p`) are passed as `**kwargs` when creating an evaluator. These control how the model generates responses.
To create an `LLM`, specify:
- `provider`: The provider name (e.g., `"openai"`, `"azure"`, `"anthropic"`)
- `model`: The model identifier
- `client` (optional): Which client SDK to use if multiple are installed (e.g., `"openai"`, `"langchain"`, `"litellm"`)
- `sync_client_kwargs` (optional): Client configuration forwarded only to the sync client
- `async_client_kwargs` (optional): Client configuration forwarded only to the async client
- `**kwargs`: Client configuration parameters forwarded to both sync and async client constructors
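For example, constructing an `LLM` for OpenAI might look like the sketch below (the `phoenix.evals.llm` import path and the model name are assumptions; check the docs for your installed version):

```python
from phoenix.evals.llm import LLM  # import path assumed; may differ by version

# Client configuration kwargs are forwarded to the underlying SDK client.
llm = LLM(
    provider="openai",
    model="gpt-4o",       # model identifier (example)
    client="openai",      # optional: choose the SDK when several are installed
    api_key="sk-...",     # forwarded to both sync and async client constructors
)
```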
To check which providers are available in your environment, use the `show_provider_availability` function.
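A minimal check, assuming the function lives alongside `LLM`:

```python
from phoenix.evals.llm import show_provider_availability  # import path assumed

show_provider_availability()  # prints a table of providers and their status
```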
The `provider` column shows the supported providers, and the `status` column reads "Available" when the required dependencies are installed in the active Python environment. Note that multiple client SDKs can be used to make LLM requests to a provider; the desired client SDK can be specified when constructing the `LLM` wrapper.
OpenAI Adapter
- Client: `openai.OpenAI()` or `openai.AsyncOpenAI()`
- Invocation: `client.chat.completions.create()`
- Docs: OpenAI Python Client
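As a sketch, client configuration for this adapter is forwarded verbatim to the `openai` client constructors (import path assumed; the gateway URL is a placeholder):

```python
from phoenix.evals.llm import LLM  # import path assumed

# api_key and base_url are forwarded to openai.OpenAI(...) / openai.AsyncOpenAI(...)
llm = LLM(
    provider="openai",
    model="gpt-4o-mini",
    api_key="sk-...",
    base_url="https://my-gateway.example.com/v1",  # any OpenAI-compatible endpoint
)
```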
Azure OpenAI Adapter
- Client: `openai.AzureOpenAI()` or `openai.AsyncAzureOpenAI()`
- Invocation: `client.chat.completions.create()`
- Docs: Azure OpenAI Python SDK
Note: The `model` parameter should be your Azure deployment name.
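A hedged example using standard Azure OpenAI client settings (`azure_endpoint`, `api_version`, and `api_key` are constructor parameters of `openai.AzureOpenAI`); the deployment name, endpoint, and API version below are placeholders:

```python
from phoenix.evals.llm import LLM  # import path assumed

llm = LLM(
    provider="azure",
    model="my-gpt4o-deployment",  # your Azure deployment name, not the base model name
    # Forwarded to openai.AzureOpenAI(...) / openai.AsyncAzureOpenAI(...)
    azure_endpoint="https://my-resource.openai.azure.com",
    api_version="2024-06-01",
    api_key="...",
)
```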
LiteLLM Adapter
- Client: Lightweight wrapper (no traditional client object)
- Invocation: `litellm.completion()` or `litellm.acompletion()`
- Docs: LiteLLM Documentation
Note: Model names must use the provider route format `{provider}/{model}` (e.g., `"x-ai/grok-2"`).
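A sketch of routing through LiteLLM; the pairing of the `provider` value with the route-format model name here is an assumption, so verify it against the adapter's documentation:

```python
from phoenix.evals.llm import LLM  # import path assumed

llm = LLM(
    provider="openai",           # provider value assumed for this example
    model="openai/gpt-4o-mini",  # LiteLLM route format: {provider}/{model}
    client="litellm",            # select the LiteLLM adapter explicitly
)
```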
LangChain Adapter
- Client: LangChain chat model classes (e.g., `langchain_openai.ChatOpenAI`, `langchain_anthropic.ChatAnthropic`)
- Invocation: `client.invoke()` or `client.predict()`
- Docs: LangChain OpenAI, LangChain Anthropic
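For instance, selecting the LangChain adapter explicitly (assumes `langchain-openai` is installed; import path assumed):

```python
from phoenix.evals.llm import LLM  # import path assumed

# Wraps a LangChain chat model (e.g., langchain_openai.ChatOpenAI) under the hood
llm = LLM(
    provider="openai",
    model="gpt-4o",
    client="langchain",
)
```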
Anthropic Adapter
- Client: `anthropic.Anthropic()` or `anthropic.AsyncAnthropic()`
- Invocation: `client.messages.create()`
- Docs: Anthropic Python SDK
Note: `max_tokens` is required and defaults to 4096 if not specified when creating the evaluator.
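A sketch of overriding that default; the `create_classifier` helper, its import path, and its arguments are assumptions here, so adapt the evaluator-construction call to the API you are actually using:

```python
from phoenix.evals import create_classifier  # helper name and path assumed
from phoenix.evals.llm import LLM            # import path assumed

llm = LLM(provider="anthropic", model="claude-3-5-sonnet-latest")

evaluator = create_classifier(
    name="correctness",
    prompt_template="Is the answer correct?\n{input}\n{output}",
    llm=llm,
    choices={"correct": 1.0, "incorrect": 0.0},
    max_tokens=1024,  # invocation parameter; defaults to 4096 if omitted
)
```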
Google GenAI Adapter
- Client: `google.genai.Client()`
- Invocation: `client.models.generate_content()`
- Docs: Google GenAI Python SDK
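A hedged sketch; the `"google"` provider string and the model name are assumptions:

```python
from phoenix.evals.llm import LLM  # import path assumed

llm = LLM(
    provider="google",        # provider string assumed
    model="gemini-2.0-flash",
    api_key="...",            # forwarded to google.genai.Client(...)
)
```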
Separate Sync/Async Client Configuration
Some providers (OpenAI, Anthropic) create separate sync and async SDK clients internally. The `sync_client_kwargs` and `async_client_kwargs` parameters let you pass configuration that applies to only one client type, which is useful for:
- Different timeouts: Longer timeouts for async batch operations
- Different HTTP clients: Custom httpx clients for sync vs async
- Different retry configurations: More aggressive retries for batch async calls
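For example, with the OpenAI adapter (import path assumed; the timeout and retry values are illustrative):

```python
import httpx

from phoenix.evals.llm import LLM  # import path assumed

llm = LLM(
    provider="openai",
    model="gpt-4o",
    # Forwarded only to openai.OpenAI(...)
    sync_client_kwargs={"timeout": 30.0, "max_retries": 2},
    # Forwarded only to openai.AsyncOpenAI(...)
    async_client_kwargs={
        "timeout": 120.0,  # longer timeout for async batch runs
        "max_retries": 5,  # more aggressive retries for batch calls
        "http_client": httpx.AsyncClient(limits=httpx.Limits(max_connections=50)),
    },
)
```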
TypeScript Configuration
The TypeScript evaluation library uses the AI SDK's `LanguageModel` type for model abstraction. Models are created with the AI SDK provider functions and passed directly to evaluators.
Installation
Configuring Model Providers
Import and configure your model provider, then pass it to evaluators. The AI SDK reads API keys from environment variables (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`) by default, or you can pass configuration directly when creating the provider.
Using with LLM Evaluators
Invocation Parameters
Model invocation parameters (like `temperature`, `maxTokens`, etc.) are passed through to the underlying AI SDK `generateObject` call. However, the current TypeScript type definitions don't explicitly include these parameters in `CreateClassifierArgs` or `CreateClassificationEvaluatorArgs`, so TypeScript will show type errors if you try to pass them directly.
Note: Invocation parameters work at runtime (they are captured via the `...rest` spread in `createClassifierFn` and passed through to `generateObject`), but TypeScript will flag them at compile time. To use invocation parameters, apply a type assertion when constructing the evaluator, since the AI SDK does not support setting default invocation parameters at the model level.

