Phoenix helps you understand and improve AI applications by giving you a workflow for debugging and iteration:

- Send detailed logging information, known as traces, from your app to see exactly what happened during a run.
- Score outputs using evaluation tests to identify failures and regressions.
- Iterate on your prompts using real production examples.
- Optimize your app with experiments that compare changes on the same inputs.

Together, these tools help you move from inspecting individual runs to improving quality with evidence.

Phoenix is built by Arize AI and the open-source community. It is built on top of OpenTelemetry and is powered by OpenInference instrumentation. See Integrations for details.

Features

Tracing in Phoenix

Tracing lets you see what happened during a single run of your AI application, step by step. A trace captures model calls, retrieval, tool use, and custom logic so you can debug behavior and understand where time is spent. Phoenix accepts traces over OpenTelemetry (OTLP) and provides auto-instrumentation for popular frameworks (LlamaIndex, LangChain, DSPy, Mastra, Vercel AI SDK), providers (OpenAI, Bedrock, Anthropic), and languages (Python, TypeScript, Java).

Quick Starts

Running Phoenix for the first time? Select a quick start below.

Next Steps

The best way to learn Phoenix is to start using it. Begin with a quickstart to send data into Phoenix, then build from there. See the Quickstart Overview for more information about what you'll build.

Other Resources