Features
- Tracing
- Evaluation
- Prompt Engineering
- Datasets & Experiments
Quick Starts
Running Phoenix for the first time? Select a quick start below.

- Send Traces From Your App: See what’s happening inside your LLM application with distributed tracing
- Measure Performance with Evaluations: Measure quality with LLM-as-a-judge and custom evaluators
- Iterate on Your Prompts: Experiment with prompts, compare models, and version your work
- Optimize Your App with Experiments: Test your application systematically and track performance over time

