How to: Experiments
How to run experiments
How to upload a Dataset
How to run a custom task
How to configure evaluators
How to run the experiment
How to use repetitions in experiments
How to run an experiment over a dataset split
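The topics above fit together in a few lines of Python. The sketch below is a minimal, illustrative example only: the dataset name, column names, task logic, and experiment name are placeholders, and the `repetitions` argument is assumed to correspond to the repetitions feature listed above.

```python
import pandas as pd
import phoenix as px
from phoenix.experiments import run_experiment

# 1. Upload a dataset of input/expected-output pairs (names are illustrative).
df = pd.DataFrame(
    {
        "question": ["What is Phoenix?", "What is an experiment?"],
        "answer": ["An open-source LLM observability platform.", "A task run over a dataset."],
    }
)
dataset = px.Client().upload_dataset(
    dataset_name="experiments-howto",
    dataframe=df,
    input_keys=["question"],
    output_keys=["answer"],
)

# 2. Define a custom task: a function applied to every example in the dataset.
#    A real task would call your LLM app; this one just echoes the question.
def task(input):
    return f"You asked: {input['question']}"

# 3. Configure an evaluator: a plain function whose parameters (output, expected)
#    are bound by Phoenix from the task output and the example's expected output.
def contains_answer(output, expected) -> bool:
    return expected["answer"].lower() in output.lower()

# 4. Run the experiment. The repetitions argument (assumed here) re-runs the
#    task on each example to surface variability in the task's outputs.
experiment = run_experiment(
    dataset,
    task,
    evaluators=[contains_answer],
    experiment_name="experiments-howto-run",
    repetitions=1,
)
```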
How to use evaluators
LLM Evaluators
Code Evaluators
Custom Evaluators
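As a sketch of the difference between LLM and code evaluators, the snippet below wraps an eval model from `phoenix.evals` in an ordinary Python function and pairs it with a deterministic check. The model name and prompt wording are assumptions, not a prescribed template.

```python
from phoenix.evals import OpenAIModel

# LLM evaluator: judge the task output with an eval model.
# Parameter names (input, output) are bound by Phoenix per example.
judge = OpenAIModel(model="gpt-4o-mini")  # model choice is illustrative

def relevance(input, output) -> bool:
    prompt = (
        "Does the answer address the question? Reply 'yes' or 'no'.\n"
        f"Question: {input['question']}\n"
        f"Answer: {output}"
    )
    return "yes" in judge(prompt).lower()

# Code evaluator: a deterministic check with no LLM call.
def not_empty(output) -> bool:
    return bool(output and output.strip())
```

Either function can then be included in the `evaluators` list passed to `run_experiment`.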
Exporting Datasets