In this guide, we’ll set up tracing in Phoenix Cloud and walk through how to instrument an application. We’ll start by setting up a Phoenix Cloud instance, then create a simple agent, and finally send a single trace so we can see everything end to end. We’ll use the CrewAI framework in Python, but Phoenix works with many agent frameworks and orchestration libraries. You can find the full list of supported frameworks on our Integrations Page.

Before We Start

To follow along, you’ll need an OpenAI API key & a Serper API key. We’ll be using OpenAI as our LLM provider & Serper as the web search tool for our chatbot.

Step 1: Set Up Phoenix Cloud

Before we can send traces anywhere, we need Phoenix running. In this step, we’ll create a Phoenix Cloud account and configure it for our application. If you’d rather run Phoenix locally, you can follow the local setup guide instead.

Create a Phoenix Cloud Account

  1. Make a free Phoenix Cloud account.
  2. From the dashboard, click Create a Space in the upper-right corner.
  3. Enter a name for your new space.
  4. Once the space is created, launch your Phoenix instance directly from the dashboard.
  5. Create and save an API key. We’ll use this in the next step.
  6. Note your Hostname — this is the endpoint we’ll configure in code shortly.
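If you prefer, the API key and hostname from steps 5–6 can also be exported in your shell before launching the application (placeholder values shown; we’ll set the same variables from Python in the next step):

```shell
# Replace the placeholders with the values from your Phoenix Cloud dashboard.
export PHOENIX_API_KEY="<your-phoenix-api-key>"
export PHOENIX_COLLECTOR_ENDPOINT="<your-phoenix-hostname>"
```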

How to Create a Phoenix Cloud account & a space

How to Make an API Key

Step 2: Configure your Environment

Now that Phoenix is running, we need to connect our application to it so we can start sending traces. In this step, we’ll install the required dependencies and configure a few environment variables. This setup is what allows Phoenix to receive trace data from our application. Once it’s in place, running the application will automatically create a project in the Phoenix UI and record each traced run there. We’ll now install both the CrewAI package and the OpenInference CrewAI auto-instrumentation package, which handles tracing for us without requiring manual instrumentation.

Install Your Packages

%pip install -q arize-phoenix crewai crewai-tools openinference-instrumentation-crewai openai

Set Your API Keys

import os

os.environ["PHOENIX_API_KEY"] = "<ENTER YOUR PHOENIX API KEY>"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "<ENTER YOUR PHOENIX ENDPOINT>"
os.environ["SERPER_API_KEY"] = "<ENTER YOUR SERPER API KEY>"
os.environ["OPENAI_API_KEY"] = "<ENTER YOUR OPENAI API KEY>"
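Before moving on, it can help to sanity-check that every key is actually set. This small helper uses only the Python standard library and is not part of the Phoenix API:

```python
import os

# The four variables this tutorial relies on.
REQUIRED_KEYS = [
    "PHOENIX_API_KEY",
    "PHOENIX_COLLECTOR_ENDPOINT",
    "SERPER_API_KEY",
    "OPENAI_API_KEY",
]

def missing_keys(keys=REQUIRED_KEYS):
    """Return the names of any required environment variables that are unset or empty."""
    return [k for k in keys if not os.environ.get(k)]

missing = missing_keys()
if missing:
    print(f"Set these before continuing: {missing}")
```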

Register Your Project in Phoenix 

Next, we’ll register a tracer provider linked to a project in Phoenix. This project is where your traces will show up in the UI.
from phoenix.otel import register

tracer_provider = register(project_name="crewai-tracing-quickstart", auto_instrument=True)
At this point, your application is configured to send traces to Phoenix!

Step 3: Create your Agent

Now that Phoenix is running and our environment is configured, we can start building the application itself and generate real executions to send as traces to Phoenix. In this step, we’ll create a simple Financial Analysis and Research chatbot. This tutorial uses CrewAI, but you can build agents in any of the supported frameworks for automatic integration with Phoenix. This agent is made up of:
  • Two sub-agents: a Research agent and a Writer agent
  • Two tasks: one for financial research and one for generating a summary report
  • One tool: SerperDevTool for real-time web search

Define the Agents

We’ll start by defining the two agents that make up our crew & the tool the agents may use.
from crewai import Agent, Crew, Process, Task
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()

researcher = Agent(
    role="Financial Research Analyst",
    goal="Gather up-to-date financial data, trends, and news for the target companies or markets",
    backstory="""
        You are a Senior Financial Research Analyst.
    """,
    verbose=True,
    allow_delegation=False,
    max_iter=2,
    tools=[search_tool],
)

writer = Agent(
    role="Financial Report Writer",
    goal="Compile and summarize financial research into clear, actionable insights",
    backstory="""
        You are an experienced financial content writer.
    """,
    verbose=True,
    allow_delegation=True,
    max_iter=1
)

Define the Tasks & Tool

Next, we’ll define the tasks each agent is responsible for.
task1 = Task(
    description="""
        Research: {tickers}
        Focus on: {focus}
    """,
    expected_output="Detailed financial research summary with web search findings",
    agent=researcher,
)

task2 = Task(
    description="Write a report based on the research above.",
    expected_output="A polished financial analysis report",
    agent=writer,
)
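Note the {tickers} and {focus} placeholders in task1’s description: CrewAI fills these in from the inputs dict you pass to kickoff() later. Conceptually, the substitution behaves like Python’s str.format — the sketch below is a simplified stdlib illustration, not CrewAI’s actual implementation:

```python
# Hypothetical stand-ins for the task description and kickoff inputs.
description = "Research: {tickers}\nFocus on: {focus}"
inputs = {"tickers": "TSLA", "focus": "financial analysis and market outlook"}

# CrewAI interpolates placeholders in task descriptions with the kickoff
# inputs; str.format performs the same kind of substitution.
rendered = description.format(**inputs)
print(rendered)
```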

Create and Run the Crew

Finally, we’ll wire the agents and tasks together and run them sequentially.
crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=True,
    process=Process.sequential,
)
At this point, we have a working CrewAI setup with multiple agents, tasks, and a tool. In the next step, we’ll run the crew and see how its execution shows up as a trace in Phoenix!

Step 4: Look at the Trace in Phoenix

Now that we’ve defined our chatbot, all that’s left to do is run it and see what Phoenix captures. To run the agent, execute the following:
user_inputs = {
    "tickers": "TSLA",
    "focus": "financial analysis and market outlook"
}

result = crew.kickoff(inputs=user_inputs)
Once the run completes, head back to Phoenix and navigate to the Traces view. You should see a new trace corresponding to this run. Click into it to explore how the agents and tasks are executed. At this point, you can follow the full execution of the chatbot as a single trace in Phoenix. More importantly, you can now see how your application actually ran:
  • Which agents were invoked and in what order
  • How tasks flowed from one step to the next
  • Where time was spent across the workflow
This is something you couldn’t see before tracing. Instead of guessing how an agent run behaved or digging through logs, you now have a single, end-to-end view of each execution. Congratulations! You’ve sent your first trace to Phoenix.
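One way to see where time was spent is to compare span start and end times. Phoenix can export span data (for example, as a dataframe via its client); the sketch below assumes you have span records as plain dicts with ISO-8601 timestamps — hypothetical data, using only the standard library:

```python
from datetime import datetime

# Hypothetical exported span records; real Phoenix spans carry many more fields.
spans = [
    {"name": "Crew.kickoff", "start": "2024-01-01T00:00:00", "end": "2024-01-01T00:00:12"},
    {"name": "Financial Research Analyst", "start": "2024-01-01T00:00:00", "end": "2024-01-01T00:00:07"},
    {"name": "Financial Report Writer", "start": "2024-01-01T00:00:07", "end": "2024-01-01T00:00:12"},
]

def span_seconds(span):
    """Duration of a span in seconds, computed from its ISO-8601 start/end timestamps."""
    start = datetime.fromisoformat(span["start"])
    end = datetime.fromisoformat(span["end"])
    return (end - start).total_seconds()

# Print spans from slowest to fastest to spot where the time went.
for span in sorted(spans, key=span_seconds, reverse=True):
    print(f"{span['name']}: {span_seconds(span):.1f}s")
```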

Learn More About Traces

You’ve now sent a trace to Phoenix and seen how an agent run shows up from start to finish. A good next step is to run evaluations on your application to measure where it is working well and where it needs iteration to improve performance. Follow along with the Get Started guide for Evals to get even more value out of tracing. If you’d rather go deeper on tracing itself, the Tracing Tutorial walks through how to interpret traces in more detail, including how to read spans, understand timing, and use trace data to debug and analyze your application.