Building an Agent with CrewAI

CrewAI is a Python framework for orchestrating autonomous AI agents, letting you build multi-agent systems that collaborate to accomplish complex tasks. Because CrewAI agents can be driven by LangChain chat models, the framework integrates seamlessly with any OpenAI-compatible API, including Voidon.

In this tutorial, we will create a simple research "crew" consisting of two agents:

  1. A Researcher Agent that uses a search tool to find information.
  2. A Writer Agent that takes the researcher's findings and writes a brief report.

Why use a framework like CrewAI?

Instead of manually managing the conversation history and the tool-calling loop, CrewAI abstracts this complexity away. You simply define the agents, their tools, and their tasks, and the framework orchestrates the entire workflow.
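
To make that concrete, the sketch below shows the kind of hand-rolled tool-calling loop CrewAI spares you from writing. It talks to an OpenAI-compatible endpoint directly; the web_search stub and the single-question prompt are illustrative assumptions, not part of CrewAI or Voidon.

Python
# Rough sketch of a manual tool-calling loop (what CrewAI manages for you).
# The web_search() stub is a placeholder; a real tool would call a search API.
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://api.voidon.astramind.ai/v1",
    api_key="your-voidon-api-key"
)

def web_search(query: str) -> str:
    return f"(stubbed search results for: {query})"

tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for a query.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "Summarize the latest AI advancements."}]
while True:
    response = client.chat.completions.create(model="auto", messages=messages, tools=tools)
    message = response.choices[0].message
    if not message.tool_calls:       # no tool requested -> this is the final answer
        print(message.content)
        break
    messages.append(message)         # keep the assistant's tool request in the history
    for call in message.tool_calls:  # run each requested tool and feed the result back
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": web_search(**args),
        })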


Prerequisites

You will need to install crewai with its tools extra (which provides SerperDevTool for web searches via the serper.dev API) and langchain-openai for the ChatOpenAI wrapper. You will also need a Serper API key from serper.dev.

Bash
pip install 'crewai[tools]' langchain-openai

Implementation

Step 1: Set Up the LLM

First, we need to configure CrewAI to use the Voidon API endpoint. We do this by instantiating a ChatOpenAI object and passing our base_url and api_key.

Python
import os
from crewai import Agent, Task, Crew, Process
from langchain_openai import ChatOpenAI
from crewai_tools import SerperDevTool

# Set your Voidon API Key
# It is recommended to set this as an environment variable
os.environ["VOIDON_API_KEY"] = "your-voidon-api-key"
os.environ["SERPER_API_KEY"] = "Your Serper API Key" # serper.dev API key

# Configure the LLM to use the Voidon endpoint
voidon_llm = ChatOpenAI(
    model="auto",
    base_url="https://api.voidon.astramind.ai/v1",
    api_key=os.environ.get("VOIDON_API_KEY")
)
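
Before wiring up agents, you can optionally check that the endpoint is reachable by calling the model directly. invoke is the standard LangChain chat-model method; the exact reply will depend on the model Voidon routes to.

Python
# Optional sanity check: call the Voidon-backed model directly.
# invoke() is the standard LangChain chat-model call; the reply text will vary.
reply = voidon_llm.invoke("Reply with the single word: ready")
print(reply.content)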

Step 2: Define Tools

Agents need tools to interact with the outside world. Here, we'll give our researcher the Serper search tool, which reads the SERPER_API_KEY we set above.

Python
# Initialize the search tool
search_tool = SerperDevTool()
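
If you would rather not sign up for a Serper key, one keyless option is to wrap DuckDuckGo search as a LangChain tool and pass that to the agent instead. This is a sketch, not part of the main example: it assumes the duckduckgo-search and langchain-community packages are installed and that your CrewAI version accepts LangChain tools in the tools list.

Python
# Optional keyless alternative (assumes: pip install duckduckgo-search langchain-community,
# and a CrewAI version that accepts LangChain tools directly in tools=[...]).
from langchain_community.tools import DuckDuckGoSearchRun

search_tool = DuckDuckGoSearchRun()  # no API key required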

Step 3: Create the Agents

Now, we define our agents. Each agent has a role, a goal, a backstory, and the tools it can access. We must also assign the LLM instance to each agent.

Python
# Create a researcher agent
researcher = Agent(
  role='Senior Research Analyst',
  goal='Uncover cutting-edge developments in AI and data science',
  backstory="""You work at a leading tech think tank.
  Your expertise lies in identifying emerging trends.
  You have a knack for dissecting complex data and presenting
  actionable insights.""",
  verbose=True,
  allow_delegation=False,
  tools=[search_tool],
  llm=voidon_llm
)

# Create a writer agent
writer = Agent(
  role='Tech Content Strategist',
  goal='Craft compelling content on tech advancements',
  backstory="""You are a renowned Content Strategist, known for
  your insightful and engaging articles.
  You transform complex concepts into compelling narratives.""",
  verbose=True,
  allow_delegation=True,
  llm=voidon_llm
)

Step 4: Define the Tasks

Tasks are the specific assignments for each agent. We can chain tasks so that the output of one becomes the context for the next; with a sequential process this happens automatically, and it can also be declared explicitly, as shown after the code below.

Python
# Create tasks for the agents
research_task = Task(
  description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
  Identify key trends, breakthrough technologies, and potential industry impacts.
  Your final answer MUST be a full analysis report.""",
  expected_output='A comprehensive 3-paragraph summary of the latest AI advancements.',
  agent=researcher
)

write_task = Task(
  description="""Using the research analyst's report, develop an engaging blog post
  that highlights the most significant AI advancements.
  Your post should be informative yet accessible, catering to a tech-savvy audience.
  Make it sound cool, avoid complex words so it doesn't sound like AI.""",
  expected_output='A 4-paragraph blog post on the latest AI advancements.',
  agent=writer
)
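
With Process.sequential, the research output is passed to the writing task automatically because of the task order. If you want to make that dependency explicit (useful once a crew has more than two tasks), recent CrewAI releases also let a Task declare a context list of the tasks whose output it should receive; a minimal sketch:

Python
# Optional: declare the dependency explicitly instead of relying on task order.
write_task = Task(
  description="""Using the research analyst's report, develop an engaging blog post
  that highlights the most significant AI advancements.""",
  expected_output='A 4-paragraph blog post on the latest AI advancements.',
  agent=writer,
  context=[research_task]  # this task receives research_task's output as context
)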

Step 5: Assemble and Launch the Crew

Finally, we assemble our agents into a Crew and define the process (e.g., sequential). Then, we kick it off!

Python
# Instantiate your crew with a sequential process
crew = Crew(
  agents=[researcher, writer],
  tasks=[research_task, write_task],
  process=Process.sequential,
  verbose=2 # 1 or 2 for more detailed logs (recent CrewAI versions expect a boolean: verbose=True)
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print("Crew Final Result:")
print(result)

What's Happening Under the Hood?

  1. The Crew starts with research_task.
  2. The researcher agent, powered by the Voidon LLM, determines it needs to use the search_tool to accomplish its goal.
  3. CrewAI executes the search tool with the arguments the agent provided.
  4. The results are passed back to the researcher, who then formulates the final analysis.
  5. The output of research_task is passed to write_task.
  6. The writer agent uses this context to write the blog post.
  7. The final output of the last task is returned.
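
Once kickoff() returns, you can also inspect the intermediate results rather than just the final answer. The snippet below is a sketch: CrewAI populates task.output after execution, but the attributes on that output object vary between versions, so printing its string form is the most portable option.

Python
# Optional: inspect the researcher's intermediate report after the crew finishes.
# task.output is populated by CrewAI; its exact fields vary by version,
# so we print its string form here.
print("Researcher's report:")
print(research_task.output)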