CrewAI: A Practical Guide to Role-Based Agent Orchestration
Source: DigitalOcean
By Adrien Payong and Shaoni Mukherjee

CrewAI is a lightweight, fast Python framework for orchestrating autonomous AI agents that work together as a "crew" to complete tasks. It is built from the ground up with no heavy abstractions and is fully independent of LangChain and other agent libraries. CrewAI gives developers high-level simplicity with low-level control, and it is optimized for production-ready multi-agent workflows where reliability, observability, and cost efficiency matter.

This crash course walks you from "hello world" to a production-grade multi-agent workflow with CrewAI. You'll learn the key concepts, set up a project, code a real workflow, plug in some powerful tools, and pick up best practices for reliability and monitoring. By the end, you'll know how to decide whether CrewAI is right for your use case and how it compares with similar frameworks such as LangGraph and AutoGen. Let's get started!

What is CrewAI?

CrewAI is a role-based multi-agent framework for Python. It lets you declare multiple LLM-powered agents, each with a defined expertise and purpose, and organize them to collaborate on a structured workflow. A crew's members work semi-autonomously within their areas of specialization, and a separate coordinating process (a manager agent) can also be added to manage the workflow.

Organizing AI agents into a crew with clearly defined roles avoids making a single agent do everything, as simpler systems often do. For example, one agent may excel at research, another at writing, another at making decisions. CrewAI handles the messaging and coordination between agents, enabling them to collaborate on subtasks and produce a final result.

Prerequisites

- Python 3.10 to 3.13. CrewAI requires Python >=3.10 and <3.14. Check your version with python3 --version and upgrade Python first if needed.
- An LLM API key. CrewAI uses the OpenAI API by default, so have an OpenAI API key (or a key from another supported LLM provider) available.

Step 1: Install CrewAI

CrewAI is distributed through PyPI, so the core library can be installed with pip; this gives you CrewAI's package and CLI. If you want full support for tools, also install the optional tools package, which provides CrewAI's library of pre-built tools.

Note: the CrewAI CLI relies on uv, a fast Python package and environment manager that handles dependencies for you. Installing uv is optional but recommended by the CrewAI docs. After installation, check that everything works by verifying the version; you should see output like crewai v0.x.x confirming it's installed.

Step 2: Create a CrewAI project scaffold

CrewAI ships with a CLI to scaffold a new project with the recommended structure. In a terminal, run crewai create crew my_project_name, replacing my_project_name with the name you want to give your project. This generates a folder my_project_name/ with a ready-to-use template (the commands and resulting layout are sketched below).

The scaffold gives you a structured starting point. The agents.yaml and tasks.yaml files are where you declaratively specify your crew's configuration (roles, goals, task sequence, etc.), while main.py and crew.py are Python entry points in case you prefer code-based configuration. The .env file is where you store API keys (e.g., OPENAI_API_KEY=...) so they aren't hardcoded.
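The shell commands below cover Steps 1 and 2. They follow CrewAI's documented CLI; exact output and the generated layout may differ slightly between releases.

```bash
# Step 1: install the core library and CLI from PyPI
pip install crewai

# Optional: install the pre-built tools package for full tool support
pip install 'crewai[tools]'

# Optional but recommended: install the uv package/environment manager
pip install uv

# Verify the installation (prints something like "crewai v0.x.x")
crewai version

# Step 2: scaffold a new project (replace my_project_name with your own name)
crewai create crew my_project_name
```

A freshly scaffolded project typically looks like this (file names can vary slightly by version):

```
my_project_name/
├── .env                  # API keys, e.g. OPENAI_API_KEY=...
├── pyproject.toml
├── README.md
└── src/
    └── my_project_name/
        ├── main.py       # entry point that kicks off the crew
        ├── crew.py       # code-based crew definition
        ├── tools/        # custom tools live here
        └── config/
            ├── agents.yaml
            └── tasks.yaml
```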
Step 3: Install project dependencies

Inside your project directory, run crewai install. This command uses the CrewAI CLI to install any dependencies the project requires (setting up the environment, making sure the crewai-tools package is installed, and so on). If your project needs additional Python packages, add them with uv add <package> (if you are using uv) or with pip install as usual.

Step 4: Run your first crew

Now you're ready to execute the default example crew. Simply run crewai run. This runs the crew defined in your scaffold (by default, it may be a simple two-agent collaboration example). If everything is working, you should see the agents thinking and acting in the console.

Common installation gotchas: the most frequent problems are running a Python version outside the supported 3.10 to 3.13 range and forgetting to put your API key (e.g., OPENAI_API_KEY) in the .env file before running the crew.

Core concepts: Agents, Tasks, Tools, and Crews

It's worth looking at the four main building blocks of CrewAI. A CrewAI workflow consists of four main components: Agents, Tasks, Tools, and the Crew itself (the orchestrator that combines agents and tasks). A combined code sketch covering all four appears at the end of this section.

Agents

An Agent in CrewAI is an LLM-powered worker with a designated role, a goal to achieve, and (optionally) some background context. Agents work autonomously, making decisions within their area of expertise. For instance, you might have an agent with the role of Research Analyst whose goal is to research a topic, and another with the role of Writer whose goal is to produce a report. You create an agent by specifying its role, goal, and other parameters.

Tasks

A Task in CrewAI represents a unit of work to be done by an agent, including instructions and an expected outcome. You can think of it as assigning a to-do item to one of your agents. The key attributes of a Task are its description, its expected_output, the agent assigned to it, and (optionally) the context it receives from other tasks.

Tools

Tools are how you give agents additional capabilities. A Tool is essentially a function or an API integration that an agent can call as part of its reasoning process. For example, a web search tool lets an agent query the internet, and a database tool lets it fetch data. CrewAI comes with built-in tools (such as web search and file I/O tools) and also lets you define your own custom tools. Attaching tools to an agent means the agent can decide to use them as it deems necessary to achieve its goal.

A custom tool is simply a Python function (or class) decorated with @tool from CrewAI and made available to agents. In the sketch below, we create a tool named "Database Query": the docstring and function name give the agent context about what the tool does, a safety check rejects potentially harmful SQL commands, and at run time the agent could invoke query_database("SELECT * FROM Customers") through this tool. CrewAI supports different types of tools and interactions, from read-only lookups like this to tools that write files or call external services.

The Crew

The crew coordinates agents with tasks and controls the execution flow. When instantiating a Crew object, you provide the list of agents, the list of tasks, and the configuration of how you want the process to run. In the example below, we initialize a crew with three agents and three tasks and opt for a hierarchical process, where a manager is responsible for the workflow and assigns tasks to the worker agents. An input (topic: "AI Safety") is also provided, which is used to fill placeholders in task descriptions and agents' goals.
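Here is a minimal sketch that pulls the four building blocks together. It uses the public CrewAI API (Agent, Task, Crew, Process, the @tool decorator, and the optional crewai_tools package); the specific roles, the query_database helper, and the manager model name are illustrative assumptions rather than fixed requirements, and the import paths follow recent CrewAI releases.

```python
from crewai import Agent, Task, Crew, Process
from crewai.tools import tool
from crewai_tools import SerperDevTool, FileReadTool  # from the optional crewai[tools] package

# Built-in tools: web search (needs SERPER_API_KEY in .env) and file reading.
search_tool = SerperDevTool()
file_tool = FileReadTool()

# Custom tool: the "Database Query" example described above. The docstring and
# function name tell the agent what the tool does; the keyword check is a simple,
# illustrative guard against destructive SQL.
@tool("Database Query")
def query_database(sql: str) -> str:
    """Run a read-only SQL query against the customers database and return the rows."""
    forbidden = ("DROP", "DELETE", "UPDATE", "INSERT", "ALTER", "TRUNCATE")
    if any(word in sql.upper() for word in forbidden):
        return "Refused: only read-only SELECT statements are allowed."
    # A real implementation would execute the query here (e.g., via sqlite3 or SQLAlchemy).
    return f"(rows returned for: {sql})"

# Agents: each has a role, a goal, and optional backstory/context.
researcher = Agent(
    role="Research Analyst",
    goal="Research {topic} and gather accurate, current findings",
    backstory="A meticulous analyst who verifies sources before reporting.",
    tools=[search_tool, file_tool, query_database],
)
writer = Agent(
    role="Writer",
    goal="Turn the research findings into a clear report on {topic}",
    backstory="A technical writer who values clarity and brevity.",
)
reviewer = Agent(
    role="Reviewer",
    goal="Check the report on {topic} for accuracy and style",
    backstory="An exacting editor.",
)

# Tasks: units of work with instructions, an expected outcome, and an assigned agent.
research_task = Task(
    description="Research {topic} and collect the most important facts and sources.",
    expected_output="A bullet-point list of findings with sources.",
    agent=researcher,
)
writing_task = Task(
    description="Write a concise report on {topic} from the research findings.",
    expected_output="A ~500-word report in plain language.",
    agent=writer,
    context=[research_task],  # receives the researcher's output automatically
)
review_task = Task(
    description="Review the report on {topic} and suggest corrections.",
    expected_output="A final, corrected version of the report.",
    agent=reviewer,
    context=[writing_task],
)

# Crew: three agents, three tasks, hierarchical process with a manager LLM that
# plans the workflow and delegates tasks to the worker agents.
crew = Crew(
    agents=[researcher, writer, reviewer],
    tasks=[research_task, writing_task, review_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # assumed model name; any supported LLM works here
    verbose=True,
)

result = crew.kickoff(inputs={"topic": "AI Safety"})
print(result)
```

With Process.hierarchical, the manager LLM plans the work and delegates each task to a suitable worker agent; switching to Process.sequential simply runs the tasks in the order they are listed.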
Process types

CrewAI provides multiple execution strategies via the Process setting, most notably sequential (tasks run one after another in the order they are listed) and hierarchical (a manager agent or manager LLM plans the workflow and delegates tasks to the worker agents). Under the hood, crew.kickoff() directs the workflow according to the process type you chose.

Building a research-and-write workflow

Now that we've covered the basics of CrewAI's components, let's walk through a multi-agent workflow you could build. Say we want an automated workflow for researching and writing content: "Research a topic and generate a summary report." We will configure two agents: a Researcher that performs the research and a Writer that uses the discovered information to write a summary. This is a simple example of a real-world use case involving sequential collaboration, tool use (for the research), and handing results off between agents. Sketches of the YAML configuration and the Python orchestration code appear at the end of this walkthrough.

Step 1: Define the agents

We need a Researcher agent and a Writer agent, specified either in YAML (agents.yaml) or in Python. We assign the Researcher a WebSearchTool, a placeholder name standing in for web search capability (CrewAI ships with tools like SerperDevTool that actually perform Google searches through an API; you would get an API key and configure the tool as needed). The Writer doesn't need a tool; it uses the underlying LLM to generate text. Both agents set allow_delegation: false for simplicity's sake (we're not reassigning tasks to other agents in this flow). We could also set memory: true for each agent (or just memory: true at the crew level, which propagates to the agents) if we wanted them to retain information.

Step 2: Define the tasks and process

We want the Researcher to run first, then the Writer. This is a sequential process: Task 1 (research) -> Task 2 (writing). In CrewAI, the tasks can be defined in YAML (tasks.yaml). A few things to keep in mind:

- You can put placeholders in the description (e.g., {topic}) that get filled in when you execute the crew with parameters.
- The write_report task declares context: [do_research], which automatically provides the Writer agent with the Researcher's output as input. This is important: it is how data and context flow between agents.
- We set process: sequential to make the execution order explicit.
- The expected_output fields guide the agents (and can also be used for validation).

Step 3: Write the orchestration code (crew.py / main.py)

With the YAML config in place, running crewai run lets CrewAI perform the orchestration for us, but it's worth seeing what this looks like in Python to understand the internals. To execute the crew, we call crew.kickoff(inputs={"topic": "Climate Change Impacts"}), supplying the topic parameter that our task descriptions expect. CrewAI will then run the research task, pass its output as context to the writing task, and return the Writer's final report.

Step 4: Run and test the workflow

Run crewai run from the project directory (when using the CLI with YAML, the inputs are typically supplied in main.py) or call crew.kickoff(inputs={"topic": "Climate Change Impacts"}) in a Python script. Monitor the console output to see each agent's reasoning, its tool calls, and the final report.

Production tips for building workflows: keep API keys in .env rather than in code, set limits on agent iterations and execution time so a run cannot loop or spiral in cost, and log or trace each run so failures are easy to debug (reliability and observability are covered in more detail below).

Extending the example: you could easily expand this workflow by adding a third agent, such as an Editor agent that checks the Writer's report for quality and style adherence, or a FactChecker agent that verifies facts with a second web search. You'd add an additional task, create the necessary context links, and perhaps allow the Editor agent to request revisions (delegation) if necessary.
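Below is a minimal sketch of the configuration described in Steps 1 and 2, plus a code-first version of the orchestration from Step 3. Field names follow CrewAI's YAML conventions, but the concrete values (goals, backstories, the report length) are illustrative assumptions; the generated crew.py in the scaffold wires the same YAML up through CrewAI's @CrewBase decorators, while the code-first form shown here makes the internals easier to see.

```yaml
# config/agents.yaml
researcher:
  role: "Researcher"
  goal: "Find accurate, current information about {topic}"
  backstory: "A diligent research analyst who always cites sources."
  allow_delegation: false
  # the web search tool (e.g., SerperDevTool) is attached in crew.py

writer:
  role: "Writer"
  goal: "Write a clear summary report about {topic}"
  backstory: "A concise technical writer."
  allow_delegation: false
```

```yaml
# config/tasks.yaml
do_research:
  description: "Research {topic} and gather the most important facts and sources."
  expected_output: "A bullet-point list of key findings with sources."
  agent: researcher

write_report:
  description: "Write a summary report about {topic} using the research findings."
  expected_output: "A ~500-word report in plain language."
  agent: writer
  context:
    - do_research
```

```python
# crew.py (code-first equivalent of the YAML above)
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool  # real web search tool; needs SERPER_API_KEY in .env

search_tool = SerperDevTool()

researcher = Agent(
    role="Researcher",
    goal="Find accurate, current information about {topic}",
    backstory="A diligent research analyst who always cites sources.",
    tools=[search_tool],
    allow_delegation=False,
)
writer = Agent(
    role="Writer",
    goal="Write a clear summary report about {topic}",
    backstory="A concise technical writer.",
    allow_delegation=False,
)

do_research = Task(
    description="Research {topic} and gather the most important facts and sources.",
    expected_output="A bullet-point list of key findings with sources.",
    agent=researcher,
)
write_report = Task(
    description="Write a summary report about {topic} using the research findings.",
    expected_output="A ~500-word report in plain language.",
    agent=writer,
    context=[do_research],  # the Researcher's output flows to the Writer
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[do_research, write_report],
    process=Process.sequential,
)

if __name__ == "__main__":
    result = crew.kickoff(inputs={"topic": "Climate Change Impacts"})
    print(result)
```

Running python crew.py (or crewai run from the scaffold) executes the research task first and then hands its output to the Writer via the context link.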
Built-in tools

CrewAI includes a broad built-in tool library that helps agents interact with files, the web, databases, vector stores, and external services; the main categories cover file and document I/O, web search and scraping, database and data queries, retrieval over vector stores, and integrations with external services.

Integrations and advanced tools

CrewAI also integrates with the Model Context Protocol (MCP), which opens up an entire ecosystem of community-built tools behind a common interface. Enabling MCP integration means your agents can access thousands of tools running on MCP servers (complex data-processing services, specialized APIs, and so on). This requires installing the crewai-tools[mcp] extra and, optionally, running an MCP server or connecting to an existing one.

Reliability and observability

In production, you will want to account for a variety of failure modes: agents getting into infinite loops, tools firing erroneously, cost spirals, and so on. CrewAI provides patterns and built-in settings to improve reliability. The most important ones are capping each agent's iterations and execution time so it cannot loop indefinitely, rate-limiting LLM calls to keep costs predictable, and validating tool inputs so a misfired tool call cannot do damage; a sketch of these guardrails follows below.

For observability, the three most practical layers are: simple console visibility during development; structured tracing and external monitoring in production; and debug hooks plus evaluation to continuously improve reliability and output quality over time.
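As a rough illustration of those guardrails, here is a minimal sketch using CrewAI's built-in limits (max_iter, max_execution_time, and max_rpm on the agent, max_rpm and output_log_file on the crew). The specific numbers and the log file name are arbitrary choices for the example, not recommendations.

```python
from crewai import Agent, Task, Crew, Process

# Agent-level guardrails: bound iterations, wall-clock time, and request rate.
bounded_researcher = Agent(
    role="Researcher",
    goal="Research {topic} without runaway loops",
    backstory="Careful and budget-conscious.",
    max_iter=5,               # stop after at most 5 reasoning/tool-use iterations
    max_execution_time=120,   # abort this agent's work after 120 seconds
    max_rpm=10,               # at most 10 LLM requests per minute
    verbose=True,             # console visibility while developing
)

research = Task(
    description="Summarize the key risks around {topic} in five bullet points.",
    expected_output="Five concise bullet points.",
    agent=bounded_researcher,
)

# Crew-level guardrails and simple observability.
crew = Crew(
    agents=[bounded_researcher],
    tasks=[research],
    process=Process.sequential,
    max_rpm=20,                      # crew-wide request-rate cap
    output_log_file="crew_run.log",  # persist a log of the run for later debugging
)

result = crew.kickoff(inputs={"topic": "AI Safety"})
```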
CrewAI vs. LangGraph vs. AutoGen

Different frameworks have different philosophies, architectures, and features that may fit your use case. The three most popular are CrewAI, LangGraph, and AutoGen. In short, CrewAI emphasizes role-based crews and structured, task-oriented workflows; LangGraph models agent behavior as an explicit graph and excels at complex branching logic; and AutoGen centers on open-ended conversational collaboration between agents.

FAQ

What is CrewAI?

CrewAI is a framework designed to orchestrate multiple AI agents that work together using clearly defined roles, goals, and responsibilities. Unlike traditional single-agent setups, CrewAI emphasizes collaboration and task delegation, similar to how human teams operate. This makes it particularly effective for complex, multi-step workflows that require planning, execution, and validation.

How does CrewAI distribute work?

In CrewAI, each agent is assigned a specific role, such as researcher, writer, or reviewer, along with a defined objective. Tasks are distributed based on these roles, ensuring agents focus only on what they are best suited for. This structured approach improves efficiency, reduces redundancy, and produces more coherent outputs.

What is CrewAI used for?

CrewAI is widely used for content generation, research automation, data analysis pipelines, and AI-driven product workflows. It is especially useful in scenarios where tasks must be completed sequentially or collaboratively. Examples include building RAG systems, automating reports, and coordinating multiple LLM-powered tools.

Do I need prior experience to use CrewAI?

Basic familiarity with Python and large language models is helpful but not mandatory. CrewAI is designed to be developer-friendly, with clear abstractions for agents, tasks, and workflows. Beginners can start with simple examples and gradually move to more complex multi-agent orchestration patterns.

Can CrewAI integrate with existing tools?

Yes. CrewAI can be integrated with popular LLM providers, APIs, and external tools such as vector databases and workflow engines. This flexibility allows it to fit seamlessly into existing AI pipelines, so teams can enhance their current systems without rewriting everything from scratch.

Conclusion

CrewAI is most useful when you treat a "crew" as an engineered workflow rather than a chat experiment. Assign roles and task boundaries, ground key steps with tools, and enforce limits on runaway behavior so the system cannot degenerate into loops, failures, or surprise costs. Then instrument everything (logs, traces, and lightweight evaluations) so you can quickly debug issues and drive up quality release after release. If you need predictable multi-step automation with clear ownership and production controls, CrewAI is a strong default. However, if your problem is primarily complex branching logic or open-ended conversational collaboration, LangGraph or AutoGen may be a better fit.