© Ready Tensor, Inc.
Nov 17, 2025
Certified under the Multi-Agent System module in the Mastering AI Agents program.

Architecting Multi-Agent Systems With Groq and CrewAI

  • AAIDC2025
  • Agentic Systems
  • AI
  • crewai
  • Groq
  • LLM Orchestration
  • Multi-agent systems
  • favedev
    Favour Anwara


In the last few weeks, I have been exploring what I believe is the next natural evolution of artificial intelligence: AI systems that collaborate, rather than single models that merely respond.

Most AI tools today behave like advanced calculators; you ask a question, and a model returns an answer. While useful, this interaction model underutilizes the real potential of modern large language models. Real-world knowledge work is collaborative by nature: researchers gather information, writers synthesize ideas, and editors refine and validate outputs.

This project was built around that principle.

The result is a multi-agent research and writing system that behaves less like a chatbot and more like a coordinated AI team. Each agent has a defined role, shared context, and a structured workflow, allowing the system to research a topic, draft a long-form technical article, and refine it into a publication-ready output in under a minute.

A full multi-agent research and writing system powered by:

  • Groq’s ultra-fast Llama-3.3-70B
  • CrewAI’s structured agent orchestration
  • DuckDuckGo’s Instant Answer API for real web search
  • A custom Tkinter desktop UI to make it feel like real software

In practice, it feels less like “AI responding to me” and more like a small team of AI colleagues (researcher, writer, and reviewer) working together on a polished research article.

Let me break down how it works.

1. The Idea: AI That Works Like a Real Team

Instead of one model doing everything, I created three specialized agents, each with a specific responsibility:

The Researcher

  • Uses a custom DuckDuckGo search tool
  • Summarizes real-time AI or tech topics
  • Runs within rate limits using a tuned Groq LLM instance

The Writer

  • Takes the researcher’s findings
  • Turns them into a structured, technical 500+ word article
  • Writes clean, readable, industry-grade content

The Reviewer

  • Checks clarity, factual accuracy, structure
  • Polishes the output into a publication-ready article
  • Ensures the final result reads like something written by a senior editor

Each agent is powered by Groq’s ultra-fast inference, so tasks execute quickly without sacrificing depth.

CrewAI connects them in a sequential workflow, just like a production content team.

2. The Tech: What’s Under the Hood

The system uses:

Groq (via Python SDK)

  • Model: llama-3.3-70b-versatile
  • Manual rate-limit handling (with retries + backoff)
  • Controlled token usage to avoid unnecessary costs
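The retry-and-backoff logic can be sketched in plain Python; `with_backoff` and its parameters are illustrative names for this write-up, not part of the Groq SDK:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a zero-argument callable (e.g. a wrapped Groq chat
    completion request) with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:  # in practice, catch the SDK's rate-limit error
            if attempt == max_retries - 1:
                raise
            # Delay doubles each attempt; jitter avoids synchronized retries.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

Wrapping every Groq call this way keeps the pipeline stable when the researcher, writer, and reviewer fire requests in quick succession.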

CrewAI

  • Agent roles
  • Task chaining
  • Context passing (task1 → task2 → task3)
  • Process sequencing
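The chaining pattern is simple at heart. Stripped of CrewAI specifics, sequential context passing looks like this (the stage functions are hypothetical stand-ins for the three agents):

```python
def run_sequential(stages, topic):
    """Minimal sequential orchestration in the spirit of CrewAI's
    Process.sequential: each stage receives the previous stage's output."""
    context = topic
    for stage in stages:
        context = stage(context)
    return context

# Hypothetical stand-ins for the researcher, writer, and reviewer.
def research(topic):
    return f"Findings on {topic}"

def write(findings):
    return f"Draft article from: {findings}"

def review(draft):
    return f"Polished: {draft}"

article = run_sequential([research, write, review], "AI in Healthcare")
```

CrewAI adds the missing pieces on top of this shape: role-conditioned prompting, tool access, and automatic context injection between tasks.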

Custom Tools

I wrote a custom search tool using DuckDuckGo’s Instant Answer API:

```python
class WebSearchTool(BaseTool):
    def _run(self, query: str):
        ...
```

This allows the researcher to pull fresh information, not hallucinate outdated facts.
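A fuller sketch of the tool’s core, using only the standard library and DuckDuckGo’s Instant Answer endpoint (api.duckduckgo.com); the response-parsing helper is split out so it can be exercised without a network call, and inside CrewAI this logic would be the body of `WebSearchTool._run`:

```python
import json
import urllib.parse
import urllib.request

def extract_answer(payload: dict) -> str:
    """Pull the most useful text out of an Instant Answer response:
    the abstract if present, else the first related-topic snippet."""
    if payload.get("AbstractText"):
        return payload["AbstractText"]
    for topic in payload.get("RelatedTopics", []):
        if isinstance(topic, dict) and topic.get("Text"):
            return topic["Text"]
    return "No instant answer found."

def duckduckgo_search(query: str) -> str:
    """Query DuckDuckGo's Instant Answer API and return a text summary."""
    url = "https://api.duckduckgo.com/?" + urllib.parse.urlencode(
        {"q": query, "format": "json", "no_html": 1}
    )
    with urllib.request.urlopen(url, timeout=10) as resp:
        return extract_answer(json.load(resp))
```

The Instant Answer API returns a curated abstract for well-known topics and related-topic snippets otherwise, which is why the parser falls back in that order.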

Tkinter App

I wrapped everything in a desktop GUI so anyone can:

  • Enter a topic
  • Click “Run Agents”
  • Watch the output stream like logs in a console

It feels like a true mini-product: simple, clean, and powerful.

3. The Experience: What It Actually Does

When you enter a topic (e.g., “AI in Healthcare”), the system:

  1. Asks Groq to generate a quick 300-word summary
  2. Updates agent tasks dynamically based on your topic
  3. Kicks off the CrewAI workflow
  4. Lets the agents pass work to each other automatically
  5. Displays the final polished article in the output window

System Architecture and Methodology

The system is designed as a sequential agent pipeline, orchestrated using CrewAI. Instead of assigning all responsibilities to a single model, intelligence is decomposed into specialized agents, each optimized for a specific task.

The workflow progresses through three main stages:

  1. Research
  2. Writing
  3. Review
  4. Final Output

Each stage passes structured context to the next, ensuring continuity and coherence.

Agent Design and Responsibilities

Research Agent

The Research Agent is responsible for gathering and summarizing real-time information using DuckDuckGo’s Instant Answer API. It focuses on breadth and relevance rather than prose quality.

```python
research_agent = Agent(
    role="Researcher",
    goal="Research the given topic and summarize key technical insights",
    backstory="An expert AI research assistant",
    tools=[duckduckgo_search],
    llm=groq_llm,
    verbose=True,
)
```

This agent runs on Groq’s Llama-3.3-70B model, selected for its ultra-fast inference speed. Manual rate limiting and token constraints are applied to ensure cost control and system stability.

Writer Agent

The Writer Agent converts raw research into a structured, long-form technical article. It focuses on clarity, logical flow, and industry-grade tone.

```python
writer_agent = Agent(
    role="Writer",
    goal="Write a detailed, well-structured technical article from research input",
    backstory="A senior technical writer",
    llm=groq_llm,
    verbose=True,
)
```

The writer does not perform external searches. Instead, it relies entirely on the context produced by the Research Agent, reinforcing separation of concerns and reproducibility.

Reviewer Agent

The Reviewer Agent acts as an editor, refining the article for clarity, factual consistency, and publication quality.

```python
reviewer_agent = Agent(
    role="Reviewer",
    goal="Review and polish the article for accuracy, structure, and clarity",
    backstory="An experienced technical editor",
    llm=groq_llm,
    verbose=True,
)
```

This agent ensures the final output reads like a professionally edited article rather than a raw model response.

Why This Is Exciting

Because this is not “just another chatbot.”

This is an early look at how coordinated AI teams could take over many of today’s repetitive knowledge workflows:

  1. Research assistants
  2. Content teams
  3. Data analysts
  4. Technical documentation units
  5. Marketing research
  6. Reporting frameworks
  7. Policy brief generation

The power isn’t in the models; it’s in the coordination, the ability to structure intelligence.

What’s Next

This project is just the beginning. I’m already planning to:

  • Add memory so agents learn from previous tasks
  • Add web browsing tools for deeper research
  • Integrate voice input/output
  • Export finished articles as PDF or HTML
  • Deploy this as a web app with FastAPI

The vision?
A personal AI research team you can call on anytime.

Final Thoughts

This multi-agent system taught me something important:

AI becomes exponentially more powerful when you stop treating it like a single model and start treating it like a coordinated workforce.

If you’ve been curious about agents, orchestration frameworks, or building practical AI tools, this is one of the best places to start.

And honestly?
It’s insanely fun watching your own AI “team” work together right on your screen.


Code

  • AAIDC module 2.git
