For my Module 2 submission, I built Deep Research Crew, a fully autonomous multi-agent system capable of conducting in-depth web research, analyzing complex topics, and authoring professional reports.
The unique twist? It runs 100% locally.
Using Ollama to serve a quantized 3B parameter model (Qwen 2.5 3B) and CrewAI for orchestration, this project demonstrates that you don't need massive cloud GPUs to build effective agentic workflows.
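For context, here is a minimal sketch of how a local Ollama model can be handed to CrewAI; the model tag, endpoint, and temperature below are assumptions for illustration, not settings taken from the project:

```python
from crewai import LLM

# Minimal sketch (assumed settings): CrewAI's LLM wrapper can talk to a local
# Ollama server by prefixing the model name with "ollama/".
local_llm = LLM(
    model="ollama/qwen2.5:3b",          # assumed Ollama tag for the quantized Qwen 2.5 3B model
    base_url="http://localhost:11434",  # Ollama's default local endpoint
    temperature=0.2,                    # keep the small model conservative
)
```

Every agent in the crew is then pointed at `local_llm`, so no request ever leaves the machine.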
The objective was to design a modular team of AI agents that could:

- Search the web for up-to-date information on a given topic
- Analyze the raw findings and distill them into a structured outline
- Write a polished, professional report in Markdown
- Save the final report to the local filesystem
The system utilizes a sequential chain-of-thought pipeline involving four specialized agents:
```mermaid
graph TD
    User["User Input: 'Future of Solid State Batteries'"] --> Researcher
    Researcher("🕵️‍♂️ Senior Researcher") -->|Raw Data| Analyst("🧠 Content Strategist")
    Analyst -->|Structured Outline| Writer("✍️ Lead Writer")
    Writer -->|Final Markdown| Publisher("💾 File Utility")
    Publisher --> File["output/report.md"]
```
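In code, this maps onto CrewAI's sequential process. A sketch of the wiring, under the assumption that the four agents and their tasks are defined elsewhere in the project (variable names here are illustrative):

```python
from crewai import Crew, Process

# Sketch only: researcher, analyst, writer, publisher and their tasks are assumed
# to be defined in the project's agents/tasks modules.
crew = Crew(
    agents=[researcher, analyst, writer, publisher],
    tasks=[research_task, analysis_task, writing_task, publish_task],
    process=Process.sequential,  # each task's output is passed as context to the next agent
    verbose=True,
)

result = crew.kickoff(inputs={"topic": "Future of Solid State Batteries"})
```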
Armed with the DuckDuckGoSearchTool, the Researcher agent scours the web for recent developments, expert opinions, and raw data, while the Publisher saves the finished report with a custom SafeFileWriteTool. I implemented a sandbox mechanism to prevent the agents from modifying system files: the SafeFileWriteTool restricts file operations strictly to the project's output/ directory.
```python
import os
from crewai.tools import BaseTool  # import path may differ by CrewAI version

class SafeFileWriteTool(BaseTool):
    name: str = "Safe File Writer"
    description: str = "Write content to a file safely."

    def _run(self, filename: str, content: str) -> str:
        # ... existing code ...

        # --- FIXED SANDBOX LOGIC ---
        cwd = os.getcwd()
        output_dir = os.path.join(cwd, "output")
        # Resolve the target path against the output folder (assumed; the original
        # resolution step is elided above)
        file_path = os.path.abspath(os.path.join(output_dir, filename))

        # Security Check: Ensure the path is inside the output folder
        if not file_path.startswith(output_dir):
            return "Error: Access denied."

        # ... write operation ...
```
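On the research side, the DuckDuckGoSearchTool mentioned above could be a thin custom wrapper around the duckduckgo_search package; the project's actual implementation may differ, but a hypothetical sketch looks like this:

```python
from duckduckgo_search import DDGS
from crewai.tools import BaseTool  # import path may differ by CrewAI version

class DuckDuckGoSearchTool(BaseTool):
    name: str = "DuckDuckGo Search"
    description: str = "Search the web and return the top results for a query."

    def _run(self, query: str) -> str:
        # Pull a handful of results and return them as plain text for the agent to read.
        results = DDGS().text(query, max_results=5)
        return "\n".join(f"{r['title']}: {r['body']} ({r['href']})" for r in results)
```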
One of the biggest challenges was "hallucination" by the small 3B model. The Publisher agent initially tried to "talk" about saving the file rather than actually saving it.
I fixed this by implementing Strict Negative Constraints in the agent definition:
```python
# src/agents.py
publisher = Agent(
    role="File Publishing Utility",
    goal="Save the exact text provided by the writer to the specified filename.",
    backstory=(
        "You are a silent utility. You do not think, you do not summarize. "
        "You take the text given to you and you call the 'Safe File Writer' tool to save it. "
        "**Your only job is to use the tool with the filename provided in the task.**"
    ),
    tools=[write_tool],
    llm=local_llm,
    verbose=True,
    allow_delegation=False,
)
```
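Pairing the agent with an equally literal task keeps it on rails. A hypothetical companion task for illustration (the real one lives in the project's task definitions):

```python
from crewai import Task

publish_task = Task(
    description=(
        "Take the final Markdown report from the writer and save it verbatim to "
        "'report.md' using the 'Safe File Writer' tool. Do not edit or summarize the text."
    ),
    expected_output="Confirmation that report.md was written to the output/ directory.",
    agent=publisher,
)
```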
Initially, I attempted to build a Coding Crew (Planner -> Developer -> Reviewer) to write C and Java programs.
The Problem: The 3B parameter model (Qwen 2.5) struggled with the strict syntax and logic that code generation demands. It would often write code but forget to save it, or mix Python syntax into C code.
The Solution: I pivoted to a Research Crew. Small LLMs excel at language understanding, summarization, and formatting. By shifting the domain from Code Generation to Content Generation, the system became highly stable and effective, producing professional-grade Markdown reports.
The system successfully researched complex topics like "The Analysis of AI Agents in 2025" and "Solid State Battery Developments." An excerpt from the generated solid-state battery report:

> # The Promise and Challenges of Solid-State Batteries
>
> II. Key Themes