LearnMate is a multi-agent system that can spin up comprehensive, visually structured wikis on any topic in minutes. It brings together a team of specialized AI agents that research, plan, write, and add diagrams, so the final output feels like a clean wiki page built just for you. With two-phase research, parallel content and visual generation, and built-in checkpointing so you can pick up right where you left off, LearnMate is designed to be both powerful and practical. This publication explains the motivation, design, and methodology behind LearnMate, along with setup details, evaluation results, and future scope.
Pulling together knowledge from scratch is tedious: you hop between docs, blogs, and videos, and still end up with something unstructured. Even when resources are solid, they often lack diagrams or clear flow. Meanwhile, single-agent AI tools can draft text but fall short on structured outputs and visuals.
LearnMate addresses this challenge by orchestrating multiple agents with distinct roles: research, planning, writing, and visual design. The result is a well-organized wiki that combines textual depth with diagrams, images, and tables. By the end of this publication, readers will understand how LearnMate is built, how to set it up and use it, and how its design decisions enable modularity, resilience, and speed.
Figure 1: Streamlit interface with topic input, wiki viewer, and state upload options.
Figure 2: The generated wiki showing an image pulled from the web by the Design agent.
Figure 3: The generated wiki showing a table and mermaid diagram generated by the Design agent.
This publication provides a comprehensive overview of LearnMate's design and operation: its motivation, architecture, setup, evaluation, and future scope.
LearnMate is designed for:
It is particularly useful for:
Tip: Users can upload a saved state file in the Streamlit UI to resume from a specific step, or even edit the state file to retry from that node.
Prerequisites: Python 3.11+, a Tavily API key, an OpenAI or Groq API key, and basic familiarity with the CLI or Streamlit.
Set up dependencies (choose one):
# Recommended: using uv
pip install uv                     # if uv is not installed
uv venv
source .venv/bin/activate          # macOS/Linux
# or: .venv\Scripts\activate       # Windows
uv sync

# Alternative: using pip
python -m venv .venv
source .venv/bin/activate          # macOS/Linux
# or: .venv\Scripts\activate       # Windows
pip install -r requirements.txt
Run the Streamlit UI:
streamlit run frontend/app.py
This launches a browser interface where you can enter a topic, generate a wiki, and manage saved states.
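For orientation, here is a minimal sketch of this kind of Streamlit front end, with a topic input and an optional state upload. It is not the repository's actual frontend/app.py; the widgets and labels are stand-ins for the real interface.

```python
# Illustrative sketch only; not the repository's frontend/app.py.
import json
import streamlit as st

st.title("LearnMate")

topic = st.text_input("Topic", placeholder="e.g. Vector databases")
state_file = st.file_uploader("Resume from a saved state (optional)", type="json")

if st.button("Generate wiki"):
    # If a checkpoint was uploaded, the pipeline would resume from it instead of starting over.
    state = json.load(state_file) if state_file else None
    st.info(f"Would generate a wiki for '{topic}'"
            + (" from the uploaded checkpoint." if state else " from scratch."))
```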
Resume from a saved state (optional):
The pipeline saves state after each successful node. If a run is interrupted, you can resume from the last checkpoint by uploading the saved state file from the sidebar.
The saved state file will be placed at outputs/Your_Topic/saved_wiki_state.json.
You can also manually edit the state file to modify or retry specific steps.
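For illustration, the checkpoint can be pictured as a JSON snapshot of intermediate results plus a marker for the last completed node. The helpers below sketch that pattern; the file schema and key names (e.g. last_completed_node) are assumptions for this example, not the actual format written by the pipeline.

```python
import json
from pathlib import Path

STATE_PATH = Path("outputs/Your_Topic/saved_wiki_state.json")

def save_state(state: dict) -> None:
    """Persist the pipeline state after a node finishes successfully."""
    STATE_PATH.parent.mkdir(parents=True, exist_ok=True)
    STATE_PATH.write_text(json.dumps(state, indent=2))

def load_state() -> dict:
    """Load the last checkpoint so a run can resume from the next node."""
    return json.loads(STATE_PATH.read_text())

# Example: rewind to the research step by editing the (hypothetical) marker key,
# so that planning and everything after it is retried on the next run.
if STATE_PATH.exists():
    state = load_state()
    state["last_completed_node"] = "research"
    save_state(state)
```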
Configuration is managed in config/config.yaml:

- research_model_llm: model for the initial research phase.
- planner_model_llm: model for planning the content structure.
- content_writer_model_llm: model for generating wiki text.
- design_coder_model_llm: model for generating diagrams/tables.
- mermaid_api_base_url: rendering service URL (default https://mermaid.ink; can be changed to a local server such as http://localhost:3000).
- reasoning_strategies: named strategies (e.g., CoT, ReAct, Self-Ask) referenced in prompts.

UI theme customization via .streamlit/config.toml: primaryColor, backgroundColor, secondaryBackgroundColor, textColor, font.

Environment variables (.env or system):

- TAVILY_API_KEY (required)
- OPENAI_API_KEY or GROQ_API_KEY (at least one is required)
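As a rough illustration of how these settings might be consumed, the sketch below loads config/config.yaml with PyYAML and builds a mermaid.ink image URL from mermaid_api_base_url. The key names come from the list above; the loading code and the base64 /img URL scheme are assumptions for illustration, not the repository's actual implementation.

```python
import base64
import yaml  # PyYAML

# Illustrative only: key names match the documented config; everything else is assumed.
with open("config/config.yaml") as f:
    cfg = yaml.safe_load(f)

print("Research model:", cfg["research_model_llm"])
print("Writer model:", cfg["content_writer_model_llm"])

# One common way to render a Mermaid diagram via mermaid.ink: base64-encode the
# diagram definition and request /img/<encoded> from the configured base URL.
diagram = "graph LR; Research-->Planning; Planning-->Writing; Planning-->Design"
encoded = base64.urlsafe_b64encode(diagram.encode("utf-8")).decode("ascii")
print(f"{cfg['mermaid_api_base_url']}/img/{encoded}")
```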
Existing projects like LangChain Agents, AutoGPT, and BabyAGI highlight the potential of LLM-powered workflows. However, they often lack structured orchestration across specialized roles, produce text-only output without diagrams or tables, and offer little state management for recovering interrupted runs.
LearnMate bridges these gaps by combining structured orchestration, visual generation, and robust state management.
Figure 4: Multi-Agent Architecture Flowchart
User Input → Research → Planning → (Content Writing || Visual Design) → Merging → Final Wiki
A flowchart (available in the repository) illustrates how research feeds into planning, which then branches into parallel writing and design before merging into the final output.
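To make the branch-and-merge pattern concrete, here is a minimal asyncio sketch of the flow above. The agent functions are placeholders, and the two-phase research is collapsed into one call; LearnMate's actual orchestration (and any graph framework it uses) lives in the repository.

```python
import asyncio

# Placeholder agents standing in for the real LLM-backed ones; illustrative only.
async def research(topic: str) -> str:
    return f"notes about {topic}"

async def plan(notes: str) -> list[str]:
    return ["Introduction", "Architecture", "Usage"]

async def write_content(outline: list[str]) -> str:
    return "\n\n".join(f"## {section}\n(text)" for section in outline)

async def design_visuals(outline: list[str]) -> str:
    return "graph LR; Research-->Planning; Planning-->Writing; Planning-->Design"

async def build_wiki(topic: str) -> str:
    notes = await research(topic)            # research phase
    outline = await plan(notes)              # planning phase
    text, visuals = await asyncio.gather(    # writing and design run concurrently
        write_content(outline),
        design_visuals(outline),
    )
    return text + "\n\n" + visuals           # merge into the final wiki

print(asyncio.run(build_wiki("Multi-agent systems")))
```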
Interrupted runs can be resumed from the last checkpoint via the CLI (--state) or the Streamlit UI.

Writer and designer agents run in parallel, significantly reducing total runtime.
A multi-agent setup was chosen for modularity and resilience. A monolithic pipeline was tested but proved harder to extend and debug.
Models, reasoning strategies, and rendering settings are centralized in config.yaml, and dependencies are managed with uv or pip.

Compared to a single-agent baseline, LearnMate outputs were more structured, easier to navigate, and visually informative.
LearnMate demonstrates how multi-agent AI can transform a simple topic request into a polished wiki complete with structured text and diagrams. Its modular design, checkpointing, and parallelism provide a solid blueprint for future agentic workflows in knowledge-intensive domains.