# EduSync Agents – Multi-Agent AI Study Buddy

A modular LangChain + LangGraph multi-agent system that researches a topic, generates quizzes, and explains answers, with a Streamlit UI and LangSmith tracing.
## Problem Statement
- Many students struggle to find concise explanations, relevant practice questions, and clear reasoning for a topic, all in one place.
- Existing tools can be overwhelming or lack interactive, testable learning loops.
- There is a need for a cohesive research → quiz → explanation pipeline with transparent orchestration and observability.
## Solution Overview
- Multi-agent architecture with three specialized agents:
  - Research Agent: produces clear, concise explanations and practical examples.
  - Quiz Agent: generates relevant quiz questions with answers.
  - Explainer Agent: provides step-by-step clarifications for each answer.
- LangGraph orchestration with in-memory checkpointing and thread ID support.
- Streamlit UI for a modern, interactive front-end.
- LangSmith integration for tracing, debugging, and observability.
## Key Features
- Research → Quiz → Explain pipeline with modular agents.
- Wikipedia and math tool integration.
- Groq LLM support; the model is set via `.env`.
- Streamlit UI with a customizable theme.
- Deterministic fallbacks: outputs are never empty.
- LangSmith tracing and experiment tracking.
## Architecture
- LLM: Groq (configurable via `GROQ_MODEL`).
- Orchestration: LangGraph + MemorySaver checkpointing.
- Agent framework: LangChain.
- UI: Streamlit.
- Tracing & observability: LangSmith.
- Utilities: Wikipedia API, basic math evaluation.
## Components

### Agents

#### Research Agent
- Input: topic (plus a Wikipedia summary when available)
- Output: concise explanation (80–150 words) plus two examples
- LLM settings: Groq, temperature 0.2

#### Quiz Agent
- Input: research summary
- Output: 5 questions with answers in “Q: … A: …” format
- LLM settings: Groq, temperature 0.4

#### Explainer Agent
- Input: each Q/A line
- Output: step-by-step explanation
- LLM settings: Groq, temperature 0.3
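Since the Explainer Agent consumes the quiz line by line, the “Q: … A: …” convention above has to be split into pairs. A minimal parsing sketch (the function name and regex are illustrative, not taken from the codebase):

```python
import re

def parse_quiz(text: str) -> list[tuple[str, str]]:
    """Split quiz text in "Q: ... A: ..." format into (question, answer) pairs."""
    pairs = []
    # Each question block starts with "Q:"; the answer follows after "A:".
    for block in re.split(r"\n(?=Q:)", text.strip()):
        m = re.match(r"Q:\s*(.+?)\s*A:\s*(.+)", block, re.DOTALL)
        if m:
            pairs.append((m.group(1).strip(), m.group(2).strip()))
    return pairs

quiz = "Q: What is 2 + 2? A: 4\nQ: Capital of France? A: Paris"
print(parse_quiz(quiz))  # → [('What is 2 + 2?', '4'), ('Capital of France?', 'Paris')]
```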
- Wikipedia Tool: Fetches concise summaries using the Wikipedia REST API.
- Math Tool: Minimal, restricted `eval` for basic arithmetic.
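A restricted evaluator is far safer than raw `eval`; one common approach (a sketch, not the project's exact implementation) whitelists arithmetic AST nodes and rejects everything else:

```python
import ast
import operator

# Whitelist of permitted arithmetic operators; anything else is rejected.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a basic arithmetic expression without exposing full eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("Disallowed expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("2 * (3 + 4)"))  # → 14
```

Function calls, attribute access, and names never match a whitelisted node, so inputs like `__import__('os')` raise `ValueError` instead of executing.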
### Orchestrator
- Flow: research → quiz → explain → END
- Checkpointer: in-memory (MemorySaver)
- API:
  - `run_session(topic)`: Executes the entire pipeline and returns results.
  - `orchestrate_session()`: CLI wrapper.
- Reliability: Strong prompting and retry/guard logic; deterministic fallbacks avoid empty outputs.
## User Interface
- Sidebar: Topic input, model display (auto-set from `.env`), run button.
- Tabs:
  - Research
  - Quiz
  - Explanations
  - Raw (shows the returned JSON)
- Theming: Custom Streamlit palette via configuration.
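Streamlit reads its theme from `.streamlit/config.toml`; a sketch with placeholder colors (the project's actual palette is not shown here):

```toml
# .streamlit/config.toml (example palette only)
[theme]
primaryColor = "#4F8BF9"
backgroundColor = "#FFFFFF"
secondaryBackgroundColor = "#F0F2F6"
textColor = "#262730"
font = "sans serif"
```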
## Configuration

### Environment Variables (`.env`)

```
GROQ_API_KEY=your_groq_api_key_here
GROQ_MODEL=llama-3.3-70b-versatile
LANGSMITH_API_KEY=your_langsmith_key_here
LANGSMITH_PROJECT=edu-sync-agents
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
```

Easily switch Groq models by changing `GROQ_MODEL`; no code edits needed.
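At startup the app can resolve the model name from the environment. A minimal sketch using `os.getenv` (in the real app, python-dotenv's `load_dotenv()` would populate the environment from `.env` first; here the variable is set directly for illustration):

```python
import os

# Simulate what load_dotenv() would do; illustration only.
os.environ["GROQ_MODEL"] = "llama-3.3-70b-versatile"

def get_model_name() -> str:
    """Return the configured Groq model, with a safe default."""
    return os.getenv("GROQ_MODEL", "llama-3.3-70b-versatile")

print(get_model_name())  # → llama-3.3-70b-versatile
```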
## Installation & Running

### 0. Clone Repository
```shell
git clone https://github.com/narevignesh/edu-sync-agents.git
cd edu-sync-agents
```

### 1. Setup Virtual Environment
```shell
python -m venv env

# Activate (Windows)
env\Scripts\activate

# Activate (Unix/macOS)
source env/bin/activate

pip install -r requirements.txt
cp .env_example .env
# Fill in .env with your keys
```

### 2. CLI Run
```shell
python main.py
```

### 3. Streamlit UI
```shell
streamlit run app.py
```
## Data Flow
1. The user enters a topic in the UI.
2. The Research Agent composes an explanation using the LLM and the Wikipedia summary.
3. The Quiz Agent generates 5 Q/A pairs from the research summary.
4. The Explainer Agent provides detailed explanations for each Q/A pair.
5. The UI displays all sections and the raw output; traces are sent to LangSmith.
6. Fallbacks are used if any step fails, ensuring continuous output.
## Reliability & Fallbacks
- LLM prompts are guarded and responses are retried as needed.
- If any model response is empty:
  - Research: Uses the Wikipedia summary or a templated topic overview.
  - Quiz: Provides deterministic Q/A pairs for the topic.
  - Explain: Uses a generic rationale template.
- If LangGraph returns empty results, a sequential fallback pipeline is invoked.
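The empty-output guard can be sketched as a wrapper that retries and then returns a deterministic fallback (the names and retry count below are illustrative):

```python
from typing import Callable

def with_fallback(agent: Callable[[str], str], fallback: str,
                  retries: int = 2) -> Callable[[str], str]:
    """Wrap an agent call: retry on empty output, then return a deterministic fallback."""
    def guarded(prompt: str) -> str:
        for _ in range(retries):
            try:
                out = agent(prompt)
            except Exception:
                continue  # treat errors like empty output and retry
            if out and out.strip():
                return out
        return fallback  # never surface an empty result
    return guarded

# Hypothetical flaky agent that always returns an empty string.
research = with_fallback(lambda topic: "",
                         fallback="Overview unavailable; see the Wikipedia summary.")
print(research("photosynthesis"))  # → Overview unavailable; see the Wikipedia summary.
```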
## Security and Safety
- All API keys and secrets are loaded only from environment variables in `.env` (never hard-coded).
- The `math_tool` uses restricted evaluation for security.
- The Wikipedia tool only issues public, read-only API requests.
## Limitations
- Fallbacks are intentionally generic, serving as last-resort outputs.
- Wikipedia coverage may be limited for certain niche topics.
- No persistent data storage; state lives only in the in-memory checkpointer.
## Validation and Testing
- Manual validation across diverse topics.
- Trace review in LangSmith to confirm proper agent handoffs and tool usage.
- Sanity checks for both the Wikipedia and math tools.

## Performance
- Groq LLMs deliver low-latency completions.
- LLM models can be switched for the best trade-off between quality and speed.

## Future Improvements
- Add Wikipedia summary caching to boost speed and handle API limits.
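The proposed caching could be as simple as memoizing the summary fetcher; a sketch with a stub fetcher (the real tool calls the Wikipedia REST API instead):

```python
from functools import lru_cache

calls = 0  # counts real fetches, to show the cache working

def fetch_summary(topic: str) -> str:
    """Stub for the real Wikipedia REST API call."""
    global calls
    calls += 1
    return f"Summary of {topic}"

@lru_cache(maxsize=256)
def cached_summary(topic: str) -> str:
    """Memoize summaries so repeated topics skip the network round-trip."""
    return fetch_summary(topic)

cached_summary("photosynthesis")
cached_summary("photosynthesis")  # served from cache; no second fetch
print(calls)  # → 1
```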