A Multi-Agent AI Framework for Automated System Generation
The Meta-Agentic System is an innovative framework that leverages multiple Large Language Models (LLMs) to automatically design, implement, test, and deploy custom multi-agent systems from high-level user prompts. Think of it as "AI that creates AI systems."
This system employs a sophisticated workflow with four specialized agents: a Planner, a Coder, a Tester, and a Deployer.
The orchestrator manages the workflow, including an iterative refinement loop that ensures high-quality output.
User Prompt → Planner → Coder → Tester → [Refine if needed] → Deploy
                                   ↑
                            (Success Check)
The system intelligently selects the best LLM for each task:
claude-3-opus-20240229 (Anthropic) - Architecture & planning
gpt-4o (OpenAI) - Code generation & refinement
gemini-1.5-pro-latest (Google) - Quality assurance

main.py
Purpose: Entry point for the application
Key Components:
Usage Example:
python main.py # Prompts: "What multi-agent system would you like to create?"
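A minimal sketch of what this entry point might look like, assuming the `Orchestrator` class and a `run()` method as described in the orchestrator.py section below (the exact names are assumptions):

```python
# main.py (illustrative sketch; the actual implementation may differ)
from orchestrator import Orchestrator  # assumed import

def main():
    # Ask the user what system to generate
    user_prompt = input("What multi-agent system would you like to create? ")

    # Hand the prompt to the orchestrator, which drives the agent workflow
    orchestrator = Orchestrator()
    orchestrator.run(user_prompt)  # assumed method name

if __name__ == "__main__":
    main()
```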
orchestrator.py
Purpose: Central coordinator managing state and agent workflow
Key Components:
Orchestrator class maintains global state across agents

State Management:
state = {
    "user_prompt": str,
    "plan": dict,
    "generated_code": dict,
    "test_results": dict,
    "error": str   # if any
}
Workflow Logic:
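The workflow follows the pipeline shown above: plan, generate code, test, and regenerate with the test report until the success check passes (or a retry limit is reached), then deploy. A minimal sketch of that loop, with agent and method names taken from the descriptions in this document and the retry limit assumed:

```python
# Illustrative sketch of the refinement loop; the "passed" field and the
# retry limit are assumptions, not the actual orchestrator.py code.
from agents import PlannerAgent, CoderAgent, TesterAgent, DeployerAgent

MAX_REFINEMENTS = 3  # assumed cap on refinement rounds

class Orchestrator:
    def __init__(self):
        self.planner = PlannerAgent()
        self.coder = CoderAgent()
        self.tester = TesterAgent()
        self.deployer = DeployerAgent()

    def run(self, user_prompt: str) -> dict:
        state = {"user_prompt": user_prompt}

        # 1. Plan the target system
        state = self.planner.run(state)

        # 2. Generate, test, and refine until the success check passes
        for _ in range(MAX_REFINEMENTS):
            state = self.coder.run(state)
            state = self.tester.run(state)
            if state["test_results"].get("passed"):  # assumed report field
                break

        # 3. Deploy the generated project
        return self.deployer.run(state)
```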
agents.py
Purpose: Defines the four specialized agent classes
Agent Classes:
BaseAgent (Abstract Base Class)
Defines a name attribute and an abstract run() method

PlannerAgent
Produces a structured plan with project_name, agents_to_create, required_tools, workflow, dependencies

CoderAgent
Generates main.py, agents.py, tools.py, requirements.txt for the new system

TesterAgent
Reviews the generated code against the plan and produces a test report

DeployerAgent
Writes the generated project to disk, including a README.md for the new project

Agent Communication Pattern:
def run(self, state: dict) -> dict:
    # Process state
    # Call LLM
    # Update state
    return state
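As an illustration of this pattern, a PlannerAgent following it might look like the sketch below. The call_llm() signature, the {{USER_PROMPT}} placeholder, and the model assignment come from this document; the system prompt text and JSON parsing details are assumptions:

```python
# Illustrative sketch of an agent following the pattern above.
import json
from llm import call_llm
from prompts import PLANNER_PROMPT

class PlannerAgent(BaseAgent):  # BaseAgent is defined earlier in agents.py
    name = "planner"

    def run(self, state: dict) -> dict:
        # Fill the prompt template with the user's request
        user_prompt = PLANNER_PROMPT.replace("{{USER_PROMPT}}", state["user_prompt"])

        # Call the planning model and parse its structured JSON output
        response = call_llm(
            model_name="claude-3-opus-20240229",
            system_prompt="You are a software architect.",  # assumed wording
            user_prompt=user_prompt,
        )
        state["plan"] = json.loads(response)
        return state
```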
llm.py
Purpose: Unified interface for multiple LLM providers
Key Features:
API keys loaded from config.py
Single call_llm() interface for all providers

Function Signature:
def call_llm(model_name: str, system_prompt: str, user_prompt: str) -> str
Model Routing:
"claude" β Anthropic API"gpt" β OpenAI API"gemini" β Google Generative AI APIError Handling: Returns descriptive error messages on API failures
prompts.py
Purpose: Centralized prompt templates for all agents
Prompt Definitions:
PLANNER_PROMPT
Placeholders: {{USER_PROMPT}}

CODER_PROMPT
Placeholders: {{PLAN}}

CODER_REFINEMENT_PROMPT
Placeholders: {{PLAN}}, {{PREVIOUS_CODE}}, {{TEST_REPORT}}

TESTER_PROMPT
Placeholders: {{PLAN}}, {{CODE_TO_TEST}}

Design Philosophy: Prompts are designed to produce structured, parseable JSON outputs while ensuring each agent understands its role and constraints.
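For illustration, a PLANNER_PROMPT along the following lines would satisfy the JSON-output requirement; the exact wording of the real template is not shown in this document and is assumed here:

```python
# Illustrative template only; the actual PLANNER_PROMPT wording may differ.
PLANNER_PROMPT = """You are the Planner agent in a meta-agentic system.

Design a multi-agent system for the following request:
{{USER_PROMPT}}

Respond with ONLY a JSON object containing these keys:
  project_name, agents_to_create, required_tools, workflow, dependencies
Do not include any text outside the JSON object."""
```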
config.py
Purpose: Secure storage for API credentials
Configuration:
API_KEYS = {
    "google": "YOUR_GOOGLE_API_KEY_HERE",
    "openai": "YOUR_OPENAI_API_KEY_HERE",
    "anthropic": "YOUR_ANTHROPIC_API_KEY_HERE"
}
Security Note: Add config.py to .gitignore to prevent committing API keys
Required APIs: Google (Gemini), OpenAI, and Anthropic
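If you prefer not to keep keys in a tracked file at all, a common alternative (not part of the original config.py) is to read them from environment variables, as in this sketch:

```python
# Alternative config.py sketch using environment variables; this is an
# assumption, not the project's actual configuration approach.
import os

API_KEYS = {
    "google": os.environ.get("GOOGLE_API_KEY", ""),
    "openai": os.environ.get("OPENAI_API_KEY", ""),
    "anthropic": os.environ.get("ANTHROPIC_API_KEY", ""),
}
```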
requirements.txt
Purpose: Python package dependencies
Dependencies:
google-generativeai>=0.8.0  # Google Gemini API
openai>=1.0.0               # OpenAI API
anthropic>=0.34.0           # Anthropic Claude API
Clone the repository
git clone <repository-url>
cd mas
Create virtual environment (recommended)
python -m venv .venv
source .venv/bin/activate   # On Windows: .venv\Scripts\activate
Install dependencies
pip install -r requirements.txt
Configure API keys
Edit config.py and add your API keys:
API_KEYS = {
    "google": "your-actual-google-key",
    "openai": "your-actual-openai-key",
    "anthropic": "your-actual-anthropic-key"
}
Run the system
python main.py
"Create a multi-agent research system where one agent searches for recent 
articles on AI safety, another agent summarizes them, and a third agent 
creates a markdown report."
generated_research_system/
├── README.md
├── requirements.txt
├── main.py           # Orchestrator for the research system
├── agents.py         # ResearchAgent, SummaryAgent, ReportAgent
└── tools.py          # web_search(), save_to_file()
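To make the example concrete, the generated tools.py might contain stubs like the following; the function bodies here are purely illustrative and not actual generated output:

```python
# Purely illustrative sketch of what a generated tools.py could contain.

def web_search(query: str, max_results: int = 5) -> list[dict]:
    """Search the web and return a list of {title, url, snippet} dicts.

    The real generated implementation would call whatever search API or
    library the Planner listed under required_tools.
    """
    raise NotImplementedError("Generated implementation depends on the plan")

def save_to_file(path: str, content: str) -> None:
    """Write content (e.g. the markdown report) to the given path."""
    with open(path, "w", encoding="utf-8") as handle:
        handle.write(content)
```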
The system includes a sophisticated feedback loop: the Tester reviews the generated code against the plan, and if problems are found, the Coder is re-invoked with the CODER_REFINEMENT_PROMPT (which receives the previous code and the test report) to produce an improved version. The cycle repeats until the success check passes. This ensures high-quality, working implementations.
Contributions welcome! Please ensure: