AI Project Publication Assistant is a conditional multi-agent system designed to improve how AI and ML projects are presented for public sharing. While many projects are technically strong, they often suffer from unclear summaries, missing documentation sections, and poor discoverability. This system addresses that gap by analyzing a GitHub repository and producing grounded, explainable suggestions for improvement.
Unlike single-pass LLM tools, this project adopts a role-based multi-agent architecture using LangGraph. Each agent is responsible for a specific capability, enabling better control, reduced hallucination, and clearer reasoning.
---
This system is intentionally scoped to analysis and recommendation, not execution.
It does not modify repositories, generate code, or make claims about project correctness.
Inputs: a GitHub repository to analyze.
Outputs: grounded improvement suggestions and review feedback, written as structured YAML and Markdown artifacts.
By keeping the scope limited to recommendations, the system remains predictable, auditable, and evaluator-friendly.
A monolithic LLM workflow is insufficient for tasks that require grounding, validation, and iterative refinement. This system separates responsibilities across three agents: an analysis agent that builds repository understanding, a writer agent that drafts improvement suggestions, and a reviewer agent that acts as a quality gate on the output.
This separation improves reliability, modularity, and explainability.
LangGraph is used to model the system as a conditional state machine. Agents communicate through shared state, and the reviewer agent acts as a quality gate. If issues are detected, the system retries generation once before terminating. This bounded retry mechanism prevents infinite loops while still allowing quality-driven refinement.
LangGraph was chosen because it enables explicit state transitions, conditional routing, and debuggable agent workflows.
This makes agent coordination transparent and avoids implicit control flow often found in monolithic agent loops.
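The sketch below shows one way such a conditional flow could be wired with LangGraph's StateGraph. The state fields, node names (`analyze`, `write`, `review`), and placeholder node bodies are illustrative assumptions, not the project's actual implementation; the real agents would call the repository reader and LLM inside the node functions.

```python
from typing import List, TypedDict

from langgraph.graph import END, StateGraph


class PipelineState(TypedDict):
    repo_url: str
    repo_summary: str
    suggestions: List[str]
    review_passed: bool
    retries: int


def analyze(state: PipelineState) -> dict:
    # Placeholder: read the repository and summarize it (tool calls omitted).
    return {"repo_summary": f"summary of {state['repo_url']}"}


def write_suggestions(state: PipelineState) -> dict:
    # Placeholder: draft improvement suggestions grounded in the summary.
    return {"suggestions": ["Add an installation section to the README."]}


def review(state: PipelineState) -> dict:
    # Placeholder quality gate: accept when at least one suggestion exists;
    # otherwise count a failed attempt toward the retry budget.
    passed = len(state["suggestions"]) > 0
    return {"review_passed": passed, "retries": state["retries"] + (0 if passed else 1)}


def route_after_review(state: PipelineState) -> str:
    # Bounded retry: regenerate at most once, then terminate either way.
    if state["review_passed"] or state["retries"] >= 1:
        return "finish"
    return "retry"


graph = StateGraph(PipelineState)
graph.add_node("analyze", analyze)
graph.add_node("write", write_suggestions)
graph.add_node("review", review)
graph.set_entry_point("analyze")
graph.add_edge("analyze", "write")
graph.add_edge("write", "review")
graph.add_conditional_edges("review", route_after_review, {"retry": "write", "finish": END})

app = graph.compile()
result = app.invoke(
    {"repo_url": "https://github.com/example/repo", "suggestions": [], "review_passed": False, "retries": 0}
)
```

The conditional edge out of `review` either routes back to `write` (at most once, via the `retries` counter) or terminates at `END`, mirroring the bounded retry described above.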
Several design choices are used to reduce hallucination and uncontrolled behavior: suggestions are grounded in content actually read from the repository, the reviewer agent gates every generation, retries are bounded to a single attempt, and the system never modifies repositories or executes code.
These guardrails ensure that the system remains deterministic and explainable.
The system integrates multiple tools beyond basic LLM calls, including a GitHub repository reader, RAKE-style keyword extraction, heuristic README completeness scoring, and structured output generation in YAML and Markdown formats. This tool-augmented approach improves transparency and evaluator confidence.
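As an illustration of the heuristic README completeness scoring, the sketch below checks for a small, assumed list of expected sections; the actual section list and any weighting used by the project are not specified here.

```python
from typing import Dict

# Assumed set of sections a "complete" README is expected to contain.
EXPECTED_SECTIONS = ["installation", "usage", "examples", "contributing", "license"]


def readme_completeness(readme_text: str) -> Dict[str, object]:
    """Heuristic completeness score: fraction of expected sections found in the README."""
    text = readme_text.lower()
    present = [s for s in EXPECTED_SECTIONS if s in text]
    missing = [s for s in EXPECTED_SECTIONS if s not in text]
    return {
        "score": round(len(present) / len(EXPECTED_SECTIONS), 2),
        "present": present,
        "missing": missing,
    }


print(readme_completeness("# MyProject\n\n## Installation\n\npip install myproject\n"))
# -> {'score': 0.2, 'present': ['installation'], 'missing': ['usage', 'examples', 'contributing', 'license']}
```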
Each run produces deterministic, reproducible artifacts that summarize repository understanding, writing suggestions, and review feedback.
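A minimal sketch of how such artifacts could be serialized, assuming PyYAML is available; the field names, file names, and example values are illustrative, not the project's actual output schema.

```python
import yaml  # PyYAML

# Hypothetical structure for a single run's artifacts.
artifacts = {
    "repository_understanding": {"name": "example/repo", "keywords": ["multi-agent", "langgraph"]},
    "writing_suggestions": ["Add a quickstart section.", "Document the output formats."],
    "review_feedback": {"passed": True, "retries_used": 0},
}

# Sorted keys keep the YAML output stable across runs.
with open("report.yaml", "w") as f:
    yaml.safe_dump(artifacts, f, sort_keys=True)

# Companion Markdown report built from the same data.
lines = ["# Publication Report", "", "## Writing suggestions"]
lines += [f"- {s}" for s in artifacts["writing_suggestions"]]
with open("report.md", "w") as f:
    f.write("\n".join(lines) + "\n")
```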
Evaluation is performed using explicit signals, including the heuristic README completeness score and the reviewer agent's accept-or-retry decision.
This approach prioritizes correctness and explainability over purely generative fluency.
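For example, a run-level acceptance check could combine these signals as follows; the threshold and function signature are assumptions for illustration, not the project's actual evaluation code.

```python
def run_is_acceptable(completeness_score: float, suggestions: list, threshold: float = 0.6) -> bool:
    """Combine explicit signals into a pass/fail judgment for one run."""
    # Pass only when the heuristic completeness score clears the threshold
    # and at least one concrete suggestion was produced.
    return completeness_score >= threshold and len(suggestions) > 0
```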
Current analysis focuses primarily on README files and uses heuristic scoring. Future improvements include deeper code analysis, LLM-based semantic review, batch processing of multiple repositories, and interactive user interfaces.