This article presents a modular, intelligent multi-agent feedback management system built with CrewAI and a routing-based design pattern. The system routes user queries by category—"General," "User Suggestions General," or "User Suggestions Reporting"—and invokes agents only when necessary, optimizing performance and response time. Built on FastAPI and integrated with LiteLLM and GPT-based models, this structured workflow handles basic queries lightly while dynamically assembling a multi-agent crew for report generation tasks. Agents are assigned specific roles such as listener, cleaner, classifier, and reporter, culminating in a detailed, styled HTML report derived from user feedback. This architecture offers a scalable and intelligent way to automate user feedback analysis and reporting.
Effective feedback management is crucial for continuous product improvement. However, as the volume and complexity of feedback grow, traditional systems struggle to prioritize, categorize, and present insights in a meaningful way. To address this challenge, we developed a smart, agent-driven feedback manager that selectively applies AI resources depending on the query complexity. Utilizing CrewAI, a framework for structured multi-agent collaboration, and the routing design pattern, this system ensures efficiency by avoiding unnecessary agent calls for simple queries, while dynamically deploying agents for in-depth feedback reporting tasks.
The architecture follows a structured routing workflow where incoming user prompts are analyzed and categorized before any agent is invoked. This method reduces latency and computational overhead.
The user input is passed to an LLM (via LiteLLM), which classifies the query into one of three categories: "General," "User Suggestions General," or "User Suggestions Reporting." Based on the category:

- If the query is "General" or "User Suggestions General," the response is generated directly using LiteLLM, with no agent invocation.
- If the query is "User Suggestions Reporting," a multi-agent crew is launched to generate a detailed report.
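The routing step above can be sketched in plain Python. The category names come from the article; `classify_query` is a keyword-based stand-in for the actual LiteLLM classification call, and the function names are illustrative, not the project's real API:

```python
# Categories named in the article.
CATEGORIES = ("General", "User Suggestions General", "User Suggestions Reporting")

def classify_query(prompt: str) -> str:
    """Stand-in for the LLM classifier: simple keyword rules, for illustration only."""
    text = prompt.lower()
    if "report" in text:
        return "User Suggestions Reporting"
    if "suggestion" in text:
        return "User Suggestions General"
    return "General"

def route(prompt: str) -> str:
    """Decide whether to answer directly or assemble the multi-agent crew."""
    category = classify_query(prompt)
    if category == "User Suggestions Reporting":
        return "crew"    # reporting query: launch the multi-agent crew
    return "direct"      # simple query: answer via LiteLLM, no agents invoked
```

The key design point is that the expensive path (`"crew"`) is reached only for one of the three categories; the other two short-circuit to a single lightweight LLM call.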
When agent invocation is needed, the following agents and tasks are executed in sequence:
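As a hedged illustration of that sequential hand-off, here is a plain-Python stand-in for four of the roles named earlier (listener, cleaner, classifier, reporter). The real system implements each role as a CrewAI agent with an LLM-backed task; the function bodies below are simplified placeholders:

```python
def listen(feedback: list[str]) -> list[str]:
    """Listener: collect the raw feedback entries to analyze."""
    return list(feedback)

def clean(entries: list[str]) -> list[str]:
    """Cleaner: strip whitespace and drop empty entries."""
    return [e.strip() for e in entries if e.strip()]

def classify(entries: list[str]) -> dict[str, list[str]]:
    """Classifier: bucket entries (keyword rule as a stand-in for the LLM)."""
    buckets: dict[str, list[str]] = {"bug": [], "feature": [], "other": []}
    for e in entries:
        key = "bug" if "bug" in e.lower() else "feature" if "add" in e.lower() else "other"
        buckets[key].append(e)
    return buckets

def report(buckets: dict[str, list[str]]) -> str:
    """Reporter: render the buckets as a minimal HTML report."""
    sections = "".join(
        f"<h2>{name}</h2><ul>{''.join(f'<li>{e}</li>' for e in items)}</ul>"
        for name, items in buckets.items() if items
    )
    return f"<html><body><h1>User Suggestions Report</h1>{sections}</body></html>"

def run_pipeline(feedback: list[str]) -> str:
    """Execute the roles in sequence, each consuming the previous one's output."""
    return report(classify(clean(listen(feedback))))
```

Each stage consumes the previous stage's output, mirroring CrewAI's sequential task execution: the reporter only ever sees feedback that has already been collected, cleaned, and classified.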
We evaluated the system using three example prompts to assess routing accuracy and agent collaboration:
| User Prompt | Expected Route | Agents Invoked |
|---|---|---|
| "Hello, who are you?" | General | None |
| "I want to implement top user suggestions" | User Suggestions General | None |
| "Show me a detailed report of last 5 user suggestions" | User Suggestions Reporting | All (6 agents) |
Additionally, the HTML report generation was validated by checking the following:
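One representative check can be sketched with Python's built-in `html.parser`: confirming that every tag opened in the generated report is properly closed. This is an illustrative validation only, not the project's actual checklist:

```python
from html.parser import HTMLParser

# Tags that are legitimately self-closing in HTML.
VOID_TAGS = {"br", "hr", "img", "meta", "link", "input"}

class TagBalanceChecker(HTMLParser):
    """Track open tags on a stack; any mismatch marks the document unbalanced."""

    def __init__(self):
        super().__init__()
        self.stack: list[str] = []
        self.balanced = True

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if not self.stack or self.stack.pop() != tag:
            self.balanced = False

def is_balanced(html: str) -> bool:
    """Return True if every non-void tag in `html` is opened and closed in order."""
    checker = TagBalanceChecker()
    checker.feed(html)
    return checker.balanced and not checker.stack
```

A check like this can run on every generated report before it is returned to the user, catching truncated or malformed agent output early.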
The system successfully categorized and routed user inputs with 100% accuracy in the tested scenarios.
For reporting-related queries, the agents generated clean, detailed HTML reports with:
Performance benchmarks showed:
This project demonstrates the practical efficiency of combining intelligent routing with modular multi-agent design for scalable feedback processing. By integrating LiteLLM for lightweight responses and CrewAI for collaborative tasks, the system maintains a balance between performance and depth. The modular structure allows easy extension—such as adding evaluators or new tools—and can be adapted for other domains needing structured human-like collaboration. Future work will focus on extending this to real-time dashboards and adaptive feedback loops.