This project is a multi-agent AI assistant built with the LangGraph framework. It demonstrates how multiple agents can collaborate to extract a topic from a user query, fetch relevant news, and generate concise summaries. The system is implemented in Python and leverages HuggingFace Transformers, LangGraph, and custom tools for a robust workflow.
The assistant currently runs as a console-based application, demonstrating multi-agent orchestration without requiring a UI.
The workflow is composed of three agents:

- `intent_agent`: extracts the topic keyword from the user's query
- `news_agent`: uses the `get_news` tool to retrieve articles dynamically
- `summarizer_agent`: generates concise summaries of the retrieved articles
A `StateGraph` connects the agents in sequence: Intent → News → Summarizer. The agents communicate via a typed state dictionary (`NewsState`) to maintain structured data flow, and console outputs track workflow progress, providing feedback for each step.
The project builds on:

- Python 3.x
- Transformers & HuggingFace Pipeline: FLAN-T5-small for local LLM inference
- LangGraph: prebuilt ReAct agents & StateGraph orchestration
- Custom tools (illustrative sketches of these follow the model setup below):
  - `detect_topic_llm`: topic extraction
  - `get_news`: fetching relevant news
  - `summarize_articles`: article summarization

The local model is loaded with HuggingFace Transformers and wrapped so the agents can use it as a chat model:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
from langchain_huggingface import ChatHuggingFace
from langchain_huggingface.llms import HuggingFacePipeline

# Run on GPU if available, otherwise fall back to CPU
device = 0 if torch.cuda.is_available() else -1

# Load FLAN-T5-small and expose it as a text2text-generation pipeline
model_name = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
pipe = pipeline("text2text-generation", model=base_model, tokenizer=tokenizer, device=device)

# Wrap the pipeline as a chat model that the agents can call
llm_pipeline = HuggingFacePipeline(pipeline=pipe)
local_llm = ChatHuggingFace(llm=llm_pipeline)
```
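The snippets shown here do not include the tool implementations themselves. The sketch below shows one plausible shape for them using LangChain's `@tool` decorator; the NewsAPI endpoint, the `NEWS_API_KEY` environment variable, the prompts, and the returned field names are illustrative assumptions rather than the repository's actual code.

```python
import os
import requests
from langchain_core.tools import tool

@tool
def detect_topic_llm(user_input: str) -> str:
    """Extract a single topic keyword from the user's query."""
    response = local_llm.invoke(f"Extract the main topic keyword from: {user_input}")
    return response.content.strip()

@tool
def get_news(topic: str) -> list:
    """Fetch recent news articles for a topic (hypothetical NewsAPI call)."""
    resp = requests.get(
        "https://newsapi.org/v2/everything",
        params={"q": topic, "pageSize": 5, "apiKey": os.environ.get("NEWS_API_KEY", "")},
        timeout=10,
    )
    return resp.json().get("articles", [])

@tool
def summarize_articles(articles: list) -> list:
    """Summarize each article's content with the local LLM."""
    summaries = []
    for article in articles:
        text = article.get("content") or article.get("description") or ""
        summary = local_llm.invoke(f"Summarize in one sentence: {text}")
        summaries.append({"title": article.get("title"), "summary": summary.content})
    return summaries
```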
The shared state is defined as a typed dictionary:

```python
from typing import TypedDict, List

# Shared state passed between the agents in the graph
class NewsState(TypedDict):
    user_input: str
    topic: str
    articles: List[dict]
    summaries: List[dict]
```
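Using a `TypedDict` keeps the shared state explicit: each node reads the keys it needs and writes its result back, so the topic, articles, and summaries move between agents without hidden globals.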
Each agent is created with LangGraph's prebuilt ReAct agent factory, binding the local LLM to a single tool and instruction:

```python
from langgraph.prebuilt import create_react_agent

intent_agent = create_react_agent(model=local_llm, tools=[detect_topic_llm], prompt="Extract topic keyword")
news_agent = create_react_agent(model=local_llm, tools=[get_news], prompt="Fetch relevant news")
summarizer_agent = create_react_agent(model=local_llm, tools=[summarize_articles], prompt="Summarize articles")
```
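The graph construction below refers to `intent_node`, `news_node`, and `summarizer_node`, which wrap each step and update `NewsState`. Here is a minimal sketch of what such node functions could look like, assuming they call the custom tools directly (argument names follow the tool sketch above) and print progress to the console; the repository may instead route each step through the corresponding ReAct agent.

```python
def intent_node(state: NewsState) -> NewsState:
    # Step 1: extract the topic keyword from the raw user query
    print("[1/3] Detecting topic...")
    topic = detect_topic_llm.invoke({"user_input": state["user_input"]})
    return {**state, "topic": topic}

def news_node(state: NewsState) -> NewsState:
    # Step 2: fetch articles for the detected topic
    print(f"[2/3] Fetching news about '{state['topic']}'...")
    articles = get_news.invoke({"topic": state["topic"]})
    return {**state, "articles": articles}

def summarizer_node(state: NewsState) -> NewsState:
    # Step 3: summarize the fetched articles
    print("[3/3] Summarizing articles...")
    summaries = summarize_articles.invoke({"articles": state["articles"]})
    return {**state, "summaries": summaries}
```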
The nodes are wired into a `StateGraph` that runs them in sequence:

```python
from langgraph.graph import StateGraph

graph = StateGraph(NewsState)

# Register the node functions (node names are inferred from the function names)
graph.add_node(intent_node)
graph.add_node(news_node)
graph.add_node(summarizer_node)

# Linear flow: intent -> news -> summarizer
graph.add_edge("intent_node", "news_node")
graph.add_edge("news_node", "summarizer_node")
graph.set_entry_point("intent_node")
graph.set_finish_point("summarizer_node")
```
Finally, the graph is compiled and invoked with an example query:

```python
# Seed the state with the user's query and run the full workflow
initial_state = NewsState(user_input="Get me the latest news about Netflix", topic="", articles=[], summaries=[])
compiled_graph = graph.compile()
final_state = compiled_graph.invoke(initial_state)
```
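Because the assistant runs in the console, the resulting state can simply be printed once the graph finishes. A small sketch, assuming the summaries follow the `title`/`summary` structure used in the tool sketch above:

```python
# Print the detected topic and one line per summarized article
print(f"\nTopic: {final_state['topic']}")
for item in final_state["summaries"]:
    print(f"- {item['title']}: {item['summary']}")
```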
The complete working code for this Multi-Agent LangGraph News QA Assistant can be accessed on GitHub: