This project presents a Retrieval-Augmented Generation (RAG) Assistant that leverages LangChain, FAISS, and Gemini 2.5 Flash to answer user queries using a custom Wikipedia-based knowledge base.
The assistant dynamically retrieves relevant information and generates context-rich, accurate responses.
Key features include session-based memory, dynamic knowledge base initialization, robust error handling, and integrated logging.
This publication outlines the assistant's purpose, technical architecture, and potential use cases in education, research, and professional settings.
The RAG Assistant is designed to provide accurate, context-aware responses by combining retrieval and generation techniques.
It grounds its answers in verified data using content retrieved from Wikipedia, making it well suited to education, research, and professional use.
The system ensures each answer is contextually relevant and factually supported.
Traditional AI assistants often produce hallucinated or generic responses.
This project mitigates that issue by retrieving trusted content before generating an answer.
Overall, this RAG system bridges the gap between retrieval precision and generative fluency, making it a valuable learning and research companion.
The assistant is built using a robust, modular tech stack:
| Component | Purpose |
|---|---|
| LangChain | Orchestrates the retrieval → generation pipeline |
| FAISS | Provides efficient vector-based document retrieval |
| Gemini 2.5 Flash | Generates high-quality, context-aware responses |
| WikipediaLoader | Fetches topic-specific Wikipedia content |
| RecursiveCharacterTextSplitter | Splits large texts into manageable chunks |
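To make the stack concrete, here is a minimal sketch of how these components fit together when building the knowledge base. It assumes the `langchain-community`, `langchain-text-splitters`, `langchain-google-genai`, `faiss-cpu`, and `wikipedia` packages are installed; the topic, chunk sizes, and embedding model are illustrative choices, not values taken from the repository's config:

```python
# Minimal knowledge-base build sketch. The topic, chunk sizes, and
# embedding model below are illustrative assumptions, not the
# repository's actual config values.
from langchain_community.document_loaders import WikipediaLoader
from langchain_community.vectorstores import FAISS
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Fetch topic-specific Wikipedia content
docs = WikipediaLoader(query="Haile Gebrselassie", load_max_docs=2).load()

# Split long articles into overlapping chunks that fit a model context window
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# Embed the chunks and index them in FAISS for vector similarity search
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
vectorstore = FAISS.from_documents(chunks, embeddings)
```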
Yes, the assistant is easy to set up and customize:
```bash
# Clone the repository
git clone https://github.com/Fraol-D/RAG-Chatbot.git
cd RAG-Chatbot

# Install dependencies
pip install -r requirements.txt

# Run the app
python main.py
```

## More Documentation

See detailed setup steps, examples, and troubleshooting in the [GitHub repository](https://github.com/Fraol-D/RAG-Chatbot).

---

## Technical Implementation

```python
from knowledge_base import create_vector_store
from pipeline import create_rag_chain
import config

# Step 1: Create the knowledge base
vectorstore, _ = create_vector_store(
    topic=config.WIKIPEDIA_TOPIC,
    chunk_size=config.CHUNK_SIZE,
    chunk_overlap=config.CHUNK_OVERLAP,
)

if vectorstore:
    # Step 2: Build the RAG chain
    prompt_template = config.PROMPT_TEMPLATE.format(topic=config.WIKIPEDIA_TOPIC)
    rag_chain = create_rag_chain(vectorstore, prompt_template)

    # Step 3: Run a query
    question = "What are the achievements of Haile Gebrselassie?"
    response = rag_chain.invoke({"question": question})
    print("Response:", response)
else:
    print("Failed to initialize the knowledge base.")
```
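For readers curious about the `pipeline` module, the sketch below shows one plausible way `create_rag_chain` could be written with LangChain Expression Language (LCEL). This is an assumption, not the repository's actual code; the top-k retriever setting and the `gemini-2.5-flash` model identifier are likewise illustrative:

```python
# Hypothetical sketch of pipeline.create_rag_chain using LCEL.
# The real repository implementation may differ.
from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI


def format_docs(docs):
    """Concatenate retrieved chunks into a single context string."""
    return "\n\n".join(doc.page_content for doc in docs)


def create_rag_chain(vectorstore, prompt_template):
    # Turn the FAISS index into a retriever returning the top-k chunks
    retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
    prompt = ChatPromptTemplate.from_template(prompt_template)
    llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash", temperature=0)

    # Retrieve context for the question, fill the prompt, generate,
    # and parse the model output to a plain string
    return (
        {
            "context": itemgetter("question") | retriever | format_docs,
            "question": itemgetter("question"),
        }
        | prompt
        | llm
        | StrOutputParser()
    )
```

The chain accepts `{"question": ...}`, which matches the `rag_chain.invoke` call in the snippet above.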
A dynamic prompt template tailors responses to the current topic, ensuring contextual precision.
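As an illustration, `config.PROMPT_TEMPLATE` might look something like the hypothetical template below; the repository's actual wording may differ. Because `{topic}` is filled once via `str.format` at startup, the chain-time placeholders are escaped with double braces so they survive that first formatting pass:

```python
# Hypothetical example of config.PROMPT_TEMPLATE; actual wording may differ.
# {topic} is substituted at startup; {{context}} and {{question}} become
# {context} and {question}, to be filled by the chain at query time.
PROMPT_TEMPLATE = """You are a helpful assistant answering questions about {topic}.
Answer using only the context below. If the answer is not in the context, say you don't know.

Context:
{{context}}

Question: {{question}}

Answer:"""
```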
Author: Fraol Ashebir Demisse
Email: fraolashebir84@gmail.com
GitHub: https://github.com/Fraol-D
Project Repository: https://github.com/Fraol-D/RAG-Chatbot
Tags: LangChain, FAISS, RAG, Gemini 2.5 Flash, AI Assistant, Python, Machine Learning