Marouf Chatbot is an intelligent question-answering assistant built on a Retrieval-Augmented Generation (RAG) architecture. It integrates FAISS for vector search, an LLM for response generation, and Redis caching to provide fast, accurate, context-aware answers. Built with FastAPI and Streamlit, the chatbot offers smooth interaction and deploys seamlessly via Docker.
| Feature | Technology | Benefit |
|---|---|---|
| Context-aware Q&A | FAISS + Sentence Transformers | Higher response accuracy |
| Conversational flow | DeepSeek-Llama-70B | Human-like responses |
| Persistent memory | PostgreSQL | Session continuity |
| High performance | Groq LPU | 300 tokens/sec |
| Caching system | Redis | 68% cache hit rate |
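To make the first row concrete, here is a minimal sketch of the retrieval step behind context-aware Q&A: the query is embedded with a Sentence Transformers model and matched against the FAISS index. The model name is an illustrative assumption, not necessarily the one this repo uses; only the index path comes from the repository layout.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice
index = faiss.read_index("scripts/chatbot/faiss_index.index")

def retrieve(query: str, k: int = 3) -> list[int]:
    """Return the row ids of the k dataset entries closest to the query."""
    vec = model.encode([query]).astype(np.float32)  # embed into the indexed space
    _distances, ids = index.search(vec, k)          # nearest-neighbour lookup
    return ids[0].tolist()
```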
```mermaid
pie title Technology Distribution
    "NLP Processing" : 35
    "Database" : 25
    "API Server" : 20
    "Frontend" : 15
    "DevOps" : 5
```
```bash
# Clone repository
git clone https://github.com/Mkaljermy/marouf_chatbot.git
cd marouf_chatbot

# Setup environment
cp scripts/.env.example scripts/.env
nano scripts/.env  # Add your API keys

# Build and run
docker-compose up --build -d
```
```python
import requests

response = requests.post(
    "http://localhost:8000/chat",
    json={"query": "What's the capital of France?"},
)
print(response.json())
```
```mermaid
graph LR
    A[User] --> B[Streamlit]
    B --> C[FastAPI]
    C --> D{Redis?}
    D -->|Cache Hit| E[Return Response]
    D -->|Cache Miss| F[FAISS Search]
    F --> G[PostgreSQL]
    G --> H[LLM Processing]
    H --> C
```
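The diagram above is essentially a cache-aside pattern: check Redis first, and only run retrieval and generation on a miss. The sketch below shows one way to express it; the key scheme, the one-hour TTL, and the two helper functions (`retrieve_context`, `call_llm`) are hypothetical placeholders, not the repository's actual API.

```python
import hashlib
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def retrieve_context(query: str) -> str:
    """Placeholder for the FAISS search + PostgreSQL fetch step."""
    raise NotImplementedError

def call_llm(query: str, context: str) -> str:
    """Placeholder for the Groq-hosted LLM call."""
    raise NotImplementedError

def answer(query: str) -> str:
    key = "chat:" + hashlib.sha256(query.encode()).hexdigest()
    cached = r.get(key)            # 1. check Redis first
    if cached is not None:
        return cached              # cache hit: skip retrieval and generation
    context = retrieve_context(query)    # 2. cache miss: FAISS + PostgreSQL
    response = call_llm(query, context)  # 3. generate with the LLM
    r.set(key, response, ex=3600)  # keep the answer warm for an hour
    return response
```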
```
marouf_chatbot/
├── scripts/
│   ├── api/
│   │   ├── api.py
│   │   ├── embeddings.npy
│   │   └── faiss_index.index
│   ├── cache/
│   │   └── caching.py
│   ├── chatbot/
│   │   ├── chatbot.py
│   │   ├── embeddings.npy
│   │   └── faiss_index.index
│   └── frontend/
│       └── index.py
├── data/
│   └── trivia_dataset.csv
├── docker-compose.yml
├── Dockerfile.api
├── Dockerfile.frontend
└── requirements.txt
```
To customize the chatbot:

- Modify `scripts/chatbot/chatbot.py` to change the chatbot's behavior.
- Replace `trivia_dataset.csv` in the `data/` folder to use your own dataset (see the rebuild sketch after this list).
- Edit `scripts/frontend/index.py` to update the Streamlit interface.
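If you swap in your own dataset, the embeddings and the FAISS index need to be rebuilt to match it. A minimal sketch, assuming a `question` column in the CSV and an illustrative embedding model; check the actual preprocessing code in `scripts/` before relying on it:

```python
import faiss
import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer

df = pd.read_csv("data/trivia_dataset.csv")
model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice

# Embed every question and persist the matrix next to the index.
embeddings = model.encode(df["question"].tolist()).astype(np.float32)  # "question" is an assumed column name
np.save("scripts/chatbot/embeddings.npy", embeddings)

# Build an exact L2 index over the embeddings and write it to disk.
index = faiss.IndexFlatL2(embeddings.shape[1])
index.add(embeddings)
faiss.write_index(index, "scripts/chatbot/faiss_index.index")
```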
For inquiries and contributions, contact Mohammad Aljermy:
Check out the full project on GitHub: [Marouf Chatbot Repository](https://github.com/Mkaljermy/marouf_chatbot)