This project presents a production-ready AI chatbot that leverages the Groq LLaMA 3.1 model, built on a modular Python backend and a Streamlit frontend. The system transforms a working prototype from Module 2 into a stable, user-ready application with enhanced safety, testing, and operational features.

AI chatbots often face challenges in providing secure, real-time interactions while remaining user-friendly. Prototypes built during early development phases may lack robust error handling, session management, and production-grade architecture.
The Groq Chatbot addresses these issues with:
Backend Module (llm.py): Handles API calls to Groq, encapsulating prompt submission and error handling.
Frontend Module (app.py): Streamlit UI with chat bubbles, session storage, and real-time response display.
Production Enhancements: Input validation, graceful error handling, logging, and modular design for easy scaling.
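The logging mentioned above can be wired in with Python's standard logging module; the sketch below is illustrative, and the logger name and helper are not taken from the project:

```python
import logging

# Configure a module-level logger once, at startup.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
)
logger = logging.getLogger("groq_chatbot")

def log_request(prompt: str) -> None:
    """Record each prompt submission so failures can be traced later."""
    logger.info("Prompt submitted (%d chars)", len(prompt))
```

Routing events through one named logger keeps backend and frontend messages distinguishable once the app is deployed.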
┌────────────────┐      ┌────────────────┐      ┌──────────────┐
│  Streamlit UI  │─────▶│   run_llm()    │─────▶│   Groq API   │
│ - Text Input   │      │ - Python SDK   │      │ - LLaMA 3.1  │
│ - Chat Bubbles │      │ - Prompt Exec  │      │    Model     │
└────────────────┘      └────────────────┘      └──────────────┘
        ▲
        │
        │
Session State Storage (Chat History)
Python 3.11+: Core programming language.
Streamlit 1.40+: Interactive web interface.
Groq Python SDK: LLaMA 3.1 API integration.
dotenv: Environment-based API key management.
Backend logic decoupled from frontend
run_llm() function reusable across projects
Input validation and exception handling
Styled chat bubbles and session persistence
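The input validation and exception handling above can be sketched as a thin wrapper around the LLM call; safe_run_llm and the injected llm_fn parameter are hypothetical names for illustration:

```python
def safe_run_llm(prompt: str, llm_fn) -> str:
    """Validate the prompt, call the LLM, and degrade gracefully on errors."""
    if not prompt or not prompt.strip():
        return "Please enter a non-empty prompt."
    try:
        return llm_fn(prompt.strip())
    except Exception as exc:  # network errors, rate limits, bad responses
        return f"Sorry, something went wrong: {exc}"
```

In the app this would wrap run_llm; injecting llm_fn keeps the validation logic testable without a live API key.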
import os
from dotenv import load_dotenv
from groq import Groq

load_dotenv()
GROQ_API_KEY = os.getenv("GROQ_API_KEY")
client = Groq(api_key=GROQ_API_KEY)

def run_llm(prompt: str, max_tokens: int = 300):
    response = client.chat.completions.create(
        model="llama-3.1-8b-instant",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
    )
    return response.choices[0].message.content
Securely loads API key.
Sends user prompts to LLaMA 3.1 and returns AI-generated content.
Modular and reusable.
prompt = st.text_area("Your Prompt:")
if st.button("Send"):
    response = run_llm(prompt)
    st.session_state.chat_history.append(("You", prompt))
    st.session_state.chat_history.append(("AI", response))
Accepts user input and displays AI responses.
Maintains session-based chat history.
Uses HTML-styled chat bubbles for readability.
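The HTML-styled bubbles can be produced by a small helper; render_bubble and the inline styles below are a hypothetical sketch, not the project's exact markup:

```python
import html

def render_bubble(role: str, text: str) -> str:
    """Return an HTML snippet styling user and AI messages differently."""
    # User messages aligned right in green, AI messages left in grey.
    color = "#DCF8C6" if role == "You" else "#F1F0F0"
    align = "right" if role == "You" else "left"
    safe = html.escape(text)  # avoid injecting raw user HTML
    return (
        f'<div style="background:{color};text-align:{align};'
        f'border-radius:8px;padding:8px;margin:4px 0;">'
        f"<b>{role}:</b> {safe}</div>"
    )
```

A helper like this would be rendered with st.markdown(..., unsafe_allow_html=True); escaping the message text keeps user input from breaking the page.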
st.set_page_config() sets title & icon.
Prevents empty prompts.
Maintains chat history per user session.
Displays warnings for invalid input.
Styled Chat Bubbles:
Differentiates user and AI messages.
Real-time spinners indicate AI is processing.
Warnings for empty input or failed responses.
Users can continue conversations without losing history.
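The history behavior above can be reasoned about as plain list operations; in app.py the same pattern lives in st.session_state.chat_history, and the helper name here is illustrative:

```python
def append_exchange(history, user_text, ai_text):
    """Append one user/AI turn pair, mirroring what the Streamlit app does."""
    history.append(("You", user_text))
    history.append(("AI", ai_text))
    return history

# Streamlit re-runs the whole script on every interaction; storing the list
# in st.session_state (rather than a local variable) is what preserves it.
```
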
Run locally with: streamlit run app.py
Deployable to Streamlit Cloud or other hosting platforms.
Uses a .env file for API key management.
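The .env file typically holds just the key; the value shown is a placeholder, not a real credential:

```
GROQ_API_KEY=your_groq_api_key_here
```

load_dotenv() reads this file at startup, keeping the secret out of source control (add .env to .gitignore).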
Move session state to a database for multi-user support; batch API calls for high-load scenarios.
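Moving session state to a database could start with something as small as SQLite; the schema and helpers below are a sketch under the assumption of a single messages table keyed by session, not a design the project specifies:

```python
import sqlite3

def init_db(path=":memory:"):
    """Create a messages table keyed by session so multiple users can chat."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS messages ("
        "session_id TEXT, role TEXT, content TEXT, "
        "ts DATETIME DEFAULT CURRENT_TIMESTAMP)"
    )
    return conn

def save_message(conn, session_id, role, content):
    conn.execute(
        "INSERT INTO messages (session_id, role, content) VALUES (?, ?, ?)",
        (session_id, role, content),
    )
    conn.commit()

def load_history(conn, session_id):
    rows = conn.execute(
        "SELECT role, content FROM messages WHERE session_id = ? ORDER BY rowid",
        (session_id,),
    )
    return list(rows)
```

Keying every row by session_id is what lifts the chat from one browser tab to many users; a production deployment would swap SQLite for a server database behind the same three functions.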
The Groq Chatbot demonstrates a production-ready AI application combining backend LLM integration with a Streamlit frontend. With modular design, session persistence, input validation, and graceful error handling, it represents a stable, safe, and user-ready system suitable for real-world deployment.