Welcome to the Q-n-A Chatbot, an intelligent conversational AI designed to provide quick and accurate answers to user queries. Built using the LangChain framework and Groq API, this chatbot enhances user interaction by leveraging natural language processing, making information retrieval seamless and efficient.
The primary purpose of this chatbot is to improve user experience by delivering precise and relevant responses to queries. It aims to demonstrate the capabilities of large language models (LLMs) in real-world applications while facilitating easy integration for developers.
This project is aimed at AI researchers, developers looking to implement AI solutions, and students interested in understanding LLMs and chatbot development.
To set up the project locally, follow these steps:
Clone the repository:
```bash
git clone https://github.com/AlexKalll/Q-n-A-Chatbot.git
cd Q-n-A-Chatbot
```
Install the required packages:
Make sure you have Python installed, then run:
```bash
pip install -r requirements.txt
```
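The authoritative dependency list lives in `requirements.txt` in the repository; based on the imports in `chatbot.py`, it likely includes at least the following packages (a sketch, not the exact pinned list):

```
streamlit
python-dotenv
langchain-core
langchain-groq
```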
Set up environment variables:
Create a `.env` file in the root directory and add your API keys:
```env
LANGCHAIN_API_KEY=your_langchain_api_key
GROQ_API_KEY=your_groq_api_key
```
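As a quick sanity check before launching the app, you can verify that both keys are actually set. The helper below is a hypothetical addition for illustration, not part of `chatbot.py`:

```python
import os

# Hypothetical helper (not in chatbot.py): report which required API keys are missing.
REQUIRED_KEYS = ("LANGCHAIN_API_KEY", "GROQ_API_KEY")

def missing_keys(env=os.environ):
    """Return the names of required keys that are unset or empty."""
    return [key for key in REQUIRED_KEYS if not env.get(key)]

if __name__ == "__main__":
    absent = missing_keys()
    if absent:
        print(f"Missing keys: {', '.join(absent)}")
    else:
        print("All API keys are set.")
```

Running this before `streamlit run` makes a missing or misnamed key obvious immediately, rather than surfacing as an authentication error inside the app.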
To run the chatbot, execute the following command in your terminal:
```bash
streamlit run chatbot.py
```
This will open a new tab in your web browser where you can interact with the chatbot.
This chatbot can be utilized in a variety of sectors. While it offers advanced capabilities, it also has the limitations common to LLM-based applications, such as the possibility of inaccurate or outdated answers and a dependence on external API availability.
Here’s a brief overview of the main components of the code in `chatbot.py`:
```python
import os

import streamlit as st
from dotenv import load_dotenv
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_groq import ChatGroq

# Load API keys from the .env file
load_dotenv()
langchain_api_key = os.getenv("LANGCHAIN_API_KEY")
groq_api_key = os.getenv("GROQ_API_KEY")

# Define the prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("user", "Question: {question}"),
])

# Generate a response to a user question
def generate_response(question, engine, temperature, max_token):
    llm = ChatGroq(model=engine, temperature=temperature, max_tokens=max_token)
    output_parser = StrOutputParser()
    chain = prompt | llm | output_parser
    answer = chain.invoke({"question": question})
    return answer
```
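The line `prompt | llm | output_parser` uses LangChain's pipe operator to compose runnables left to right: the prompt formats the input, the model generates a reply, and the parser extracts a plain string. The toy classes below are not LangChain code; they mimic that composition pattern in plain Python so the data flow is easy to see:

```python
# Toy illustration of LangChain-style `|` chaining; not the real LangChain API.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # a | b produces a step that runs a, then feeds its output to b
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Stand-ins for the prompt template, the LLM, and the output parser
prompt = Step(lambda d: f"Question: {d['question']}")
llm = Step(lambda text: f"(model reply to: {text})")
parser = Step(lambda reply: reply.strip())

chain = prompt | llm | parser
```

Here `chain.invoke({"question": ...})` runs the three stages in order, just as the real chain does when `generate_response` is called.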
Example Interaction:
Q: What types of questions can I ask the chatbot?
A: You can ask any question related to general knowledge, technology, or specific topics of interest. The chatbot is designed to provide informative answers.
Q: Can I customize the chatbot further?
A: Yes! You can modify the code to add more features or change the models used for response generation.
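For example, one simple customization is to rebuild the prompt messages with a different system instruction before passing them to `ChatPromptTemplate.from_messages`. The helper below is hypothetical, not part of `chatbot.py`; it mirrors the message structure shown in the code overview:

```python
# Hypothetical helper: build the message list used by ChatPromptTemplate
# with a custom system instruction (mirrors the structure in chatbot.py).
def build_prompt_messages(system_text):
    return [
        ("system", system_text),
        ("user", "Question: {question}"),
    ]

messages = build_prompt_messages("You are a concise technical assistant")
# These tuples can then be passed to ChatPromptTemplate.from_messages(messages).
```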
Q: How do I report issues or contribute to the project?
A: You can report issues or suggest improvements by opening an issue on the GitHub repository.
The chatbot can effectively answer a wide range of queries, demonstrating its versatility and adaptability to different topics.
The technologies used in this project, such as LangChain and Groq, are widely adopted in the AI community, lending credibility to the methods employed.
For any inquiries or feedback, feel free to reach out:
Kaletsidik Ayalew
alexkalalw@gmail.com