This publication was originally written in Portuguese. You're viewing an automated translation into English.
Chatbot with a Local LLM (LM Studio, LangChain, Python)
Project Description
This project presents a simple chatbot built in Python using the LangChain library for conversation orchestration. The key advantage of this chatbot is that it communicates with a Large Language Model (LLM) running locally on your machine, without needing an internet connection for inference (after the initial model download).
Running an LLM locally is possible thanks to LM Studio, an intuitive tool that lets you download and serve open-source language models (like Gemma-2-2B-IT, used here) directly on your computer, exposing an OpenAI-compatible API.
The main objective of this project is to demonstrate the ability to:
Set up a local AI development environment.
Integrate tools like LM Studio and LangChain.
Build an interactive chatbot application via terminal.
Explore the power of LLMs that can run on commodity hardware.
Features
Chatbot interaction via terminal.
Use of an LLM running locally (Gemma-2-2B-IT, configured via LM Studio).
Python virtual environment setup for dependency isolation.
Test function to check communication with the LM Studio LLM server.
Technologies Used
Python 3.9+
LM Studio: Tool to download and run LLMs locally.
LangChain: Framework for developing applications with LLMs.
langchain-openai: LangChain integration with OpenAI-compatible APIs (used to connect to LM Studio).
python-dotenv: To manage environment variables (LLM server URL).
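To show how these pieces fit together, here is a minimal sketch, not the project's actual code: `build_chat_kwargs` is an illustrative helper, the `api_key` value is a placeholder (LM Studio does not validate it), and it assumes `langchain-openai` is installed and the server is running on the default port.

```python
# Hedged sketch: connecting LangChain to LM Studio's OpenAI-compatible API.
def build_chat_kwargs(base_url: str, model: str) -> dict:
    """Arguments we would pass to langchain_openai.ChatOpenAI."""
    # LM Studio ignores the API key, but the client requires a non-empty one.
    return {"base_url": base_url, "api_key": "lm-studio", "model": model}

if __name__ == "__main__":
    # Requires `pip install langchain-openai` and a running LM Studio server.
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(**build_chat_kwargs("http://localhost:1234/v1", "gemma-2-2b-it"))
    print(llm.invoke("Hello!").content)
```

The network call is kept under the `__main__` guard so the helper can be reused (and tested) without a live server.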
Environment Configuration and Execution
Follow the steps below to get the chatbot running on your Windows 11 machine.
2. Downloading the Model in LM Studio
In the left sidebar, click on the "Search" tab (magnifying glass icon).
In the search field, type gemma-2-2b-it and press Enter.
Look for a GGUF version of the model (e.g., gemma-2-2b-it-q4_k_m.gguf). Models with Q4_K_M or Q5_K_M offer a good balance between performance and RAM consumption.
Click the "Download" button next to the version you've chosen. Wait for the download to complete (it may take a while depending on your connection).
3. Starting the Local LLM Server in LM Studio
With the downloaded model:
In LM Studio, go to the "Local Inference Server" tab (the icon with two arrows, one pointing up and one down).
In the left pane, make sure the model gemma-2-2b-it is selected in the dropdown.
Make sure the default port (1234) is set. Note the port if it is different.
Click the "Start Server" button.
You will see a message like "Serving on http://localhost:1234" if the server starts correctly.
Leave LM Studio running in the background while you configure Python.
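Once the server reports it is running, you can sanity-check it from Python before wiring up LangChain. This is a sketch using only the standard library; it assumes the OpenAI-compatible `GET /v1/models` endpoint that LM Studio exposes and the default port from the step above.

```python
import json
import urllib.request

def models_endpoint(base_url: str) -> str:
    # OpenAI-compatible servers expose GET /v1/models; LM Studio follows this.
    return base_url.rstrip("/") + "/models"

if __name__ == "__main__":
    # Only works while the LM Studio server is running on the default port.
    with urllib.request.urlopen(models_endpoint("http://localhost:1234/v1")) as resp:
        print(json.dumps(json.load(resp), indent=2))
```

If the request succeeds, the response lists the loaded model(s), confirming the server is reachable.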
4. Python Environment Setup
Clone this repository to your machine:
git clone https://github.com/SeuUsuario/NomeDoSeuRepositorio.git
cd NomeDoSeuRepositorio
(Replace SeuUsuario/NomeDoSeuRepositorio.git with the actual path to your repository).
Create a virtual environment (in the project root directory):
python -m venv .venv
Activate the virtual environment (on Windows):
.venv\Scripts\activate
You will see (.venv) at the beginning of the command line, indicating that the environment is activated.
5. Creating the file requirements.txt
Create a file named requirements.txt in the root of your project with the following content:
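Based on the libraries listed under Technologies Used, a minimal requirements.txt could look like this (versions unpinned; pin them if you need reproducibility):

```
langchain
langchain-openai
python-dotenv
```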
6. Installing Dependencies
With the virtual environment activated, install the necessary libraries:
pip install -r requirements.txt
7. Creating the file .env
Create a file called .env in the root of your project (at the same level as main.py).
Add the following line to the .env file, adjusting the port if it differs from the default (1234):
LM_STUDIO_BASE_URL=http://localhost:1234/v1
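Reading this variable in Python can be sketched as follows; `get_lm_studio_url` is an illustrative helper, not part of the project's main.py, and it falls back to the default URL above when the variable is not set.

```python
import os

def get_lm_studio_url() -> str:
    """Read the server URL from the environment, defaulting to the README's value."""
    # python-dotenv is optional here: if installed, it loads .env into os.environ.
    try:
        from dotenv import load_dotenv
        load_dotenv()
    except ImportError:
        pass  # fall back to plain environment variables
    return os.getenv("LM_STUDIO_BASE_URL", "http://localhost:1234/v1")
```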
8. Running the Chatbot
With LM Studio serving the model and the Python environment configured:
In your terminal, make sure you are in the project root folder and that the virtual environment is activated.
Run the main script:
python main.py
Interaction
The chatbot will first attempt to connect to the LLM. If the connection is successful, it will begin the conversation loop.
Type your questions and press Enter.
To exit, type exit and press Enter.
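The interaction loop described above can be sketched like this; `chat_loop` and `is_exit_command` are illustrative names rather than the project's actual code, and `llm` stands for any LangChain chat model (e.g. one pointed at LM Studio).

```python
def is_exit_command(user_input: str) -> bool:
    """True when the user wants to leave the chat (this README uses 'exit')."""
    return user_input.strip().lower() == "exit"

def chat_loop(llm) -> None:
    # Mirrors the terminal interaction described above: read, check for exit,
    # otherwise send the text to the model and print its reply.
    while True:
        user_input = input("You: ")
        if is_exit_command(user_input):
            print("Goodbye!")
            break
        response = llm.invoke(user_input)
        print(f"Bot: {response.content}")
```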
Project Structure
```
.
├── .env              # Environment variables (LLM server URL)
├── main.py           # Main chatbot script
├── requirements.txt  # Python dependency list
└── README.md         # This file
```