ChatbotIA is a proof-of-concept chatbot designed to run entirely for free on desktop devices equipped with GPUs by executing large language models (LLMs) locally, directly in the user's browser.
The chatbot uses WebLLM to load and execute the model locally, eliminating the need for external API calls and removing any per-token costs. This approach ensures zero inference cost, keeps conversations on the user's device, and lets the chatbot keep working once the model has been downloaded.
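As a minimal sketch of this approach (the model ID and function name are illustrative, not taken from the project's code), loading and querying a model with WebLLM looks roughly like this:

```typescript
// A minimal sketch using @mlc-ai/web-llm. The model ID is one of WebLLM's
// prebuilt options and is illustrative; ChatbotIA may ship a different one.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function runLocalChat(userMessage: string): Promise<string> {
  // First call downloads and caches the weights, then compiles the model
  // for the local GPU via WebGPU. Later calls reuse the cache.
  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f16_1-MLC");

  // OpenAI-style completion API, executed entirely in the browser.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: userMessage }],
  });
  return reply.choices[0].message.content ?? "";
}
```

Because the completion call executes inside the browser, nothing in this flow touches a server after the initial weight download.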
While the current version only supports desktop environments with GPU access, the project lays the groundwork for potential future hybrid implementations — where mobile devices could rely on external APIs (like OpenAI) while desktops continue using local inference to reduce token consumption and operational costs.
This makes ChatbotIA an ideal starting point for projects aiming to embed low-cost or zero-cost AI assistants into websites and browser-based applications.
The project is built on WebLLM, which handles model download, caching, and GPU inference directly in the browser; the demo is deployed on Vercel.
The entire chatbot logic runs locally, making API calls unnecessary on supported devices.
At present, ChatbotIA is designed exclusively for desktop devices with WebGPU support (generally modern laptops and desktop PCs with compatible graphics cards).
Mobile devices, tablets, and browsers lacking WebGPU support are not compatible with the current version.
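A desktop-only build can gate itself on a capability check like the following sketch; `navigator.gpu` is only defined in WebGPU-enabled browsers:

```typescript
// Sketch of a desktop-only capability gate. navigator.gpu exists only in
// WebGPU-enabled browsers; the cast avoids needing @webgpu/types installed.
async function isWebGPUAvailable(): Promise<boolean> {
  const gpu = (navigator as any).gpu;
  if (!gpu) return false;
  try {
    // Some browsers expose the API but return no adapter, e.g. when the
    // graphics hardware or driver is unsupported.
    const adapter = await gpu.requestAdapter();
    return adapter !== null;
  } catch {
    return false;
  }
}
```

Running this check up front lets the app show a clear "unsupported device" message instead of failing midway through model loading.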
When a desktop user opens the application, ChatbotIA performs the following (the loading step is sketched below):

1. Checks that the browser exposes WebGPU and can provide a GPU adapter.
2. Downloads the model weights on the first visit and caches them in browser storage.
3. Compiles and loads the model onto the GPU via WebLLM.
4. Handles all subsequent chat interactions locally, with no network requests.
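As a rough illustration of that startup sequence, the snippet below wires WebLLM's initialization to a progress indicator. It assumes web-llm's `initProgressCallback` option and an illustrative `status` element; ChatbotIA's actual UI wiring may differ.

```typescript
// Sketch of driving a loading indicator during startup, assuming web-llm's
// initProgressCallback option; the "status" element ID is illustrative.
import { CreateMLCEngine, type InitProgressReport } from "@mlc-ai/web-llm";

async function initEngineWithProgress() {
  const status = document.getElementById("status");
  return CreateMLCEngine("Llama-3.1-8B-Instruct-q4f16_1-MLC", {
    // Fires repeatedly while weights are fetched, cached, and compiled, so
    // the user sees progress instead of a frozen page on the first visit.
    initProgressCallback: (report: InitProgressReport) => {
      if (status) status.textContent = report.text;
    },
  });
}
```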
Since no API requests are made during chat interactions, this model offers zero marginal cost per message, latency that does not depend on a remote service, and privacy by default, since prompts never leave the machine.
Although the current implementation is desktop-only, the architecture could easily evolve into a hybrid system where desktops keep running inference locally while mobile devices, tablets, and unsupported browsers fall back to an external API such as OpenAI's.
This hybrid approach would minimize token costs by reserving cloud tokens for devices that cannot run models locally, while keeping desktops entirely free.
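A hedged sketch of that routing logic might look like the following; the `/api/chat` endpoint is hypothetical and not part of the current project:

```typescript
// Hybrid routing sketch: local inference where WebGPU exists, otherwise a
// cloud fallback. The /api/chat endpoint and model ID are illustrative.
import { CreateMLCEngine, type MLCEngine } from "@mlc-ai/web-llm";

type Message = { role: "user" | "assistant"; content: string };

// Created once and reused across messages; loading the model is expensive.
let enginePromise: Promise<MLCEngine> | null = null;

async function sendMessage(messages: Message[]): Promise<string> {
  if ((navigator as any).gpu) {
    // Desktop path: free, local inference via WebGPU.
    enginePromise ??= CreateMLCEngine("Llama-3.1-8B-Instruct-q4f16_1-MLC");
    const engine = await enginePromise;
    const reply = await engine.chat.completions.create({ messages });
    return reply.choices[0].message.content ?? "";
  }
  // Fallback path: spend cloud tokens only where local inference is impossible.
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
  });
  return (await res.json()).reply as string;
}
```

Keeping the engine in a module-level promise avoids re-downloading or re-compiling the model on every message.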
This flexibility makes ChatbotIA a strong candidate for products that need to reach every device class while keeping inference costs as low as possible.
The primary achievement of ChatbotIA is demonstrating that a fully functional chatbot can run locally in a browser without any reliance on external APIs, meaning no API keys to manage, no per-token fees, and no dependency on a provider's uptime or rate limits.
This makes the system particularly attractive for companies building products where per-user inference costs would otherwise grow with usage.
Once the LLM is loaded into the browser, the chatbot works offline, a critical advantage for environments with unreliable or restricted connectivity. This makes ChatbotIA especially useful wherever an assistant must keep responding without a network connection.
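To surface that offline readiness in a UI, web-llm exposes a `hasModelInCache` helper that reports whether the weights from an earlier session are already stored locally; treat the exact signature as an assumption based on the library's documentation:

```typescript
// Sketch: report whether the model is already cached from a previous
// session, i.e. whether the chatbot can start with no network at all.
// hasModelInCache is a web-llm helper; its exact signature is assumed here.
import { hasModelInCache } from "@mlc-ai/web-llm";

async function isOfflineReady(modelId: string): Promise<boolean> {
  return hasModelInCache(modelId);
}
```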
The deliberate limitation to desktop devices with GPUs ensures that ChatbotIA is optimized for environments where capable hardware is guaranteed, so inference stays responsive instead of degrading on underpowered devices.
Although the current scope excludes mobile and tablet devices, the future hybrid concept (local for desktop, cloud for mobile) remains viable for projects requiring broader coverage.
ChatbotIA demonstrates that running LLMs locally in the browser is a feasible and cost-effective approach for desktop users. By completely eliminating API calls and per-token fees, it opens the door to AI assistants whose operating cost does not scale with the number of messages.
While currently limited to desktop environments with GPUs, the system architecture could easily extend to a hybrid local+cloud model, enabling coverage of mobile and tablet users while keeping desktop traffic entirely free.
This blend of cost savings, offline support, and deployment flexibility makes ChatbotIA an ideal base for businesses seeking affordable, extensible AI solutions.
The full source code is available at:
👉 https://github.com/NatanaelAlexander/chatbotia
Try it here:
🔗 https://natanael-chat-bot-ia.vercel.app/
Feel free to reach out via GitHub if you’d like to contribute, request features, or explore additional use cases.