
Ensure your system meets the following requirements before running TalkLLM:

- A WebGPU-enabled browser (recommended: the latest version of Chrome, Edge, or Safari; a quick support check is sketched below)
- A device with WebAssembly (Wasm) support
- Node.js (v16 or above recommended)
- npm or yarn package manager
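If you are unsure whether your browser qualifies, a minimal check you can paste into the developer console (assuming any modern browser) is:

```js
// `navigator.gpu` is only defined when WebGPU is available;
// the global `WebAssembly` object indicates Wasm support.
console.log("WebGPU:", "gpu" in navigator ? "available" : "not available");
console.log("WebAssembly:", typeof WebAssembly === "object" ? "available" : "not available");
```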
```bash
git clone https://github.com/mahmud-r-farhan/TalkLLM.git
cd TalkLLM
npm install
```

Start the development server:

```bash
npm run dev
```
Open your browser and navigate to `http://localhost:5173`.
TalkLLM uses the `Llama-3.1-8B-Instruct-q4f32_1-MLC` model. The model is initialized entirely in-browser with:

```js
webllm.CreateMLCEngine(modelName, { initProgressCallback });
```
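As a rough sketch of the full flow (assuming the `@mlc-ai/web-llm` package and its OpenAI-style chat API, in an ES module context), initialization and a first completion might look like:

```js
import * as webllm from "@mlc-ai/web-llm";

const modelName = "Llama-3.1-8B-Instruct-q4f32_1-MLC";

// Called repeatedly while the model weights download and compile,
// so the UI can display loading progress.
const initProgressCallback = (report) => console.log(report.text);

const engine = await webllm.CreateMLCEngine(modelName, { initProgressCallback });

// All inference happens client-side; no request leaves the browser.
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(reply.choices[0].message.content);
```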
No server. No tracking. No external API.
Everything runs locally in your browser.
```
src/
├── App.jsx   # Main chat component
└── ui.scss   # Custom UI styling
```
You can switch to a different model by editing:

```js
const selectedModel = "Llama-3.1-8B-Instruct-q4f32_1-MLC";
```
Refer to the MLC Model Catalog for more available models.
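For example, to trade some quality for a faster first load, you might point `selectedModel` at a smaller catalog entry (the ID below is illustrative; confirm the exact name in the catalog):

```js
// Hypothetical example: verify this model ID against the MLC Model Catalog.
const selectedModel = "Phi-3.5-mini-instruct-q4f16_1-MLC";
```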
TalkLLM uses Sass for styling (`.scss` support). Install Sass if it is not already present:

```bash
npm install sass
# or
npm install sass-embedded
```
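Once installed, the dev server (assuming Vite, given the default `5173` port) compiles `.scss` files on import; a typical usage, given that `ui.scss` sits next to `App.jsx` in `src/`, is:

```js
// In src/App.jsx: importing the stylesheet lets the bundler
// compile it with the installed Sass compiler.
import "./ui.scss";
```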