ā Model-Agnostic ā Works with any Ollama model (Llama3, Qwen-2.5-Coder, Mistral, etc.).
ā Auto-Detection ā Detects and lists installed models without extra setup.
ā User-Friendly ā Chat-style interface for smooth interaction.
app.exe
).š Source Code: GitHub Repository
š Installation Guide: Setup Instructions
For inquiries, support, or contributions, reach out to:
contact@mohsinnawaz.one
Local LLMs like Ollama are powerful but require CLI knowledge, creating challenges for:
Despite the advancements in local LLMs, several challenges persist:
After testing existing Ollama CLI workflows, we identified key limitations:
To address these issues, our Local LLMs, Now With Buttons! introduces:
ā
One-Click Model Selection ā No need to remember model names.
ā
Intuitive Chat Interface ā Simplifies LLM interactions.
ā
Auto-Detection of Installed Models ā Instantly detects available LLMs.
ā
Real-Time Query Execution ā Allows users to run multiple queries seamlessly.
By bridging these gaps, our GUI empowers users with a more efficient and accessible way to leverage local LLMs. š
ā
One-click model selection ā Supports all Ollama-compatible models.
ā
Interactive Chat Interface ā Conversational responses with async processing.
ā
Zero Configuration ā Auto-detects installed models, no setup required.
Below is a preview of the LMS on Local GUI interface:
This project is actively maintained with regular updates and improvements. Contributions from the community are welcome via pull requests and issue reporting.
v1.0.0
Component | Technology Used |
---|---|
Language | Python 3.7+ |
GUI Framework | Tkinter |
Backend | Ollama CLI via subprocess |
Concurrency | Python threading |
Package Management | pip, virtualenv |
The dataset consists of various model interactions, including:
Model | Task Type | Key Metric | Comparison to GPT-4 |
---|---|---|---|
Llama3 | General Q&A | Response Accuracy | Matched GPT-4 (92%) |
Qwen-2.5-Coder | Python Debugging | Code Fix Success Rate | Outperformed GPT-4 (95% vs 88%) |
Feature | CLI (Ollama Terminal) | GUI (LMS on Local) |
---|---|---|
Ease of Use | Requires CLI knowledge | One-click model selection |
Learning Curve | ~15 mins for new users | Under 2 mins |
Execution Time | 2.1 mins avg | 1.30 mins avg (faster) |
Model Switching | Manual input required | Auto-detect & switch instantly |
Debugging Code | Manual checking | Inline response with corrections |
š©āš» Participants: 5 developers & researchers
š Tasks:
ā
40% faster task execution (GUI reduced task time from 2.1 mins ā 1.30 mins).
ā
New users learned the GUI 7x faster (under 2 minutes).
ā
90% preferred GUI over CLI for debugging and model switching.
ā ļø Limitations:
Metric | Value |
---|---|
Reported Time Savings | 10% faster workflows |
š¢ "Has others but this is fast!"
ā Python Developer, Professional Group
šØāš» Developers ā Rapidly test code snippets across models.
š¬ Researchers ā Compare LLM outputs side-by-side.
š Students ā Learn LLM capabilities without CLI anxiety.
š Local LLMs, Now With Buttons!, simplifies local LLM access by:
š¹ Model Management ā Install/delete models via GUI.
š¹ Multi-Model Chat ā Compare outputs side-by-side.
š¹ Logging & Export ā Allow saving chat history for later analysis.
š¹ Future updates ā Automated logging and performance analytics.
To ensure smooth operation and stability, the following monitoring practices are recommended:
# Install Ollama CLI & a model ollama install mistral # Run LMS on Local ./app.exe