<p align="center"> <img src="https://img.shields.io/badge/Python-3.9+-yellow?style=for-the-badge&logo=python" alt="Python Version"> <img src="https://img.shields.io/badge/flask-3.1.*+-red?style=for-the-badge&logo=flask&logoColor=white" alt="Flask Version"> <img src="https://img.shields.io/badge/LM_Studio-Compatible-informational?style=for-the-badge&logo=ai" alt="LM Studio Compatible"> </p>
A RESTful Flask service that integrates with a local Large Language Model (LLM) (via LM Studio) to aid in English teaching and learning. It offers grammar checks, word translations with contextual examples, and LLM service status checks.
Teacher English aims to simplify and streamline the grammar assessment and vocabulary consultation process for English learners and teachers. By integrating a local LLM, the project offers a self-contained and efficient solution for instant grammar feedback and vocabulary enrichment with practical examples, making learning and teaching more interactive and accessible.
Teacher English exposes the following API endpoints:
- **Grammatical Evaluation** (`/api/grammar/evaluate`): checks and corrects the grammar of an English sentence.
- **Word Translation and Context** (`/api/translate/word`): translates a word and provides contextual example sentences.
- **LLM Status** (`/api/llm/status`): reports whether the local LLM service is up and responding.
Follow the steps below to configure and run Teacher English on your machine.
Make sure you have the following installed:

- **Python 3.9+**
- **LM Studio**, with a local model such as `bartowski/llama-3.2-1b-instruct` or `phi-3-mini-4k-instruct-GGUF`. You can download them directly from LM Studio.

Clone the Repository:
```shell
git clone https://github.com/seu-usuario/teacher_english.git  # replace with your user/repository
cd teacher_english
```
Create and Activate the Virtual Environment:
```shell
python -m venv .venv

# On Windows (PowerShell):
.venv\Scripts\Activate.ps1
# On Windows (CMD):
.venv\Scripts\activate.bat
# On macOS / Linux:
source .venv/bin/activate
```
Install Dependencies:
```shell
pip install Flask python-dotenv pydantic openai
```
Or, if you have already generated a requirements.txt:
```shell
pip install -r requirements.txt
```
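If the project does not yet ship a `requirements.txt`, a minimal one matching the packages above could look like this (versions left unpinned; pin them as needed):

```text
Flask
python-dotenv
pydantic
openai
```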
Create the Environment Variables File (.env):
In the project root (teacher_english/), create a file named .env with the following content. Adjust LLM_MODEL_NAME to the exact name of the model you loaded into your LM Studio.
```shell
# .env
# Environment variables for the teacher_english application

# --- LLM settings ---
# Base URL of the LM Studio inference server.
# Usually localhost on port 1234.
LLM_BASE_URL=http://localhost:1234/v1

# API key for LM Studio. For local use, any string works.
LLM_API_KEY=lm-studio_api_key_local

# Name of the LLM model loaded in LM Studio.
# ADJUST THIS VALUE to the EXACT name of the model you loaded.
# E.g.: gemma-2-2b-it, lmstudio-community/bartowski/llama-3.2-1b-instruct, etc.
# You can find this name in LM Studio's 'Local Inference Server' tab.
LLM_MODEL_NAME=bartowski/llama-3.2-1b-instruct
```
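To illustrate what `python-dotenv` does with this file, here is a minimal stdlib-only sketch of `KEY=VALUE` parsing; the actual project should simply call `dotenv.load_dotenv()`, and the `parse_env` helper below is hypothetical:

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, ignoring comments and blank lines."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks, comments, and lines without an assignment.
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

# Same variables as the .env file above.
env_text = """
# --- LLM settings ---
LLM_BASE_URL=http://localhost:1234/v1
LLM_API_KEY=lm-studio_api_key_local
LLM_MODEL_NAME=bartowski/llama-3.2-1b-instruct
"""
config = parse_env(env_text)
print(config["LLM_BASE_URL"])  # http://localhost:1234/v1
```

In the real application these values end up as environment variables and are read with `os.getenv`.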
Create the __init__.py Files:
Make sure an `__init__.py` file (which may be empty) exists in each subdirectory of `src/` so that Python recognizes the structure as a package. The structure should be as follows:
```
teacher_english/
├── .venv/
├── src/
│   ├── __init__.py          <-- add here
│   ├── core/
│   │   ├── __init__.py      <-- add here
│   │   ├── entities.py
│   │   └── use_cases.py
│   ├── adapters/
│   │   ├── __init__.py      <-- add here
│   │   └── llm_adapter.py
│   ├── infrastructure/
│   │   ├── __init__.py      <-- add here
│   │   └── llm_api_client.py
│   ├── web/
│   │   ├── __init__.py      <-- add here
│   │   └── controllers.py
│   └── main.py
├── .env
├── requirements.txt
└── README.md
```
You can create these empty files using `touch __init__.py` (Linux/macOS) or `type nul > __init__.py` (Windows CMD) in each corresponding directory.
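On Linux/macOS, the whole set of markers can also be created in one pass from the project root. This is just a convenience sketch assuming the directory layout shown above:

```shell
# Ensure the package directories exist, then drop an empty
# __init__.py marker into each of them.
mkdir -p src/core src/adapters src/infrastructure src/web
for d in src src/core src/adapters src/infrastructure src/web; do
  touch "$d/__init__.py"
done
```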
Load your model into LM Studio (e.g. `bartowski/llama-3.2-1b-instruct`); save it to an SSD for better performance. Start the local inference server on port `1234` (or adjust the `LLM_BASE_URL` variable in your `.env` if it is different).

With your virtual environment activated and LM Studio running the local inference server, run the Flask application from the project root:
```shell
(.venv) python -m src.main
```
The application will be accessible at `http://127.0.0.1:5000/`.
All endpoints are prefixed with /api.
**POST** `/api/grammar/evaluate`

Request body:

```json
{ "phrase": "I has a new car and very happy." }
```
Example response:

```json
{
  "comentarios": "A frase foi corrigida para garantir concordância verbal e uso correto do adjetivo. 'Has' foi corrigido para 'have' e 'very happy' para 'very happy'.",
  "erros_encontrados": [
    "Concordância verbal (has -> have)",
    "Uso incorreto de advérbio/adjetivo (very happy -> very happy)"
  ],
  "frase_corrigida": "I have a new car and I am very happy.",
  "frase_original": "I has a new car and very happy.",
  "percentual_assertividade": 75.0
}
```
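The response shape lends itself to a typed model. Here is a hypothetical sketch using a stdlib dataclass (the project lists `pydantic` as a dependency and may use that instead; field names mirror the sample response, but the real `entities.py` may differ):

```python
from dataclasses import dataclass

@dataclass
class GrammarEvaluation:
    """Hypothetical schema for the /api/grammar/evaluate response."""
    frase_original: str
    frase_corrigida: str
    erros_encontrados: list
    comentarios: str
    percentual_assertividade: float

# Build an instance from values taken from the sample response above.
evaluation = GrammarEvaluation(
    frase_original="I has a new car and very happy.",
    frase_corrigida="I have a new car and I am very happy.",
    erros_encontrados=["Concordância verbal (has -> have)"],
    comentarios="A frase foi corrigida para garantir concordância verbal.",
    percentual_assertividade=75.0,
)
print(evaluation.frase_corrigida)  # I have a new car and I am very happy.
```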
**POST** `/api/translate/word`

Request body:

```json
{ "word": "apple" }
```
Example response:

```json
{
  "palavra_original": "apple",
  "sugestoes_frases": [
    "She loves to eat an apple every morning. (Ela adora comer uma maçã todas as manhãs.)",
    "The new store sells only the best apples. (A nova loja vende apenas as melhores maçãs.)",
    "An apple a day keeps the doctor away. (Uma maçã por dia mantém o médico afastado.)"
  ],
  "tipo_gramatical": "substantivo",
  "traducao_pt": "maçã"
}
```
**GET** `/api/llm/status`

Example response (server active):

```json
{
  "mensagem": "LLM está ativo e respondendo.",
  "modelo_carregado": "gemma-2-2b-it",
  "status": "ativo",
  "tempo_resposta_ms": 65.21
}
```
Example response (server unreachable):

```json
{
  "mensagem": "Não foi possível conectar ao servidor do LLM. Detalhes: [Detalhes do erro de conexão]",
  "modelo_carregado": null,
  "status": "inativo",
  "tempo_resposta_ms": null
}
```
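The status logic can be sketched as timing a probe call and mapping success or failure onto the two payloads above. The `probe_llm` callable here is a stand-in for a real request to the LM Studio server, and `build_status` is a hypothetical helper, not the project's actual function:

```python
import time

def build_status(probe_llm, model_name):
    """Return a status dict shaped like the sample responses above."""
    start = time.perf_counter()
    try:
        probe_llm()  # stand-in for a real call to the LM Studio server
    except Exception as exc:
        return {
            "status": "inativo",
            "modelo_carregado": None,
            "tempo_resposta_ms": None,
            "mensagem": f"Não foi possível conectar ao servidor do LLM. Detalhes: {exc}",
        }
    elapsed_ms = round((time.perf_counter() - start) * 1000, 2)
    return {
        "status": "ativo",
        "modelo_carregado": model_name,
        "tempo_resposta_ms": elapsed_ms,
        "mensagem": "LLM está ativo e respondendo.",
    }

ok = build_status(lambda: None, "gemma-2-2b-it")
print(ok["status"])  # ativo
```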