This project demonstrates how to run the DeepSeek-R1 Large Language Model (LLM) locally with Ollama and integrate it into a Spring Boot 3.4.2 application. No cloud APIs are required: everything runs locally, keeping inference fast and your data private.
The primary objective is to demonstrate a seamless integration of the DeepSeek-R1 model with Spring Boot through Ollama.
Here's what the chat UI looks like when the Angular frontend is running:
```bash
# Install Ollama and start the server
curl -fsSL https://ollama.com/install.sh | sh
ollama serve

# Pull the DeepSeek-R1 1.5B model
ollama pull deepseek-r1:1.5b
```
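Before building the app, you can optionally verify that the model answers by calling Ollama's HTTP API (`POST http://localhost:11434/api/generate`) directly. The snippet below is a throwaway Java check, not part of the repository; the class name and prompt are arbitrary.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sanity check: ask the locally pulled deepseek-r1:1.5b model for a completion
// via Ollama's /api/generate endpoint (non-streaming) and print the raw JSON reply.
public class OllamaSmokeTest {

    public static void main(String[] args) throws Exception {
        String body = """
                {"model": "deepseek-r1:1.5b", "prompt": "Say hello in one sentence.", "stream": false}
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body());
    }
}
```

If this prints a JSON document containing a `response` field, Ollama and the model are working.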
```bash
# Clone the repository
git clone https://github.com/HenryXiloj/demo-ollama-deepseek-r1
cd demo-ollama-deepseek-r1

# Clean, build, and run the application
mvn clean install spring-boot:run
```
Note: The Maven build will:
- Install Node.js and npm
- Install frontend dependencies
- Build the Angular frontend
- Compile and run the Spring Boot backend
```
demo-ollama-deepseek-r1/
├── angular-ui/
│   └── src/app/
│       ├── app.component.ts
│       ├── app.component.html
│       ├── app.config.ts
│       └── chat/
│           ├── chat.component.ts
│           ├── chat.component.html
│           └── chat.component.scss
│
└── src/main/java/com/henry/ollama/
    ├── Application.java
    ├── config/OllamaProperties.java
    ├── controller/ChatController.java
    ├── record/OllamaRequest.java
    ├── record/OllamaResponse.java
    └── service/OllamaService.java
```
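The key backend pieces are `ChatController`, which exposes `POST /api/chat` for plain-text prompts, and `OllamaService`, which forwards the prompt to the local Ollama server. The sketch below shows roughly how they could be wired together with Spring's `RestClient`; it is an illustration only, and the actual implementation in the repository (including the separate `OllamaRequest`/`OllamaResponse` record files and `OllamaProperties`) may differ.

```java
import org.springframework.http.MediaType;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestClient;

// Sketch only: the repository's real classes may be structured differently.
@RestController
@RequestMapping("/api")
class ChatController {

    private final OllamaService ollamaService;

    ChatController(OllamaService ollamaService) {
        this.ollamaService = ollamaService;
    }

    // Accepts a plain-text prompt and returns the model's answer as plain text.
    @PostMapping(value = "/chat", consumes = MediaType.TEXT_PLAIN_VALUE)
    String chat(@RequestBody String prompt) {
        return ollamaService.generate(prompt);
    }
}

@Service
class OllamaService {

    // In the repo these live in their own files (record/ package);
    // they are inlined here to keep the sketch self-contained.
    record OllamaRequest(String model, String prompt, boolean stream) {}
    record OllamaResponse(String response) {}

    private final RestClient restClient = RestClient.create("http://localhost:11434");

    String generate(String prompt) {
        // Non-streaming call to Ollama's /api/generate endpoint.
        OllamaResponse result = restClient.post()
                .uri("/api/generate")
                .body(new OllamaRequest("deepseek-r1:1.5b", prompt, false))
                .retrieve()
                .body(OllamaResponse.class);
        return result != null ? result.response() : "";
    }
}
```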
After starting the application, you can test the chat endpoint:
```bash
curl -X POST -H "Content-Type: text/plain" \
  -d "Explain AI in simple terms" \
  http://localhost:8080/api/chat
```
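The same test can be scripted from Java if curl is not available; this throwaway snippet (not part of the repository) sends the same request:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Posts a plain-text prompt to the Spring Boot chat endpoint and prints the reply.
public class ChatEndpointClient {

    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/chat"))
                .header("Content-Type", "text/plain")
                .POST(HttpRequest.BodyPublishers.ofString("Explain AI in simple terms"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body());
    }
}
```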
If Ollama runs inside WSL and must be reachable from Windows (or another machine), expose port 11434. On the Linux/WSL side:

```bash
# Open the Ollama port in the firewall
sudo ufw enable
sudo ufw allow 11434

# Stop the systemd-managed instance and check nothing else holds the port
sudo systemctl stop ollama
sudo lsof -i :11434

# Restart Ollama bound to all interfaces so it is reachable from outside WSL
export OLLAMA_HOST=0.0.0.0
ollama serve
```
Then, on the Windows side (PowerShell as Administrator), forward the port to the WSL instance and allow it through the firewall:

```powershell
$wsl_ip = (wsl hostname -I).Split()[0]
netsh interface portproxy add v4tov4 listenport=11434 listenaddress=0.0.0.0 connectport=11434 connectaddress=$wsl_ip
New-NetFirewallRule -DisplayName "Ollama-WSL" -Direction Inbound -Protocol TCP -LocalPort 11434 -Action Allow
```
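With Ollama reachable on a non-default address, the backend should not hardcode `http://localhost:11434`. The tree above lists `config/OllamaProperties.java`; below is a minimal sketch of what such a properties holder might look like. The property names and example values are assumptions, not necessarily the repository's actual keys.

```java
import org.springframework.boot.context.properties.ConfigurationProperties;

// Binds e.g. ollama.base-url=http://<host>:11434 and ollama.model=deepseek-r1:1.5b
// from application.properties/application.yml. Enable it with
// @ConfigurationPropertiesScan (or @EnableConfigurationProperties) on the application class.
@ConfigurationProperties(prefix = "ollama")
public record OllamaProperties(String baseUrl, String model) {
}
```

The service can then build its HTTP client from `baseUrl()` instead of a fixed localhost address.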
Author

Created with ❤️ by Henry Xiloj

GitHub Repo: github.com/HenryXiloj/demo-ollama-deepseek-r1
Blog: jarmx.blogspot.com