This project is an AI-powered application built to recognize American Sign Language (ASL) alphabet gestures in real time. Using Python, OpenCV, and a custom-trained machine learning model, the app captures hand gestures via webcam, processes them, and displays the recognized letters on screen.
Check out the demo video of the ASL Recognition app in action:
My goal was to create an intuitive ASL alphabet recognition tool that could make American Sign Language more accessible to everyone. Although I started this project to push my machine learning skills, it soon became a meaningful way to contribute to accessible tech. This app interprets ASL alphabet gestures with decent real-time accuracy and responsiveness. While it's not perfect, I see this as a stepping stone toward a fully robust ASL recognition solution.
- **Data Capture:** OpenCV captures frames from the webcam, which are then fed into the model for analysis.
- **Hand Landmark Extraction:** Each frame is processed to isolate hand landmarks, and I implemented a normalization step to keep the x and y coordinates consistent across hand positions and sizes. This preprocessing step significantly improved the model's accuracy by making gesture inputs more uniform (see the first sketch after this list).
- **Real-Time Processing Optimization:** To achieve low-latency, real-time performance, I optimized the data flow from capture to prediction to minimize computational load (see the second sketch after this list).
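Since the actual model and training code aren't part of this repository (see the note below), the following is only a minimal sketch of how such a capture → landmark → normalize → predict loop might look, assuming MediaPipe Hands for landmark extraction and a scikit-learn-style classifier loaded with joblib; the library choices and the model filename are assumptions, not the project's actual code:

```python
import cv2
import joblib
import mediapipe as mp
import numpy as np

model = joblib.load("asl_model.pkl")  # hypothetical filename; the real model isn't in this repo
mp_hands = mp.solutions.hands

def normalize_landmarks(hand_landmarks):
    """Scale x/y coordinates to the hand's bounding box so position and hand size don't matter."""
    xs = [lm.x for lm in hand_landmarks.landmark]
    ys = [lm.y for lm in hand_landmarks.landmark]
    x_min, x_range = min(xs), (max(xs) - min(xs)) or 1.0
    y_min, y_range = min(ys), (max(ys) - min(ys)) or 1.0
    features = []
    for lm in hand_landmarks.landmark:
        features.append((lm.x - x_min) / x_range)
        features.append((lm.y - y_min) / y_range)
    return np.array(features).reshape(1, -1)

cap = cv2.VideoCapture(0)  # default webcam
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            features = normalize_landmarks(results.multi_hand_landmarks[0])
            letter = str(model.predict(features)[0])
            cv2.putText(frame, letter, (30, 60),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
        cv2.imshow("ASL Recognition", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```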
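On the optimization side, two common ways to keep latency down (again illustrative assumptions, not necessarily what the shipped app does) are lowering the capture resolution and running inference only every Nth frame while still drawing every frame. Here, `predict_letter` is a hypothetical helper wrapping the landmark and classification steps from the previous sketch:

```python
import cv2

cap = cv2.VideoCapture(0)
# Smaller frames mean less work per frame for both landmark extraction and classification.
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

PREDICT_EVERY_N = 3  # run the model on every 3rd frame only
frame_idx, last_letter = 0, ""

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % PREDICT_EVERY_N == 0:
        # predict_letter is a hypothetical helper wrapping the landmark
        # extraction + classification steps from the previous sketch.
        last_letter = predict_letter(frame)
    # The overlay is redrawn every frame, so the display stays smooth
    # even though inference runs at a fraction of the frame rate.
    cv2.putText(frame, last_letter, (30, 60),
                cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
    cv2.imshow("ASL Recognition", frame)
    frame_idx += 1
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```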
To protect the integrity of this project and its future development, details about the specific model and training code have not been included in this repository. If you're interested in testing the full functionality of the app or exploring potential collaborations, please contact me at maximemartin510@gmail.com.
- `backend/`: Django project files for the API and model logic (an illustrative endpoint sketch follows this list).
- `frontend/`: HTML, CSS, and JavaScript files for the UI.
- `nginx/`: Configuration files to manage static files and improve server performance.
- `docker-compose.yml`: Docker setup to manage the project containers.
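The API itself isn't documented in this repository, so purely as an illustration of the backend/frontend split, here is one shape a Django prediction endpoint could take; the module path, payload format, and model filename are all hypothetical:

```python
# backend/api/views.py -- hypothetical module path; illustrative only.
import json

import joblib
import numpy as np
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

model = joblib.load("asl_model.pkl")  # hypothetical filename; the real model isn't in this repo

@csrf_exempt
def predict(request):
    """Accept normalized landmark features as JSON and return the predicted letter."""
    payload = json.loads(request.body)
    features = np.array(payload["landmarks"]).reshape(1, -1)
    letter = str(model.predict(features)[0])
    return JsonResponse({"letter": letter})
```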
Contributions are welcome! If you'd like to improve the model, the UI, or any other part of the project, feel free to open an issue or submit a pull request to get involved.
Creating this ASL recognition tool was both challenging and rewarding. Initially, it seemed straightforward, but real-time recognition required optimizing the model to reduce frame lag and improve accuracy. Normalizing hand landmarks was a critical breakthrough, allowing the model to handle different hand sizes and gestures consistently. Although the app performs well now, I’d like to keep improving it, especially the UI and extending its recognition capabilities beyond the alphabet.
Real-time machine learning doesn’t always require powerful hardware; it’s about carefully optimizing each part of the algorithm. Working on this has deepened my interest in AI and accessibility tech.
My next steps include refining the UI, optimizing the model further, and potentially expanding recognition to full ASL words and phrases.
Let’s connect if you’re interested in accessible tech or just want to chat about machine learning!
This project is licensed under the MIT License, encouraging open collaboration and sharing.