This repository contains the code and resources for an American Sign Language (ASL) Recognition System, which uses computer vision to recognize ASL gestures in real time. The project covers data preprocessing, model training, and deployment for real-time recognition through a graphical user interface.

The system uses deep learning to classify ASL gestures from live video input across 29 classes: the letters A-Z plus "delete," "nothing," and "space." The repository provides the complete pipeline, from data preprocessing through model training to deployment, making it easy to reproduce and extend.
Repository structure:

.
├── asl_model.h5            # Trained model
├── labels.txt              # Labels for ASL gestures
├── data_preprocessing.py   # Script for data preprocessing
├── model_transform.py      # Helper functions for model transformations
├── model_training.py       # Model training script
├── test.py                 # Script for testing the model
├── real_time_detection.py  # Real-time detection implementation
├── model_GUI.py            # GUI for gesture recognition
├── requirements.txt        # Python dependencies
└── LICENSE                 # License information
1) Clone the repository:
git clone https://github.com/idaraabasiudoh/asl-recognition-system.git
cd asl-recognition-system
2) Install the dependencies (see the sample dependency list after these steps):
pip install -r requirements.txt
3) Download any necessary datasets for training and preprocessing.
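The concrete pins for step 2 live in requirements.txt. For reference, a dependency set along these lines would cover the scripts in this repository; this is an illustrative list, not the contents of the actual file:

```text
tensorflow
opencv-python
numpy
```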
Run the preprocessing script to prepare your dataset:
python data_preprocessing.py
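The exact preprocessing steps live in data_preprocessing.py; the sketch below shows a typical pipeline for an image-folder ASL dataset. The directory layout (dataset/<class_name>/<image>.jpg), image size, and output file are assumptions, not the script's actual behavior:

```python
# Hypothetical preprocessing sketch; the real data_preprocessing.py may differ.
# Assumes a dataset laid out as dataset/<class_name>/<image>.jpg.
import os
import cv2
import numpy as np

IMG_SIZE = 64          # assumed model input resolution
DATA_DIR = "dataset"   # assumed dataset root

def load_dataset(data_dir=DATA_DIR):
    class_names = sorted(
        d for d in os.listdir(data_dir)
        if os.path.isdir(os.path.join(data_dir, d))
    )
    images, labels = [], []
    for idx, name in enumerate(class_names):
        class_dir = os.path.join(data_dir, name)
        for fname in os.listdir(class_dir):
            img = cv2.imread(os.path.join(class_dir, fname))
            if img is None:  # skip unreadable files
                continue
            img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
            images.append(img.astype("float32") / 255.0)  # scale to [0, 1]
            labels.append(idx)
    return np.array(images), np.array(labels), class_names

if __name__ == "__main__":
    X, y, names = load_dataset()
    np.savez_compressed("asl_data.npz", X=X, y=y)  # assumed output artifact
    print(f"Saved {len(X)} samples across {len(names)} classes")
```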
Train the ASL recognition model using:
python model_training.py
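model_training.py is what produces asl_model.h5. A minimal Keras loop consistent with the 29 classes in labels.txt might look like the following; the architecture, hyperparameters, and the asl_data.npz input are assumptions carried over from the preprocessing sketch above:

```python
# Hypothetical training sketch; the real model_training.py may differ.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 29  # A-Z plus delete, nothing, space
IMG_SIZE = 64     # assumed to match preprocessing

data = np.load("asl_data.npz")  # assumed preprocessing output
X, y = data["X"], data["y"]

# A small CNN: two conv/pool stages followed by a dense classifier.
model = keras.Sequential([
    layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)
model.save("asl_model.h5")  # same artifact name the repository ships
```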
Evaluate the model's performance:
python test.py
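What test.py measures is defined by the script itself; a plausible shape is to load the saved model and score it on held-out samples, as sketched below (the held-out slice is an assumption):

```python
# Hypothetical evaluation sketch; the real test.py may differ.
import numpy as np
from tensorflow import keras

data = np.load("asl_data.npz")       # assumed preprocessing output
X, y = data["X"], data["y"]
X_test, y_test = X[-500:], y[-500:]  # assumed held-out slice

model = keras.models.load_model("asl_model.h5")
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print(f"Test accuracy: {accuracy:.2%}")
```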
Launch the GUI for real-time gesture detection:
python model_GUI.py
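model_GUI.py and real_time_detection.py share the same core idea: grab webcam frames, preprocess them the same way the training data was preprocessed, and overlay the predicted label. A minimal OpenCV-only loop is sketched below; the frame preprocessing, the one-label-per-line labels.txt format, and the window details are assumptions:

```python
# Hypothetical real-time loop; the real real_time_detection.py and
# model_GUI.py may differ in preprocessing and UI details.
import cv2
import numpy as np
from tensorflow import keras

IMG_SIZE = 64  # assumed to match training

# Assumes one label per line in labels.txt; adjust if comma-separated.
with open("labels.txt") as f:
    labels = [line.strip() for line in f if line.strip()]

model = keras.models.load_model("asl_model.h5")
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and scale the frame the same way as training data (assumed).
    inp = cv2.resize(frame, (IMG_SIZE, IMG_SIZE)).astype("float32") / 255.0
    probs = model.predict(inp[np.newaxis], verbose=0)[0]
    pred = labels[int(np.argmax(probs))]
    cv2.putText(frame, pred, (10, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("ASL Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```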
The labels.txt file contains the gesture classes:
A, B, C, ..., Z, delete, nothing, space
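This README does not pin down whether the file stores one label per line or a single comma-separated line, so a tolerant loader (a sketch, not the repository's parser) can handle either layout:

```python
# Hypothetical labels.txt loader; tolerates one-per-line or
# comma-separated layouts since the exact format may vary.
def load_labels(path="labels.txt"):
    with open(path) as f:
        text = f.read()
    # Split on commas and newlines, dropping empty tokens and whitespace.
    return [tok.strip() for tok in text.replace("\n", ",").split(",") if tok.strip()]

labels = load_labels()
print(len(labels), "classes, e.g.", labels[:3])
```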
This project is licensed under the terms of the MIT License. See the LICENSE file for details.
Contributions are welcome! Feel free to fork the repository and submit a pull request with your enhancements.
Author: idaraabasiudoh