In a world of rapid technological advancement, addressing the communication needs of people who are deaf or mute is crucial. Leveraging the power of deep learning and computer vision, I aim to contribute to this cause with an innovative solution.
Gesture-AI focuses on real-time gesture detection and classification, enabling seamless translation of hand signs and gestures into audio or text. This application has the potential to transform communication for individuals with hearing or speech impairments.
Real-time Gesture Detection: The Gesture-AI system instantly recognizes and interprets hand gestures, ensuring immediate and accurate communication.
Hand Sign Translation: Deaf or mute individuals can express themselves through hand signs, which are then translated into audio or text for the public.
Accessibility Enhancement: Striving to provide a tool that empowers deaf and mute individuals in their day-to-day lives, breaking down communication barriers.
Automatic Editors: Extending the capabilities of Gesture-AI, the aim is to develop automatic text editors activated by hand gestures, further enhancing user convenience.
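The detection-and-translation pipeline described above can be sketched roughly as follows. This is only a minimal, rule-based illustration, not the project's actual model: it assumes MediaPipe-style hand landmarks (21 (x, y) points per hand, with y increasing downward) and a toy classifier that counts extended fingers, where the real system would use a trained deep-learning model.

```python
# Toy gesture classifier over MediaPipe-style hand landmarks (an assumption,
# not the repository's actual model). Each landmark is an (x, y) pair;
# index 8/12/16/20 are fingertip points, 6/10/14/18 the middle joints.
FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tips
FINGER_PIPS = [6, 10, 14, 18]   # corresponding middle (PIP) joints

def count_extended_fingers(landmarks):
    """A finger counts as extended when its tip sits above its middle joint
    (smaller y means higher in image coordinates)."""
    return sum(
        1 for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)
        if landmarks[tip][1] < landmarks[pip][1]
    )

def classify_gesture(landmarks):
    """Map one landmark set to a label that could then be rendered
    as on-screen text or passed to a text-to-speech engine."""
    extended = count_extended_fingers(landmarks)
    return {0: "fist", 1: "one", 2: "two", 4: "open palm"}.get(extended, "unknown")
```

In the real application the landmarks would come from live camera frames, and the resulting label would feed the audio/text translation step.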
My primary goal is to make communication more accessible for the deaf and mute community. By developing Gesture-AI, I hope to:
Enable seamless communication between individuals with hearing or speech impairments and the general public.
Provide a practical and efficient tool for everyday communication and interactions.
Bridge the gap between deaf or mute individuals and businesses, such as restaurants and shopping malls, that lack specialized communication tools.
To try out Gesture-AI, the real-time gesture detection and translation application, follow these steps:
Clone the repository: git clone https://github.com/Zarak-shah-ji/gesture-ai.git
Install the required dependencies: pip install -r requirements.txt
Run the application: python main.py
OR
Directly download asl-signing-reco-1.ipynb and open it in Google Colab or Jupyter Notebook
Start the kernel
Run all cells.
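Once the application is running, each camera frame is classified and translated. The per-frame inference step might look roughly like the sketch below; the label set, the preprocessing, and the `model` interface here are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

# Hypothetical label set; the real project would load its own classes.
LABELS = ["hello", "thanks", "yes", "no"]

def predict_label(frame, model):
    """Preprocess one image frame and return the predicted sign label.

    `model` is any callable that maps a batch of images to class
    probabilities (e.g. a trained Keras/PyTorch model wrapped in a function).
    """
    x = frame.astype("float32") / 255.0   # scale pixel values to [0, 1]
    x = x[np.newaxis, ...]                # add a batch dimension
    probs = model(x)                      # shape (1, num_classes)
    return LABELS[int(np.argmax(probs))]
```

The returned label can then be displayed as text or spoken aloud by a text-to-speech engine, completing the sign-to-audio translation loop.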
I am committed to further developing Gesture-AI and exploring additional features:
Improving the accuracy and robustness of gesture detection and classification.
Integrating automatic text editors for smoother translation and communication.
Developing mobile applications for increased accessibility.