Gesture Bridge is an application that enables two-way communication between hearing-impaired and hearing users by converting:
- ✋ Sign Language → Text
- 🎙️ Voice → Sign Language (GIFs & Letter Visuals)
The system bridges the communication gap using Computer Vision, Speech Recognition, and Deep Learning.
Gesture Bridge integrates real-time hand gesture recognition with voice-to-sign visualization to create an inclusive communication platform.
- Uses a camera to detect hand landmarks and recognize gestures.
- Converts recognized gestures into readable text.
- Converts spoken words into sign language GIFs or letter-by-letter visual actions.
- Designed with a simple GUI for easy interaction.
- Real-time hand gesture detection using MediaPipe.
- ASL/ISL gesture classification using a Deep Learning model.
- Displays detected gesture labels on-screen.
- Supports dataset creation and model retraining.
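Before classification, the 21 MediaPipe hand landmarks are typically converted into a normalized feature vector. The sketch below illustrates that preprocessing step; the function name and exact normalization scheme (wrist-relative coordinates scaled to [-1, 1]) are assumptions modeled on common keypoint-classifier pipelines, not necessarily this repository's code.

```python
import numpy as np

def preprocess_landmarks(landmarks):
    """Convert 21 (x, y) hand landmarks into a normalized feature vector.

    Coordinates are made relative to the wrist (landmark 0) and scaled by
    the largest absolute value, making the vector roughly translation- and
    scale-invariant.
    """
    pts = np.asarray(landmarks, dtype=np.float32)   # shape (21, 2)
    pts = pts - pts[0]                              # wrist-relative coordinates
    flat = pts.flatten()
    max_abs = np.max(np.abs(flat))
    if max_abs > 0:
        flat /= max_abs                             # scale to [-1, 1]
    return flat                                     # shape (42,)

# Example: a dummy hand with the wrist at (100, 200)
demo = [(100 + i, 200 + 2 * i) for i in range(21)]
vec = preprocess_landmarks(demo)
print(vec.shape)  # (42,)
```

The resulting 42-element vector is what a keypoint classifier would consume in place of raw pixel data.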
- Converts live voice input into text using Speech Recognition.
- If the detected text matches a predefined dictionary phrase:
  - Displays the corresponding Sign Language GIF.
- If not:
  - Breaks the word into letters.
  - Displays letter-by-letter sign images sequentially.
- Automatically exits when the user says “goodbye”.
- Simple and interactive GUI using EasyGUI.
- Central menu with two modes:
  - Sign to Text
  - Voice to Sign
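The phrase-or-fingerspell decision above can be sketched as a small dispatcher. This is an illustrative simplification: the function name and phrase set are hypothetical, standing in for the app's GIF dictionary.

```python
def plan_output(text, gif_phrases):
    """Decide what to display for a recognized utterance.

    Returns ("gif", phrase) when the whole phrase has a predefined GIF,
    otherwise ("letters", [chars]) for letter-by-letter display.
    """
    text = text.lower().strip()
    if text in gif_phrases:
        return ("gif", text)
    # Fall back to fingerspelling: keep only alphabetic characters
    return ("letters", [c for c in text if c.isalpha()])

phrases = {"hello", "thank you", "how are you"}
print(plan_output("Thank You", phrases))   # ("gif", "thank you")
print(plan_output("cat", phrases))         # ("letters", ["c", "a", "t"])
```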
```
Gesture Bridge
│
├── Sign to Text
│   └── Camera → Hand Detection → Gesture Classification → Text Output
│
├── Voice to Sign
│   └── Speech → Text
│       ├── If phrase in dictionary → Display GIF
│       ├── Else → Display letter visuals sequentially
│       └── If “goodbye” → Exit
│
└── Exit
```
- Python 3.10
- OpenCV – Camera handling & visualization
- MediaPipe – Hand landmark detection
- TensorFlow – Gesture classification model
- SpeechRecognition – Voice input processing
- EasyGUI – GUI menu and dialogs
- Pillow – Image & GIF handling
- NumPy, Pandas, Scikit-learn
- Matplotlib, Seaborn – Visualization & debugging
Create a virtual environment and install dependencies using:
```
numpy==1.26.4
opencv-contrib-python==4.8.1.78
mediapipe==0.10.9
tensorflow-intel==2.16.1
protobuf==3.20.3
Pillow
pandas
scikit-learn
matplotlib
seaborn
easygui
SpeechRecognition
```
⚠️ Note: `tkinter` comes pre-installed with Python on Windows and should NOT be added to `requirements.txt`.
```
git clone https://github.com/Bhavya0420/Gesture_Bridge.git
cd Gesture_Bridge
python -m venv venv
venv\Scripts\activate
pip install --upgrade pip
pip install -r requirements.txt
python app.py
```
- Select Sign to Text from the main menu.
- Show hand gestures in front of the camera.
- Detected gestures will be classified and displayed as text.
- Press ESC to exit.
- Select Voice to Sign.
- Choose Live Voice.
- Speak clearly into the microphone.
- Output:
  - Phrase GIF (if predefined).
  - Letter-by-letter sign images (if not).
- Say “goodbye” to exit automatically.
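The listen-until-“goodbye” flow can be sketched independently of the microphone: the loop below consumes recognized phrases from any source (in the real app, the SpeechRecognition library supplies them) and stops on the exit keyword. All names here are illustrative, not the app's actual API.

```python
def voice_to_sign_loop(phrase_source, handle):
    """Process recognized phrases until the user says 'goodbye'."""
    for phrase in phrase_source:
        phrase = phrase.lower().strip()
        if phrase == "goodbye":
            break                 # automatic exit keyword
        handle(phrase)            # display GIF or letter visuals

shown = []
voice_to_sign_loop(iter(["hello", "cat", "goodbye", "ignored"]), shown.append)
print(shown)  # ['hello', 'cat']
```

Keeping recognition and display behind this simple interface makes the exit behavior easy to test without audio hardware.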
```
Gesture_Bridge/
│
├── app.py
├── sign_to_text.py
├── voice_to_sign.py
│
├── assets/
│   ├── logo.png
│   ├── voicetosign.png
│   ├── letters/
│   └── ISL_Gifs/
│
├── model/
│   ├── dataset/
│   └── keypoint_classifier/
│
├── utils/
│   └── cvfpscalc.py
│
├── requirements.txt
└── README.md
```
You can retrain the gesture classifier using your own dataset:
- Capture hand landmarks using the keyboard shortcuts:
  - `k` → Manual logging mode
  - `d` → Dataset-based logging mode
- Data is stored in `model/keypoint_classifier/keypoint.csv`.
- Train using the provided Jupyter Notebook.
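As a rough illustration of the training step, rows like those in the keypoint CSV (a label plus 42 normalized features) can be fit with a small classifier. The actual notebook trains a TensorFlow model, so the scikit-learn model and synthetic data below are only a stand-in to show the shape of the workflow.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for model/keypoint_classifier/keypoint.csv:
# each row is a gesture label plus 42 normalized keypoint features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 42))
y = (X[:, 0] > 0).astype(int)          # two toy gesture classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

With real landmark data, the trained model's predictions would replace the classifier used in the Sign to Text mode.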
- Support for full sentence sign animation
- Multilingual speech input
- Mobile & web-based deployment
- Enhanced dataset for more gestures
- Real-time text-to-sign avatar
Bhavya Sree 🔗 GitHub: Bhavya0420