RealSign is a computer vision application that translates sign language gestures into text. Built on the YOLOv11 architecture, the system provides real-time inference suitable for accessibility tools and communication interfaces.
- Application Name: RealSign
- Model Architecture: YOLOv11 (Nano)
- Deployment Environment: Streamlit / Hugging Face Spaces
- Primary Function: Real-time detection and recognition of hand gestures representing the letters A-Z
- Real-Time Inference: Low-latency processing of video frames for immediate feedback.
- Dual Input Modes: Supports both direct webcam feed and static image file uploads.
- Confidence Metrics: Displays probability scores for each detection to ensure reliability.
- Responsive Interface: Professional UI design adaptable to various display resolutions.
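The confidence scores mentioned above can also be used to gate unreliable predictions before they reach the user. A minimal sketch of such a filter, assuming detections arrive as (letter, score) pairs; the function name and threshold here are illustrative, not RealSign's actual code:

```python
# Illustrative confidence gate for gesture detections (not RealSign's
# actual implementation). A detection is kept only if its score clears
# the threshold.

def filter_detections(detections, min_confidence=0.5):
    """Keep (label, confidence) pairs whose confidence meets the threshold.

    `detections` is assumed to be a list of (label, confidence) tuples,
    e.g. the per-frame output of an object detector.
    """
    return [(label, score) for label, score in detections
            if score >= min_confidence]

# Example frame: the low-confidence "B" is dropped, "A" and "C" remain.
frame_detections = [("A", 0.92), ("B", 0.31), ("C", 0.74)]
reliable = filter_detections(frame_detections)
```

Raising the threshold trades recall for precision, which matters when misread letters would corrupt the transcribed text.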
The system is built upon a modular Python stack:
- Core Inference Engine: Ultralytics YOLOv11 trained on a dataset of 87,000+ annotated images.
- Frontend Framework: Streamlit for web-based rendering and state management.
- Image Processing: OpenCV and PIL for matrix manipulation and pre-processing.
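Because the model's output classes correspond to the letters A-Z, post-processing largely reduces to mapping a predicted class index to a character. A minimal sketch, assuming the 26 classes are indexed 0-25 in alphabetical order (an assumption about the training labels, not something this README specifies):

```python
def class_to_letter(class_id: int) -> str:
    """Map a detector class index to its letter, assuming 0 -> 'A' ... 25 -> 'Z'."""
    if not 0 <= class_id <= 25:
        raise ValueError(f"expected class index in [0, 25], got {class_id}")
    return chr(ord("A") + class_id)

# Example: turn a sequence of per-frame class predictions into text.
predicted_ids = [7, 8]
text = "".join(class_to_letter(i) for i in predicted_ids)  # "HI"
```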
- Python 3.9 or higher
- pip package manager
- Clone the Repository

  ```bash
  git clone https://github.com/YOUR_USERNAME/RealSign.git
  cd RealSign
  ```

- Install Dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Execute the Application

  ```bash
  streamlit run app.py
  ```
This application is configured for deployment on containerized environments such as Hugging Face Spaces.
- System Dependencies: Requires `libgl1` (configured in `packages.txt`).
- Python Dependencies: Listed in `requirements.txt`.
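For reference, the two dependency files might look like the following. Only `libgl1` in `packages.txt` is specified by this README; the Python package list is an illustrative guess based on the stack described above:

```text
# packages.txt — system packages installed in the Space's container
libgl1

# requirements.txt — Python packages (illustrative, unpinned)
streamlit
ultralytics
opencv-python-headless
pillow
```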
This project is distributed under the MIT License.