A gesture-based, contactless volume control system that utilizes computer vision and machine learning techniques to track hand movements in real-time and adjust system audio accordingly. The system leverages OpenCV, MediaPipe, and Pycaw to achieve accurate hand tracking and seamless user interaction with the audio settings.
- Real-time Hand Tracking: Detects and tracks hand movements using MediaPipe’s powerful hand detection model.
- Gesture-based Volume Control: Adjust the system’s audio volume by moving your hand. The distance between your thumb and index finger controls the volume level.
- Windows Compatibility: Works on Windows systems, where Pycaw can access the system audio controls.
- Contactless Interaction: Provides a touch-free method for adjusting audio volume, useful in situations requiring hygiene or convenience.
Here’s a demo of the system in action:
[https://www.linkedin.com/posts/harmeetkaur04_innovation-computervision-machinelearning-activity-7233705243224502273-7SIQ?utm_source=share&utm_medium=member_desktop]
- Hand Detection: The system captures video input from a webcam and uses MediaPipe to detect hand landmarks.
- Distance Measurement: It calculates the distance between the thumb and index finger to determine the volume level.
- Volume Adjustment: The calculated distance is mapped to system volume levels using Pycaw.
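The distance measurement and mapping steps above can be sketched as follows. The pinch bounds (`d_min`, `d_max`) and helper names are illustrative assumptions, not the project's actual code; in the real pipeline the two points would come from MediaPipe's thumb-tip and index-tip landmarks, and the resulting scalar would be handed to Pycaw.

```python
import math

def fingertip_distance(thumb, index):
    """Euclidean distance between two (x, y) landmark points in pixels."""
    return math.hypot(index[0] - thumb[0], index[1] - thumb[1])

def distance_to_volume(distance, d_min=30, d_max=250):
    """Linearly map a pixel distance onto a 0.0-1.0 volume scalar.

    d_min/d_max are assumed pinch bounds in pixels; tune them for your
    camera resolution and typical hand-to-camera distance.
    """
    clamped = max(d_min, min(d_max, distance))
    return (clamped - d_min) / (d_max - d_min)

# With Pycaw, the scalar would then be applied to the endpoint volume,
# e.g. volume.SetMasterVolumeLevelScalar(level, None).
```

Clamping before mapping keeps the volume stable when the fingers move outside the calibrated range instead of letting it jump past 0% or 100%.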
- Python 3.7+
- OpenCV
- MediaPipe
- Pycaw
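Assuming the standard PyPI package names, the dependencies above can be installed with:

```shell
pip install opencv-python mediapipe pycaw
```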
- You can adjust the sensitivity or change the hand-gesture configuration in volume_control.py.
- Add custom hand gestures or modify the logic to suit your application.
Contributions are welcome! Please submit a pull request or open an issue if you find a bug or have a suggestion.
This project is licensed under the MIT License - see the LICENSE file for details.
- OpenCV
- MediaPipe
- Pycaw