# PythonicPete/SentinelVision

## 🚀 Key Features

**Multi-Modal Tracking:** Simultaneously processes facial bounding boxes and 42-point hand skeletal landmarks (21 per hand, two hands) without sacrificing frame rate.

**Complex Spatial Logic:** Uses Euclidean distance calculations and Y-axis coordinate comparisons to accurately identify both single-hand and multi-hand gestures.

**Recognized Gestures:**

- 🙏 **Namaste (multi-hand):** Calculates the distances between the two hands' wrists, index fingertips, and pinky fingertips for precise dual-hand greeting detection.
- ✌️ **Peace Sign:** Single-hand gesture logic (index/middle up, ring/pinky down).
- 🖕 **Threat Detection:** Custom single-hand logic targeting hostile gestures.
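As a rough illustration of the spatial logic behind these gestures, here is a minimal sketch assuming MediaPipe's standard 21-point hand model delivered as normalized `(x, y)` tuples. The helper names and the `0.1` proximity threshold are illustrative, not the project's actual values; in image coordinates y grows downward, so an extended finger has its tip *above* (numerically less than) its middle joint.

```python
import math

# MediaPipe hand-landmark indices (21 points per hand).
WRIST = 0
INDEX_PIP, INDEX_TIP = 6, 8
MIDDLE_PIP, MIDDLE_TIP = 10, 12
RING_PIP, RING_TIP = 14, 16
PINKY_PIP, PINKY_TIP = 18, 20

def finger_up(hand, tip, pip):
    """Image y grows downward, so an extended finger's tip sits above its PIP joint."""
    return hand[tip][1] < hand[pip][1]

def is_peace_sign(hand):
    """Single-hand check: index/middle extended, ring/pinky curled."""
    return (finger_up(hand, INDEX_TIP, INDEX_PIP)
            and finger_up(hand, MIDDLE_TIP, MIDDLE_PIP)
            and not finger_up(hand, RING_TIP, RING_PIP)
            and not finger_up(hand, PINKY_TIP, PINKY_PIP))

def is_namaste(left, right, threshold=0.1):
    """Multi-hand check: wrists, index tips, and pinky tips of both hands nearly touch."""
    def dist(a, b):
        # Euclidean distance via the Pythagorean theorem (math.hypot).
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return all(dist(left[i], right[i]) < threshold
               for i in (WRIST, INDEX_TIP, PINKY_TIP))
```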

**Automated Evidence Capture:** Upon threat detection, the system captures the frame, saves a timestamped image to a local `Security_Logs` directory, and triggers a 10-second cooldown cycle so a single incident is not logged repeatedly.
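The capture-plus-cooldown cycle could be structured along these lines. This is a stdlib-only sketch with a hypothetical `EvidenceRecorder` name; the real pipeline would write the frame with `cv2.imwrite`, noted here as a comment:

```python
import os
import time
from datetime import datetime

class EvidenceRecorder:
    """Gates evidence capture behind a cooldown so one threat doesn't spam the log."""

    def __init__(self, log_dir="Security_Logs", cooldown_s=10.0):
        self.log_dir = log_dir
        self.cooldown_s = cooldown_s
        self._last_capture = -float("inf")

    def maybe_capture(self, frame, now=None):
        """Save a timestamped image unless still cooling down; return the path or None."""
        now = time.time() if now is None else now
        if now - self._last_capture < self.cooldown_s:
            return None                       # cooldown active: skip this detection
        self._last_capture = now
        os.makedirs(self.log_dir, exist_ok=True)
        name = datetime.now().strftime("threat_%Y%m%d_%H%M%S.jpg")
        path = os.path.join(self.log_dir, name)
        # In the real system: cv2.imwrite(path, frame)
        return path
```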

**Alert Integration:** Modular architecture ready for SMTP email transmission (`notify.py`) to alert owners of security breaches.
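A `notify.py` along the lines described might separate message construction from transmission, as sketched below; the addresses, host, and credentials are placeholders, not values from the project:

```python
import smtplib
from email.message import EmailMessage

def build_alert(image_path, sender, recipient):
    """Build the breach-notification email referencing the captured evidence."""
    msg = EmailMessage()
    msg["Subject"] = "SentinelVision: threat gesture detected"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(f"A threat gesture was detected. Evidence saved at {image_path}.")
    return msg

def send_alert(msg, host, user, password, port=465):
    """Transmit the alert over SMTP with implicit TLS (e.g. port 465)."""
    with smtplib.SMTP_SSL(host, port) as server:
        server.login(user, password)
        server.send_message(msg)
```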

## 🛠️ Architecture & Tech Stack

- **Language:** Python 3.x
- **Computer Vision:** OpenCV (`cv2`)
- **Deep Learning Engine:** MediaPipe Tasks API (`hand_landmarker.task`)
- **Math Operations:** Pythagorean theorem via Python's `math.hypot`
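Loading the `hand_landmarker.task` model through the MediaPipe Tasks API is essentially configuration; a sketch of what that setup might look like (requires `mediapipe` installed and the model file on disk; `num_hands=2` is an assumption needed for the dual-hand gestures):

```python
import mediapipe as mp
from mediapipe.tasks import python as mp_tasks
from mediapipe.tasks.python import vision

# Configure the Tasks-API hand landmarker: 21 landmarks per hand, up to 2 hands.
options = vision.HandLandmarkerOptions(
    base_options=mp_tasks.BaseOptions(model_asset_path="hand_landmarker.task"),
    running_mode=vision.RunningMode.VIDEO,  # per-frame detection for a webcam feed
    num_hands=2,                            # two hands needed for Namaste detection
)
landmarker = vision.HandLandmarker.create_from_options(options)

# Per frame (OpenCV delivers BGR; MediaPipe expects RGB):
# mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb_frame)
# result = landmarker.detect_for_video(mp_image, timestamp_ms)
```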

## About

A real-time, multi-modal AI security engine that uses spatial hand-tracking and facial tripwires to detect threat gestures and automate evidence capture.
