Rover with Facial Recognition Capabilities

I developed this rover with facial recognition capabilities as the capstone project for my Master’s degree in Computer Science. The rover detects a user's face and classifies the user as either a child or an adult. If the user is an adult, the rover can be controlled through Blue Dot, an Android app installed on my tablet. If the user is classified as a child, the rover moves backward and announces that permission from a parent or guardian is required.


Hardware Design

The hardware design covers building the physical robot around a Raspberry Pi. I integrated a camera, a speaker, wheels, motors, and motor drivers for user interaction and motion.
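As a rough illustration of how the wheels can be driven from the Pi, here is a minimal sketch using gpiozero's Motor class. The GPIO pin numbers are placeholders and do not necessarily match the wiring used in this project.

```python
# Minimal motion sketch using gpiozero (pin numbers are placeholders --
# match them to the motor driver wiring on your own rover).
from time import sleep
from gpiozero import Motor

left_motor = Motor(forward=4, backward=14)
right_motor = Motor(forward=17, backward=18)

# Drive forward at half speed, back up, then stop.
left_motor.forward(0.5)
right_motor.forward(0.5)
sleep(2)
left_motor.backward(0.5)
right_motor.backward(0.5)
sleep(2)
left_motor.stop()
right_motor.stop()
```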

Software Design

The software design focuses on training machine learning models. I conducted experiments using transfer learning with Convolutional Neural Networks, specifically MobileNetV2, developed by Google and implemented in TensorFlow. All code is written in Python. The main libraries used are TensorFlow, OpenCV, Picamera2, NumPy, Matplotlib, libcamera, gpiozero, pygame, and Blue Dot.
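To make the transfer-learning setup concrete, here is a minimal sketch of the first variant: MobileNetV2 used as a frozen feature extractor, with only a new output head trained. This is not the notebook's exact code; the dataset objects (train_ds, val_ds), image size, and hyperparameters are assumptions.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)  # MobileNetV2's default input resolution

# Load MobileNetV2 pre-trained on ImageNet, without its classification head,
# and freeze the convolutional base.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base_model.trainable = False

# Attach a small classification head; only these layers are trained.
inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base_model(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # binary: child vs. adult
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds / val_ds are assumed tf.data datasets
```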

Code Details

1. transfer-learning-experiment.ipynb: This Jupyter notebook uses TensorFlow to train three different models, plot their results, and compare them:

  • Transfer learning from MobileNetV2 with only the output layer retrained (see the sketch above).
  • Transfer learning from MobileNetV2 with the last 30 layers retrained (a sketch follows below).
  • A custom-built CNN model.

For detailed model architecture, visit my website: Model architecture and performance of different models
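Continuing the sketch above, the second variant unfreezes the last 30 layers of the MobileNetV2 base and retrains them at a reduced learning rate. Again, the hyperparameters here are assumptions, not the notebook's actual settings.

```python
# Variant (b): fine-tune the last 30 layers of the MobileNetV2 base,
# keeping all earlier layers frozen.
base_model.trainable = True
for layer in base_model.layers[:-30]:
    layer.trainable = False

# Recompile with a small learning rate so the pretrained weights are not destroyed.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```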

2. rover-motion.py: Written in Python, this code runs on a Raspberry Pi. Using OpenCV and the face_recognition Python library, it detects user faces, preprocesses the face crops, and classifies them with TensorFlow Lite. For the rover's spoken responses to users, I use AI voice clips generated with narakeet.com and played through pygame. gpiozero controls the rover's motion, and Blue Dot allows adult users to drive the rover from an Android tablet.
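As a rough picture of how these pieces fit together, the sketch below shows a simplified detect-and-classify loop in the spirit of rover-motion.py. It is not the actual script: the model path, input size, preprocessing, classification threshold, and camera configuration are assumptions, and the motion control, Blue Dot handling, and pygame audio are omitted.

```python
import cv2
import numpy as np
import face_recognition
import tensorflow as tf
from picamera2 import Picamera2

# Load a converted TensorFlow Lite classifier (the file name is a placeholder).
interpreter = tf.lite.Interpreter(model_path="child_adult.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Start the Pi camera; check the channel order on your setup.
picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration(
    main={"format": "RGB888", "size": (640, 480)}))
picam2.start()

while True:
    frame = picam2.capture_array()  # frame as a NumPy uint8 array

    # Detect faces, then classify each crop with the TFLite model.
    for top, right, bottom, left in face_recognition.face_locations(frame):
        face = cv2.resize(frame[top:bottom, left:right], (224, 224))
        face = (face.astype(np.float32) / 127.5) - 1.0  # MobileNetV2-style scaling
        interpreter.set_tensor(input_details[0]["index"], face[np.newaxis, ...])
        interpreter.invoke()
        score = interpreter.get_tensor(output_details[0]["index"])[0][0]
        label = "adult" if score > 0.5 else "child"  # assumed label ordering
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
        cv2.putText(frame, label, (left, top - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)

    cv2.imshow("Rover camera", cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit, as described below
        break

cv2.destroyAllWindows()
picam2.stop()
```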

This code is intended to be experimented with and adapted for your own purposes. Once you run the program, a live camera window pops up so you can view the facial recognition results and interact with the code. The program keeps running until you press 'q' to quit.

3. run_program.py: For a smoother user experience, I created this program to operate the rover without exposing all of the detailed and complex code. Users can simply execute it, and the rover will run until interrupted with Ctrl + C.
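A wrapper in this spirit can be very small, as in the hypothetical sketch below. The rover_motion module and its main() function are illustrative names only; the actual rover-motion.py would need its loop exposed as an importable function (the hyphen in the filename prevents a direct import).

```python
# Hypothetical wrapper sketch: run the rover loop until Ctrl + C.
# `rover_motion.main()` is an assumed entry point, not the repository's actual API.
import rover_motion

if __name__ == "__main__":
    try:
        rover_motion.main()
    except KeyboardInterrupt:
        print("Rover stopped.")
```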
