DivSaru/Alohomora


ALOHOMORA (Object Identification and Navigation System) uses a model built via transfer learning from Google's deep learning model Inception v3, which was trained by Google Inc. on the 1,000 categories of ImageNet (2012) with 79.3% accuracy.

The model was then retrained on 10 GB of data using transfer learning, with Apache Spark as the training platform. TensorFlow is used to classify images sent from our device, a Raspberry Pi (integrated with a camera, sensors, a speaker, and a buzzer). The standard output is voice (speech): an input image is classified by our model and a spoken announcement is produced, reaching 85% accuracy versus the 75.3% of Google's ImageNet baseline.
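The classify-then-speak loop described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the helper names, the class labels, and the 0.85 confidence threshold are all assumptions. A real build would load the retrained Inception v3 model with TensorFlow and pass the message to a text-to-speech engine on the Raspberry Pi.

```python
# Hypothetical sketch of the ALOHOMORA inference loop.
# In the real system, `scores` would come from the retrained
# Inception v3 model, and the returned message would be spoken
# aloud via a text-to-speech engine on the Raspberry Pi.

def classify(scores):
    """Return (label, score) for the highest-scoring class."""
    labels = ["person", "chair", "staircase", "door"]  # illustrative classes
    best = max(range(len(scores)), key=scores.__getitem__)
    return labels[best], scores[best]

def announce(label, score, threshold=0.85):
    """Format the spoken message; skip low-confidence predictions."""
    if score < threshold:
        return None  # below threshold: stay silent (or trigger the buzzer)
    return f"{label} ahead"

label, score = classify([0.02, 0.90, 0.05, 0.03])
print(announce(label, score))  # -> chair ahead
```

Gating announcements on a confidence threshold keeps the stick from misleading the user with uncertain guesses; the buzzer can signal an unidentified obstacle instead.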

About

ALOHOMORA is a vision-based blind stick. It aims to provide visually impaired people with an easy and handy object identification and navigation system. The model we use for object identification has been trained on images of common objects/obstacles that a visually impaired person may encounter in everyday life. T…
