According to global statistics from the World Health Organization (WHO), of the world's 7.8 billion people, 188.5 million have mild vision impairment, 217 million have moderate to severe vision impairment, and 36 million are blind. Vision impairment evidently has a significant impact on daily activities, including the ability to navigate and recognize one's environment independently. Fortunately, beyond medical advancements, technology has made it possible to harness assistive tools that improve Visually Impaired People's (VIP) quality of life and offer them better integration into society. Existing literature shows that assistive devices help VIP overcome functional limitations: they support daily travel and have high potential to improve VIP's social inclusion.
A market opportunity presented itself when we discovered a dearth of all-in-one solutions for VIP: products that bring together features such as text-to-speech reading, object detection, and navigation in a single package. Before embarking on product and strategy creation, our team conducted extensive background research on VIP, their families and friends, and caregivers and professionals in the ophthalmology field to understand the most desired characteristics of an assistive device. We found that families and caregivers of VIP require the product to be safe to use, to alert them in emergencies through a loaded user profile, and to provide timely information and warnings about the surrounding environment.
Armed with this market research, our team of four developers set out to create an intelligent navigation and guidance control system that empowers VIP and assists in their daily mobility needs. As elaborated in the sections that follow, we have developed a dynamic navigation system that recognizes objects and avoids collisions, with path-finding functionality in both outdoor and indoor environments. With our users in mind, we also created a simple and intuitive front-end user interface with audio instructions provided throughout. The system combines Principal Component Analysis (PCA) and a Support Vector Machine (SVM) to load user profiles via facial recognition and to alert a VIP's family or caregiver during an emergency.
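The PCA + SVM recognition step described above can be sketched as a standard scikit-learn pipeline: PCA projects flattened face crops into a low-dimensional "eigenface" subspace, and an SVM classifies the projection against the registered user profiles. This is a minimal illustration, not the project's actual code: the synthetic data, label names, and dimensions (64x64 face crops, two users) are assumptions for demonstration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for flattened grayscale face crops (64x64 -> 4096 features):
# two hypothetical registered users, 20 samples each.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(20, 4096)) for i in range(2)])
y = np.repeat(["user_a", "user_b"], 20)

# PCA compresses each face vector into a small subspace; a linear SVM
# then separates the registered user profiles in that subspace.
model = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
model.fit(X, y)

# A new face crop is projected and classified to load the matching profile.
probe = rng.normal(loc=0, scale=1.0, size=(1, 4096))
print(model.predict(probe)[0])
```

In the deployed system the predicted label would key into the stored user profile, from which emergency contacts (family or caregiver) can be retrieved.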
Official Full Name | Student ID | Work Items |
---|---|---|
Mehta Vidish Pranav | A0213523U | Project Lead, Hardware Setup, Object Detection development, Deep Q-Learning Reinforcement Agent, PCA+SVM Facial Recognition System, Core Django Frontend Framework, Optical Character Recognition and speech synthesis, Application Testing. Worked on mid-project presentation, final project presentation and video, and project documentation |
Anandan Natarajan | A0213514U | Documentation (Project Proposal), Mid and Final project presentation and report, Hardware Comparison, Development environment setup, Outdoor Navigation (object detection, tagging and TF Record generation), Application Testing |
Veda Nagavalli Yogeesh | A0213556H | Documentation (Project Proposal), Mid and Final project presentation and report, Image Preparation (sourcing, cleaning, tagging and TF Record generation), Image Captioning, Face Recognition, Application Testing |
Zhang Yu | A0213498X | Documentation (Project Proposal), Mid and Final project presentation and report, Indoor Navigation (ROS, VSLAM) design, development and testing |
[NavCon Project Final Report](https://github.com/vid1994/Navcon/blob/master/Project%20Report/NavCon%20Project%20Final%20Report.pdf)
[NavCon Demo](https://youtu.be/XwRWvKAmZnk)