SmartSight is an image-recognition-to-speech app built for the visually impaired using CoreML and machine learning models. The app detects objects in the live camera feed in real time and, when the classification confidence is high enough, converts the object description into speech. The goal is to provide a safer environment for the visually impaired. SmartSight is powered by the MobileNet and ResNet50 CoreML models.
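The detect-then-speak loop described above can be sketched with Apple's Vision and AVFoundation frameworks. This is a minimal, hypothetical illustration rather than SmartSight's actual code: the `ObjectSpeaker` class name and the `0.6` confidence threshold are assumptions, and `MobileNet` refers to the class Xcode auto-generates from a bundled `MobileNet.mlmodel` file.

```swift
import Vision
import AVFoundation
import CoreML

// Hypothetical sketch: classify a camera frame with a bundled MobileNet
// model and speak the top label when its confidence clears a threshold.
final class ObjectSpeaker {
    private let synthesizer = AVSpeechSynthesizer()
    private let request: VNCoreMLRequest

    init() throws {
        // `MobileNet` is the class Xcode generates from MobileNet.mlmodel.
        let coreMLModel = try MobileNet(configuration: MLModelConfiguration()).model
        request = VNCoreMLRequest(model: try VNCoreMLModel(for: coreMLModel))
    }

    // Called for each frame delivered by the camera capture session.
    func process(_ pixelBuffer: CVPixelBuffer, minimumConfidence: Float = 0.6) {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])

        guard let top = (request.results as? [VNClassificationObservation])?.first,
              top.confidence >= minimumConfidence,
              !synthesizer.isSpeaking   // don't queue up overlapping utterances
        else { return }

        synthesizer.speak(AVSpeechUtterance(string: top.identifier))
    }
}
```

In practice `process(_:)` would be driven from an `AVCaptureVideoDataOutput` delegate; gating on `isSpeaking` keeps rapid per-frame detections from flooding the speech queue.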
I'm currently training my own CoreML model using TensorFlow and will swap out the pre-trained models once mine has been trained on over 1,000 categories of everyday household objects and items.