Guiding Visually Impaired People to Find an Object by Using Image to Speech over the Smart Phone Cameras
This is an engineering thesis project. It is forked from the TensorFlow Lite Object Detection Android Demo.
- If you don't already have it, install Android Studio by following the instructions on the website.
- You need an Android device and an Android development environment with minimum API 21.
- Android Studio 3.2 or later.
- Open Android Studio, and from the Welcome screen, select Open an existing Android Studio project.
- From the Open File or Project window that appears, navigate to and select the tensorflow-lite/examples/object_detection/android directory from wherever you cloned the TensorFlow Lite sample GitHub repo. Click OK.
- If it asks you to do a Gradle Sync, click OK.
- You may also need to install various platforms and tools if you get errors like "Failed to find target with hash string 'android-21'" and similar.
- Click the Run button (the green arrow) or select Run > Run 'android' from the top menu. You may need to rebuild the project using Build > Rebuild Project.
- If it asks you to use Instant Run, click Proceed Without Instant Run.
- At this point you also need an Android device plugged in with developer options enabled. See here for more details on setting up developer devices.
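To confirm the plugged-in device is actually visible to the development machine, you can run `adb devices` in a terminal. As a small illustration (not part of the project itself), a helper that parses the standard output of that command might look like this:

```python
def connected_devices(adb_output: str) -> list[str]:
    """Parse the output of `adb devices` and return the serial numbers
    of devices in the authorized 'device' state (unauthorized or
    offline devices are skipped)."""
    devices = []
    # The first line is the "List of devices attached" header.
    for line in adb_output.strip().splitlines()[1:]:
        parts = line.split()
        if len(parts) >= 2 and parts[1] == "device":
            devices.append(parts[0])
    return devices
```

If your device shows up as `unauthorized`, accept the USB debugging prompt on the phone and run `adb devices` again.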
Downloading the model, extracting it, and placing it in the assets folder are handled automatically by download.gradle.
If you explicitly want to download the model yourself, you can download it from here. Extract the zip to get the .tflite model and the label file.
Please do not delete the contents of the assets folder. If you did delete the files, choose Build > Rebuild Project from the menu to re-download the deleted model files into the assets folder.
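The extraction step that download.gradle automates can be sketched as follows. This is only an illustration: the file names `detect.tflite` and `labelmap.txt` are placeholders for whatever the downloaded archive actually contains, and the real download URL lives in download.gradle.

```python
import zipfile
from pathlib import Path

def extract_model(zip_path: str, assets_dir: str) -> list[str]:
    """Extract the .tflite model and its label file from the downloaded
    archive into the app's assets folder, mirroring what download.gradle
    does automatically. Returns the names of the extracted files."""
    assets = Path(assets_dir)
    assets.mkdir(parents=True, exist_ok=True)
    extracted = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            # Keep only the model and the label map; skip anything else.
            if name.endswith((".tflite", ".txt")):
                zf.extract(name, assets)
                extracted.append(name)
    return extracted
```

In the actual project you never need to run this by hand: a Gradle sync or Build > Rebuild Project re-runs the download task for you.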