
This is an object detection app; it detects only πŸ…πŸ§…πŸ₯”πŸ§„ (tomato, onion, potato, garlic). The app uses a .ptl model to detect objects. Give it a star 🌟 if it helps you.


Vegetable-Detection_App

Overview

This tutorial provides step-by-step instructions for creating an Android app with a YOLOv5 .pt model and Android Studio. By following it, you will be able to detect objects from your Android app using a supervised machine-learning model.

This is an example application for PyTorch on Android. It continuously detects objects seen through the device's back camera, performing inference with the PyTorch Lite Java API. The demo app processes camera frames in real time, displaying the most probable detections.

Tutorial Overview

  1. Data collection
  2. Train the model using collected image data
  3. Export machine learning model
  4. Incorporate model into an Android app

Requirements

[App icon and example screenshots]

Create and train model

Step 1: Data collection

To train the machine learning model, we'll be using YOLOv5.

Begin by deciding which objects you plan to train your model on. Then, you can collect your data in two ways:

  1. Take photos of each object individually using your phone camera, and label them for each class.
  2. Take photos of multiple objects in a single frame using your phone camera, and label each object individually for its class (see the label-format sketch after the notes below).

Notes

  • Be sure each class contains only its own object (i.e. make sure there are no tomato images in the potato class and vice versa).
  • It is recommended that you take at least 100 image samples for each class and have at least 3 classes.
  • Your image samples should also ideally be from different angles.
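If you label in YOLO format, each image gets a matching .txt file with one line per object: a class index followed by the box centre x, centre y, width, and height, all normalized to [0, 1] relative to the image size. A hypothetical two-object frame (here class 0 is assumed to be tomato and class 2 potato):

```
0 0.48 0.63 0.22 0.31
2 0.71 0.35 0.18 0.24
```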

Step 2: Train the model using collected image data

Run the script below to train a custom model; the best weights are saved as best.pt in runs/train/exp/weights:

python train.py --img 640 --batch 16 --epochs 3 --data data.yaml --weights yolov5s.pt
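The --data flag points at a dataset description file. A minimal sketch of what data.yaml might look like for this app's four classes (the paths and class order are assumptions, not the repo's actual file):

```yaml
# data.yaml - dataset description passed via --data (paths are placeholders)
train: ../dataset/images/train   # training images; labels are looked up alongside
val: ../dataset/images/val       # validation images
nc: 4                            # number of classes
names: ['tomato', 'onion', 'potato', 'garlic']
```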

The precision of a model trained with --epochs set to 3 is very low, less than 0.01 in fact. With a tool such as Weights & Biases, which can be set up in minutes and is integrated with YOLOv5, you can see that with --epochs set to 80 the precision reaches 0.95. On a CPU-only machine, however, the command above lets you quickly train a custom model and then test it in the Android demo app. Below is a sample of wandb metrics from 3, 30, and 100 epochs of training:

[wandb metrics chart]

Step 3: Convert the custom model to the lite version

With export.py modified as described in the Prepare the model step of the Quick Start section of the PyTorch Android demo, you can convert the new custom model to its TorchScript Lite version:

python export.py --weights runs/train/exp/weights/best.pt --include torchscript

The resulting best.torchscript.ptl is written to runs/train/exp/weights; it needs to be copied to the Android demo app's assets folder.


Build and run app

Step 4: Export machine learning model

Now that you've trained your model, you'll need to export it. Copy best.torchscript.ptl from runs/train/exp/weights and create a label.txt file listing your class names, one per line.
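For example, assuming the four classes here are tomato, onion, potato, and garlic (the order must match the class indices used in training), label.txt would look like:

```
tomato
onion
potato
garlic
```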

Now, get the project from GitHub: click the green Code button and download the ZIP. Unzip it and navigate through the folders to Vegetable-Detection_App/ObjectDetection/app/src/main/assets/. Copy your label.txt and best.torchscript.ptl files into assets (keep only one label.txt file).
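For context, here is a minimal sketch of how a PyTorch-demo-style app loads these assets; ModelLoader and assetFilePath are illustrative names, not necessarily the repo's exact code. LiteModuleLoader.load() expects a real file path rather than an asset name, so the asset is copied to internal storage first:

```java
import android.content.Context;

import org.pytorch.LiteModuleLoader;
import org.pytorch.Module;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ModelLoader {
    // Copy an asset to internal storage and return its absolute path,
    // since LiteModuleLoader.load() cannot read directly from assets.
    public static String assetFilePath(Context context, String assetName) throws IOException {
        File file = new File(context.getFilesDir(), assetName);
        if (file.exists() && file.length() > 0) {
            return file.getAbsolutePath();
        }
        try (InputStream in = context.getAssets().open(assetName);
             OutputStream out = new FileOutputStream(file)) {
            byte[] buffer = new byte[4 * 1024];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            out.flush();
        }
        return file.getAbsolutePath();
    }

    // Load the exported lite model from the assets folder.
    public static Module loadModel(Context context) throws IOException {
        return LiteModuleLoader.load(assetFilePath(context, "best.torchscript.ptl"));
    }
}
```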

Step 5: Build the Android Studio project

Now, copy the file path of the ObjectDetection project folder. Open Android Studio and click "Open an existing Android Studio project." A window titled "Open File or Project" should pop up; paste the file path at the top of the window and click OK.

Then, in PrePostProcessor.java, make sure the line private static int mOutputColumn = 9; equals no_of_labels + 5. (label.txt here defines 4 custom class names, so 4 + 5 = 9; change the value if you train a different number of classes.)
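A sketch of the relevant constants (field names follow the PyTorch Android demo app; the class names and their order are assumptions):

```java
public class PrePostProcessor {
    // Each row of the YOLOv5 output tensor is
    // [x, y, w, h, objectness, class_0 score, ..., class_{n-1} score],
    // so the row width is always the number of classes plus 5.
    private static final int NUM_CLASSES = 4;           // lines in label.txt: tomato, onion, potato, garlic
    private static int mOutputColumn = NUM_CLASSES + 5; // = 9 for this app
}
```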

Select Build -> Make Project and check that the project builds successfully. You will need the Android SDK configured in the settings, at least SDK version 23. The build.gradle file will prompt you to download any missing libraries.

Step 6: Turn on Android phone developer mode

In our example, we have used a Motorola Moto E4 phone. Go to Settings and scroll to About phone. Scroll down to Build number and click it seven times. After a few taps, the steps should count down until you unlock the developer options. Then, back in Settings, scroll to Developer options and turn Developer mode on. Once developer options are activated, you will see a message that reads, You are now a developer!

Step 7: Install and run the app

Connect the Android device to the computer and be sure to approve any ADB permission prompts that appear on your phone. Select Run -> Run app, then choose the connected device as the deployment target. This installs the app on the device. The app should open automatically and be able to recognize the objects you trained the model on. If the labels are not showing up, make sure the label.txt file is still in your assets folder.

Assets folder

Do not delete the contents of the assets folder. If you explicitly deleted the files, choose Build -> Rebuild Project to re-download the deleted model files into the assets folder.

Sources

PyTorch

Troubleshooting: if the app crashes with an error like "The model version must be between 3 and 5. But the model version is 7", the .ptl file was exported with a newer PyTorch release than the pytorch_android_lite runtime in the app supports; re-export the model with a matching PyTorch version, or update the app's pytorch_android_lite dependency.

Contact

For any questions, suggestions, or concerns, please contact me at maitysourab@gmail.com.
