~ Rock - Paper - Scissor on Edge TPUs ~

You know when you have some important stuff to do, but you try so hard to find excuses and other things to do to postpone everything? Today, 28/01/2020, is one of those days :)) So, I really wanted to lose some time and explore a little bit these new edge AI devices made by Google that are in my office.
To check their performance, I trained a very simple and dumb CNN (feel free to improve it) on the Rock-Paper-Scissors dataset and ran it on a Coral Dev Board, a Raspberry Pi 4 with the USB Coral Accelerator (connected to a USB 3 port) and a Raspberry Pi 3 with the USB Coral Accelerator (USB 2 port). Note that the results with the USB Accelerator were achieved by installing the Edge TPU runtime with the maximum operating frequency. The inference time is the mean value over 1000 predictions (see the timing sketch after the results table).
These are my results:

Device                               Inference Time [ms]   FPS
Coral Dev Board                      1.3                   747
Raspi 4 + USB Accelerator (USB 3)    1.4                   710
Raspi 3 + USB Accelerator (USB 2)    5.0                   200
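
If you want to reproduce this kind of measurement, a minimal timing sketch could look like the one below. It assumes the tflite_runtime package, the Linux Edge TPU delegate library (libedgetpu.so.1) and a compiled model file called model_edgetpu.tflite; the file name is an assumption, not necessarily the one shipped in this repository.

```python
# Minimal latency benchmark sketch (assumptions: tflite_runtime installed,
# "model_edgetpu.tflite" is an Edge TPU-compiled model in the current folder).
import time
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Random uint8 input with the model's expected shape (the compiled model is quantized).
dummy = np.random.randint(0, 256, size=input_details["shape"], dtype=np.uint8)

# Warm-up run: the first invocation also loads the model onto the TPU.
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()

n_runs = 1000
start = time.perf_counter()
for _ in range(n_runs):
    interpreter.set_tensor(input_details["index"], dummy)
    interpreter.invoke()
elapsed = time.perf_counter() - start

print(f"Mean inference time: {elapsed / n_runs * 1000:.2f} ms "
      f"({n_runs / elapsed:.0f} FPS)")
```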

Flow chart of the recognition process

Side Note: fsalv will be, or already is (depending on when you are reading this), a contributor to this repository. He wants to improve it and make it ukulele friendly.

1.0 Getting Started

Clone this repository

git clone https://github.com/EscVM/RPS_with_Edge_TPU

Python 3 is required. I used TensorFlow 2.x for the training, but I also uploaded all the converted and original weights. So, if you don't want to re-train the network, you can simply use the inference code.

1.1 Installations for the hosting device

Install the following libraries on the hosting device to make the inference code work:

  • opencv-python. N.B. We installed OpenCV 4.0 on the Dev Board using this guide as a reference.
  • numpy
  • TensorFlow Lite Interpreter. If you're using the Coral USB Accelerator with a Raspberry Pi, download the ARM32 version. A quick sanity check is sketched after this list.
    N.B. If you are using the Dev Board, both the Interpreter and the Edge TPU Compiler are already installed during OS flashing.
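
Once everything is installed, a quick sanity check along these lines verifies that the libraries import and the Edge TPU delegate loads. This is just a sketch and assumes the Linux delegate library name (libedgetpu.so.1); adjust it for your platform.

```python
# Installation sanity check sketch: imports should succeed and the
# Edge TPU delegate should load without raising an exception.
import cv2
import numpy as np
import tflite_runtime.interpreter as tflite

print("OpenCV:", cv2.__version__)
print("NumPy:", np.__version__)

# On Linux the delegate library is libedgetpu.so.1; adjust for your platform.
delegate = tflite.load_delegate("libedgetpu.so.1")
print("Edge TPU delegate loaded:", delegate is not None)
```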

2.0 Run the Interpreter

Open your terminal in the project folder and launch:

python3 rpc_webcam.py

Enjoy the network predicting the shape of your beautiful hands :)
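
If you're curious what such a script involves, here is a rough sketch of a webcam inference loop. It is not the repository's rpc_webcam.py: the model file name, the preprocessing, and the label order are all assumptions.

```python
# Rough webcam-inference sketch (not the repository's rpc_webcam.py).
# Assumptions: Edge TPU-compiled model "model_edgetpu.tflite", uint8 input,
# and the label order ["rock", "paper", "scissors"].
import cv2
import numpy as np
import tflite_runtime.interpreter as tflite

LABELS = ["rock", "paper", "scissors"]  # assumed order

interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
height, width = inp["shape"][1], inp["shape"][2]

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize to the model's input size; convert BGR -> RGB since the network
    # was presumably trained on RGB images.
    img = cv2.cvtColor(cv2.resize(frame, (width, height)), cv2.COLOR_BGR2RGB)
    interpreter.set_tensor(inp["index"], np.expand_dims(img, 0).astype(np.uint8))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    label = LABELS[int(np.argmax(scores))]
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("RPS", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```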

3.0 Improve the CNN Network

As I already wrote in the introduction, I put this project together very quickly to check the performance of my two Coral devices, so I didn't spend time building a cool network. If you want to improve the CNN structure or use transfer learning to retrain your preferred architecture, in the project folder you can find the two Jupyter notebooks I used to train the network and convert it from TensorFlow to TFLite. Then you have to use the Edge TPU Compiler to make your TFLite file TPU compatible. It's a long but not difficult process. Here you can find a beautiful summary of the entire chain.
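
The key step in that chain is full-integer post-training quantization before compiling. The sketch below is not taken from the notebooks; the model and file names are placeholders, and the representative dataset is whatever small set of training images you have at hand.

```python
# Rough TensorFlow -> TFLite full-integer quantization sketch (placeholder names).
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("rps_model.h5")  # hypothetical saved model

# A handful of representative images lets the converter calibrate quantization.
rep_images = np.load("rep_images.npy")  # hypothetical (N, H, W, 3) float array

def representative_data_gen():
    for img in rep_images[:100]:
        yield [np.expand_dims(img, 0).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("rps_model_quant.tflite", "wb") as f:
    f.write(converter.convert())

# Afterwards, compile the quantized model for the Edge TPU on the command line:
#   edgetpu_compiler rps_model_quant.tflite
```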
