
Faster-Grad-CAM

Faster and more precise than Grad-CAM.

fig1

The results above were obtained under the following conditions:

  • The input image size is 96×96.
  • We used MobileNet V2 with ArcFace.
  • Processing time was measured on Colaboratory (Tesla P4).

Base paper

Adapting Grad-CAM for Embedding Networks (arXiv, Jan 2020)

We made the following changes to the paper's method:

  • Replaced the Triplet loss with ArcFace.
  • Reduced the number of k-means clusters from 50 to 10 (see the sketch below).
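
The following is a minimal sketch of the cluster-weight precomputation as we understand it from the base paper: training embeddings are clustered with k-means, and the Grad-CAM channel weights are averaged within each cluster. The variable and function names are illustrative, not the repository's actual API.

```python
# Hedged sketch: precompute per-cluster Grad-CAM channel weights.
# Assumes `embeddings` (N, D) and per-image Grad-CAM `channel_weights` (N, C)
# have already been computed on the training set; names are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def precompute_cluster_weights(embeddings, channel_weights, n_clusters=10):
    # The base paper used 50 clusters; this repository uses 10.
    kmeans = KMeans(n_clusters=n_clusters, random_state=0).fit(embeddings)
    cluster_weights = np.stack([
        channel_weights[kmeans.labels_ == k].mean(axis=0)
        for k in range(n_clusters)
    ])
    return kmeans, cluster_weights  # cluster_weights: (n_clusters, C)
```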

Usage (Janken Demo)

  • Keras 2.2.4
  • TensorFlow 1.9.0
  • scikit-learn 0.19.0
  • OpenCV 3.4.3.18
  • Raspberry Pi 3 Model B (used for the results below) or a PC

Run the command below:

python3 janken_demo.py

gif1

Press [s] to switch to the mode below (similar to object detection).

gif2

Method

Details are explained here (in Japanese).

f1 f2 f3 f4
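
Below is a minimal sketch of the inference step as we read the method: the test embedding is assigned to its nearest k-means cluster, and that cluster's precomputed channel weights replace the backward pass of ordinary Grad-CAM. Function and variable names are illustrative and assume the precomputation sketch above.

```python
# Hedged sketch: fast heatmap generation with precomputed cluster weights.
# `kmeans` and `cluster_weights` come from the precomputation sketch above;
# `feature_maps` is the last conv output (H, W, C), `embedding` is (D,).
import cv2
import numpy as np

def faster_grad_cam(feature_maps, embedding, kmeans, cluster_weights, size=(96, 96)):
    k = kmeans.predict(embedding[None, :])[0]                 # nearest cluster
    cam = np.maximum(feature_maps @ cluster_weights[k], 0.0)  # ReLU of weighted sum
    cam = cv2.resize(cam.astype(np.float32), size)            # upsample to input size
    return cam / (cam.max() + 1e-8)                           # normalize to [0, 1]
```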

Training procedure

See Train_Faster-Grad-CAM.ipynb.

Making it even faster (Raspberry Pi)

  • Change MobileNet V2 to V3, because V3 is faster than V2 on a CPU.
  • Change the Raspberry Pi 3 to a Pi 4 (or a Jetson Nano).
  • Apply quantization like this (see the sketch below).
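
As a hedged sketch of the quantization step, post-training quantization with the TensorFlow Lite converter looks roughly like this. The API shown is the TF 2.x interface (older versions differ), and the output file name is illustrative.

```python
# Hedged sketch: post-training weight quantization with TensorFlow Lite.
import tensorflow as tf

# `model` is assumed to be the trained Keras model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable default (weight) quantization
tflite_model = converter.convert()

with open("faster_grad_cam.tflite", "wb") as f:
    f.write(tflite_model)
```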

Application examples

1. Anomaly detection
When combined with self-supervised learning, the anomalous region can be visualized with Faster-Grad-CAM.
In the following example, circles are the normal class.

fig3

Images with an extra line or a missing line are anomalies.

fig4

In the results above, only normal images were used for training!
Real-time visualization looks like this:

fig5

You can do anomaly detection and visualization at the same time.
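
A minimal sketch of how such a real-time overlay can be produced with OpenCV, assuming a normalized [0, 1] heatmap from Faster-Grad-CAM; the function name is illustrative.

```python
# Hedged sketch: overlay a [0, 1] CAM heatmap on a BGR frame with OpenCV.
import cv2
import numpy as np

def overlay_heatmap(frame_bgr, cam, alpha=0.4):
    heat = cv2.applyColorMap(np.uint8(255 * cam), cv2.COLORMAP_JET)
    heat = cv2.resize(heat, (frame_bgr.shape[1], frame_bgr.shape[0]))
    return cv2.addWeighted(frame_bgr, 1.0 - alpha, heat, alpha, 0)
```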

2. Auto-Annotation
Auto-Annotation is based on Grad-CAM and Bayesian optimization.
Using Faster-Grad-CAM instead of Grad-CAM reduces the total time by 25% (from 20 s to 15 s).
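
As a hedged illustration of how a CAM heatmap can be turned into a bounding-box annotation, one can threshold the heatmap and take the largest contour; the threshold is the kind of parameter Bayesian optimization could tune, but the function below is an assumption, not the Auto-Annotation code itself.

```python
# Hedged sketch: derive a bounding box from a [0, 1] CAM heatmap.
import cv2
import numpy as np

def cam_to_bbox(cam, threshold=0.5):
    mask = np.uint8(cam >= threshold)  # binarize the heatmap
    # findContours returns (image, contours, hierarchy) in OpenCV 3.x and
    # (contours, hierarchy) in 4.x; [-2] selects the contour list in both.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))  # (x, y, w, h)
```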

Special thanks