
MobileNet_V2

Configuration

  • GPU: GeForce GTX 1080 Ti
  • OS: Ubuntu 16.04

Requirements

  • tensorflow >= 1.0
  • python 2.7.*
  • numpy
  • scipy
  • cPickle
  • Pillow
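
These requirements can typically be installed with pip (cPickle ships with Python 2's standard library, so it needs no separate install); exact versions are not pinned by this repo, so the command below is just a reasonable starting point:

  $ pip install "tensorflow>=1.0" numpy scipy Pillow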

Notes: Python 2 is the default. If you use Python 3.*, replace cifar10.py, cache.py and dataset.py with the files stored in the python3 folder. If you run into any problems, feel free to email me.

Repo Structure

The following structure shows the main layout of this repo.

  MobileNet_V2
  |———— data/                                 # stores the cifar10 dataset
          |———— cifar10/
  |———— main.py                               # repo entry point
  |———— MobileNet_V2.py                       # MobileNet_V2 model class
  |———— utils.py                              # generates the datasource
  |———— cifar10.py                            # cifar10.py, cache.py and dataset.py handle cifar10 reading
  |———— cache.py
  |———— dataset.py
  
  # If you want to use your own dataset, add its type at line 38 in utils.py (a minimal sketch of the
  # expected data format is shown below). Images are in [input_height, input_width, input_channel] format
  # and labels are one-hot encoded.
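
For illustration, the sketch below shows the array shapes described above; the load_my_images helper and the class count are hypothetical placeholders, not part of this repo:

  import numpy as np

  def to_one_hot(labels, num_class):
      # Convert integer labels of shape [N] into one-hot vectors of shape [N, num_class].
      one_hot = np.zeros((len(labels), num_class), dtype=np.float32)
      one_hot[np.arange(len(labels)), labels] = 1.0
      return one_hot

  # Hypothetical usage: images must be shaped [num_samples, input_height, input_width, input_channel]
  # and labels must be one-hot encoded.
  # images = load_my_images()              # hypothetical loader -> shape [N, 32, 32, 3]
  # labels = to_one_hot(raw_labels, 10)    # -> shape [N, 10]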

Usages

Download Repo

$ git clone https://github.com/nnuyi/MobileNet_V2.git
$ cd MobileNet_V2

Datasets

In this repo, due to limited computation, I mainly focus on the CIFAR10 dataset.

  • CIFAR10: You are required to download the CIFAR10 dataset (https://www.cs.toronto.edu/~kriz/cifar.html), unzip it and store it in './data/cifar10/'. Note that the CIFAR-10 python version is required. You can unzip it in './data/cifar10/' using the following command:

    $ tar -zxvf cifar-10-python.tar.gz
    # you will see that data_batch_* are stored in './data/cifar10/cifar-10-batches-py/'
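
If you want to sanity-check the extracted files, the snippet below is a minimal, standalone example of reading one batch in the standard CIFAR-10 python format (the repo itself reads the data through cifar10.py, cache.py and dataset.py). It assumes Python 3 style unpickling; under Python 2, use cPickle.load(f) instead:

  import pickle
  import numpy as np

  # Each CIFAR-10 python-version batch file is a pickled dict.
  with open('./data/cifar10/cifar-10-batches-py/data_batch_1', 'rb') as f:
      batch = pickle.load(f, encoding='bytes')   # under Python 2: cPickle.load(f)

  data = batch[b'data']                          # uint8 array, shape [10000, 3072]
  labels = np.array(batch[b'labels'])            # integer labels, shape [10000]

  # Reshape the flat rows into [height, width, channel] images.
  images = data.reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)
  print(images.shape, labels.shape)              # (10000, 32, 32, 3) (10000,)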
    

Training

  $ python main.py --batchsize=128 \
                   --is_training=True \
                   --is_testing=False \
                   --datasets=cifar10 \
                   --input_height=32 \
                   --input_width=32 \
                   --input_channels=3 \
                   --num_class=10
  
  # If a GPU is available, you can use it as shown below:
  $ CUDA_VISIBLE_DEVICES=[no] \
    python main.py --batchsize=128 \
                   --is_training=True \
                   --is_testing=False \
                   --datasets=cifar10 \
                   --input_height=32 \
                   --input_width=32 \
                   --input_channels=3 \
                   --num_class=10
  
  # Note: [no] is the GPU device number; set it according to your machine, e.g.:
  $ CUDA_VISIBLE_DEVICES=0 \
    python main.py --batchsize=128 \
                   --is_training=True \
                   --is_testing=False \
                   --datasets=cifar10 \
                   --input_height=32 \
                   --input_width=32 \
                   --input_channels=3 \
                   --num_class=10
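
Testing is not spelled out above; judging from the flags, evaluation is presumably run by flipping the two boolean flags, roughly as follows (an assumption, not a documented command):

  $ python main.py --batchsize=128 \
                   --is_training=False \
                   --is_testing=True \
                   --datasets=cifar10 \
                   --input_height=32 \
                   --input_width=32 \
                   --input_channels=3 \
                   --num_class=10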

Results

  • After training, the test accuracy reaches 89.3%.

  • The training loss curve is shown below:

TODO

  • Continue to fine-tune hyperparameters to improve accuracy.
  • Train on CIFAR100
  • Train on Caltech101

References

Contact

Email: computerscienceyyz@163.com
