MobileNetV2 1.4

Setup AI Model Efficiency Toolkit (AIMET)

Please install and set up AIMET before proceeding further.
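
As an optional sanity check (a hypothetical snippet, assuming the TensorFlow variant of AIMET used by this model), the AIMET packages should import cleanly once the installation is complete:

```python
# Optional sanity check: these packages are available only after AIMET is
# installed and its Python environment is active.
import aimet_common
import aimet_tensorflow

print(aimet_common.__name__, aimet_tensorflow.__name__)
```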

Additional Dependencies

Set up the TensorFlow Models repo

  • Clone the TensorFlow Models repo:
    git clone https://github.com/tensorflow/models.git

  • Check out this commit inside the cloned repo:
    cd models
    git checkout 104488e40bc2e60114ec0212e4e763b08015ef97

  • Append the repo location to your PYTHONPATH with the following (a quick import check is sketched below):
    export PYTHONPATH=$PYTHONPATH:<path to tensorflow/models repo>/research/slim
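
Once the path above is on PYTHONPATH, the slim MobileNetV2 definitions should be importable. The snippet below is only a hypothetical check, not part of the evaluation script:

```python
# Hypothetical sanity check: this import succeeds only when
# <path to tensorflow/models repo>/research/slim is on PYTHONPATH.
from nets.mobilenet import mobilenet_v2

# mobilenet_v2_140 is the depth-multiplier-1.4 variant evaluated in this README.
print(mobilenet_v2.mobilenet_v2_140)
```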

Obtaining model checkpoint and dataset

The evaluation script expects a MobileNetV2 1.4 model checkpoint (passed via --checkpoint-path) and the ImageNet validation set converted to TFRecord format (passed via --dataset-dir); see the Usage section below.

Usage

  • To run evaluation with QuantSim in AIMET, use the following (a sketch of the underlying QuantSim flow appears after this list):
    python mobilenet_v2_140_quanteval.py \
        --model-name=mobilenet_v2_140 \
        --checkpoint-path=<path to mobilenet_v2_140 checkpoint> \
        --dataset-dir=<path to imagenet validation TFRecords> \
        --quantsim-config-file=<path to config file with symmetric weights>

  • If you are using a model checkpoint in which the Batch Norms are already folded (such as the optimized model checkpoint), also pass the --ckpt-bn-folded flag:
    python mobilenet_v2_140_quanteval.py \
        --model-name=mobilenet_v2_140 \
        --checkpoint-path=<path to mobilenet_v2_140 checkpoint> \
        --dataset-dir=<path to imagenet validation TFRecords> \
        --quantsim-config-file=<path to config file with symmetric weights> \
        --ckpt-bn-folded
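
For orientation, the sketch below shows the kind of QuantSim flow a script like this typically wraps, assuming the AIMET TensorFlow QuantizationSimModel API. The op names are placeholders, and the session and evaluation callback are supplied by the caller; this is an illustrative sketch, not the script's actual implementation:

```python
# Hedged sketch of an AIMET TensorFlow QuantSim evaluation flow.
# The op names below are placeholders, not values taken from this repo.
from aimet_tensorflow.quantsim import QuantizationSimModel


def quantsim_evaluate(sess, evaluate_fn, config_file):
    """Wrap `sess` in a QuantizationSimModel, calibrate encodings, and return accuracy.

    `sess` is a tf.compat.v1.Session holding the (optionally BN-folded) checkpoint;
    `evaluate_fn(session, iterations)` runs the ImageNet TFRecord evaluation loop.
    """
    sim = QuantizationSimModel(
        sess,
        starting_op_names=['input'],                            # placeholder input op
        output_op_names=['MobilenetV2/Predictions/Reshape_1'],  # placeholder output op
        default_output_bw=8,        # 8-bit activation quantization
        default_param_bw=8,         # 8-bit weight quantization
        config_file=config_file)    # e.g. the symmetric-weights config from the CLI

    # Calibrate quantizer encodings with a limited number of forward passes,
    # then evaluate accuracy on the quantization-simulated graph.
    sim.compute_encodings(forward_pass_callback=evaluate_fn,
                          forward_pass_callback_args=500)
    return evaluate_fn(sim.session, None)
```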

Quantizer Op Assumptions

In the included evaluation script, we use the default config file, which configures the quantizer ops with the following assumptions (a sketch of the config file structure follows this list):

  • Weight quantization: 8 bits, asymmetric quantization
  • Bias parameters are not quantized
  • Activation quantization: 8 bits, asymmetric quantization
  • Model inputs are not quantized
  • Operations that shuffle data, such as reshape or transpose, do not require additional quantizers
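
For reference, the snippet below sketches the structure of an AIMET quantsim config file consistent with the assumptions above; the section names follow AIMET's quantsim config schema, while the file name and comments are illustrative only. Changing "is_symmetric" to "True" under the params defaults would give the symmetric-weights variant referenced in the Usage commands.

```python
# Illustrative sketch (not the shipped default config) of an AIMET quantsim
# configuration matching the assumptions above: asymmetric weights and activations,
# unquantized biases, unquantized model inputs. Bit-widths (8 bits here) are set
# separately through the QuantizationSimModel defaults, not in this file.
import json

quantsim_config = {
    "defaults": {
        "ops": {"is_output_quantized": "True", "is_symmetric": "False"},  # activations
        "params": {"is_quantized": "True", "is_symmetric": "False"},      # weights
    },
    "params": {
        "bias": {"is_quantized": "False"},  # bias parameters are not quantized
    },
    "op_type": {},      # no per-op-type overrides in this sketch
    "supergroups": [],  # fused patterns (e.g. Conv + BN + ReLU) would be listed here
    "model_input": {},  # leaving this empty means model inputs are not quantized
    "model_output": {},
}

with open("quantsim_config_example.json", "w") as f:
    json.dump(quantsim_config, f, indent=4)
```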