# PyTorch-DeepLabV3+

## Setup AI Model Efficiency Toolkit (AIMET)

Please install and set up AIMET before proceeding further.

## Additional Dependencies

1. Install pycocotools as follows:

```bash
sudo -H pip install pycocotools
```

## Model modifications & Experiment Setup

1. Clone the DeepLabV3+ repository and check out the supported commit:

```bash
git clone https://github.com/jfzhang95/pytorch-deeplab-xception.git
cd pytorch-deeplab-xception
git checkout 9135e104a7a51ea9effa9c6676a2fcffe6a6a2e6
```

2. Apply the following patch to the above repository:

```bash
git apply ../aimet-model-zoo/zoo_torch/examples/pytorch-deeplab-xception-zoo.patch
```

3. Move the modeling and dataloaders directories, along with metrics.py and mypath.py, into aimet-model-zoo/zoo_torch/examples/:

```bash
mv modeling ../aimet-model-zoo/zoo_torch/examples/
mv dataloaders ../aimet-model-zoo/zoo_torch/examples/
mv utils/metrics.py ../aimet-model-zoo/zoo_torch/examples/
mv mypath.py ../aimet-model-zoo/zoo_torch/examples/
```

4. Download the optimized DeepLabV3+ checkpoint from the Releases page.
5. Update the dataset paths in mypath.py to point to your local data.

## Obtaining model checkpoint and dataset

## Usage

- To run evaluation with QuantSim in AIMET, use the following:

```bash
python eval_deeplabv3.py \
    --checkpoint-path   <path to optimized checkpoint directory to load from> \
    --base-size         <base size for Random Crop> \
    --crop-size         <crop size for Random Crop> \
    --num-classes       <number of classes in a dataset> \
    --dataset           <dataset to be used for evaluation> \
    --quant-scheme      <quantization scheme to run> \
    --default-output-bw <bitwidth for activation quantization> \
    --default-param-bw  <bitwidth for weight quantization>
```
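For example, an evaluation run on the Pascal VOC dataset (21 classes) with the 8-bit settings used for this model might look like the following. The size and scheme values here are illustrative assumptions, not verified defaults of the script:

```bash
# Illustrative invocation; adjust paths and values for your setup.
python eval_deeplabv3.py \
    --checkpoint-path   ./checkpoints/deeplabv3_optimized \
    --base-size         513 \
    --crop-size         513 \
    --num-classes       21 \
    --dataset           pascal \
    --quant-scheme      tf_enhanced \
    --default-output-bw 8 \
    --default-param-bw  8
```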

## Quantization Configuration

- Weight quantization: 8 bits, asymmetric quantization
- Bias parameters are not quantized
- Activation quantization: 8 bits, asymmetric quantization
- Model inputs are not quantized
- TF_enhanced was used as the quantization scheme
- Data Free Quantization and Quantization Aware Training were performed on the optimized checkpoint
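The asymmetric 8-bit scheme listed above can be illustrated with a small sketch. This is plain NumPy, not AIMET's actual implementation; the function and variable names are purely illustrative:

```python
import numpy as np

def asymmetric_quantize(x, bitwidth=8):
    """Illustrative asymmetric quantization: map the observed [min, max]
    range of x onto the integer grid [0, 2**bitwidth - 1], then
    dequantize back to floating point."""
    qmax = 2 ** bitwidth - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / qmax
    if scale == 0.0:            # guard against a constant tensor
        scale = 1.0
    offset = round(-x_min / scale)  # zero-point, so 0.0 is representable exactly
    q = np.clip(np.round(x / scale) + offset, 0, qmax)
    return (q - offset) * scale     # dequantized values

weights = np.array([-0.5, -0.1, 0.0, 0.2, 0.7])
print(asymmetric_quantize(weights))
```

Because the grid is asymmetric, the full [min, max] range of the tensor is used and the reconstruction error of any value is at most half the scale, i.e. (max - min) / (2 * (2^8 - 1)) for 8 bits.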