VGG

Use cases

VGG models perform image classification: they take an image as input and classify the major object in the image into a set of pre-defined classes. They are trained on the ImageNet dataset, which contains images from 1000 classes. VGG models provide very high accuracy, but at the cost of large model sizes. They are ideal for cases where high classification accuracy is essential and model size is not a major constraint.

Description

VGG studies the effect of convolutional network depth on accuracy in the large-scale image recognition setting. The networks increase depth using very small (3 × 3) convolution filters, and pushing the depth to 16–19 weight layers yields a significant improvement over prior-art configurations. The work secured first and second place in the localisation and classification tracks, respectively, of the ImageNet Challenge 2014. VGG representations also generalise well to other datasets, where they achieve state-of-the-art results.

Model

The models below are variants of the same network with different numbers of layers and with or without batch normalization. VGG 16 and VGG 19 have 16 and 19 weight layers respectively. VGG 16_bn and VGG 19_bn have the same architecture as their original counterparts, but with batch normalization applied after each convolutional layer, which leads to better convergence and slightly higher accuracy.

| Model | Download | Checksum | Download (with sample test data) | ONNX version | Opset version | Top-1 accuracy (%) | Top-5 accuracy (%) |
|-------|----------|----------|----------------------------------|--------------|---------------|--------------------|--------------------|
| VGG 16 | 527.8 MB | MD5 | 490.0 MB | 1.2.1 | 7 | 72.62 | 91.14 |
| VGG 16_bn | 527.9 MB | MD5 | 490.2 MB | 1.2.1 | 7 | 72.71 | 91.21 |
| VGG 19 | 548.1 MB | MD5 | 508.5 MB | 1.2.1 | 7 | 73.72 | 91.58 |
| VGG 19_bn | 548.1 MB | MD5 | 508.6 MB | 1.2.1 | 7 | 73.83 | 91.79 |

Inference

We used MXNet as the framework, with Gluon APIs, to perform inference. View the imagenet_inference notebook to understand how to use the above models for inference. Make sure to specify the appropriate model name in the notebook.
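
For a quick, notebook-free check, the snippet below is a minimal sketch using onnxruntime rather than the MXNet/Gluon flow used in the notebook; the model path is a placeholder and assumes the .onnx file has already been downloaded and extracted.

```python
import numpy as np
import onnxruntime as ort

# Placeholder path: point this at the .onnx file extracted from the download above.
session = ort.InferenceSession("vgg16/vgg16.onnx")
input_name = session.get_inputs()[0].name

# A preprocessed float32 batch of shape (N, 3, 224, 224); see the Preprocessing section.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

scores = session.run(None, {input_name: batch})[0]  # raw class scores, shape (N, 1000)
print(scores.shape)
```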

Input

All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (N x 3 x H x W), where N is the batch size and H and W are expected to be at least 224. Inference was performed on JPEG images.

Preprocessing

The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. This transformation should preferably happen during preprocessing. Check imagenet_preprocess.py for the code.
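
A minimal sketch of this step is shown below. imagenet_preprocess.py is the authoritative version; the resize and center-crop sizes here are the usual ImageNet values and are assumptions.

```python
import numpy as np
from PIL import Image

def preprocess(image_path):
    """Load a JPEG, scale to [0, 1], normalize, and return a float32 NCHW batch of one."""
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

    img = Image.open(image_path).convert("RGB")
    img = img.resize((256, 256)).crop((16, 16, 240, 240))  # simple 224 x 224 center crop
    x = np.asarray(img, dtype=np.float32) / 255.0           # HWC, range [0, 1]
    x = (x - mean) / std                                     # per-channel normalization
    return x.transpose(2, 0, 1)[np.newaxis, :]               # shape (1, 3, 224, 224)
```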

Output

The model outputs image scores for each of the 1000 classes of ImageNet.

Postprocessing

The post-processing involves calculating the softmax probability scores for each class and sorting them to report the most probable classes. Check imagenet_postprocess.py for the code.
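
A minimal sketch of this step, assuming the raw scores produced by the model above; class indices can then be mapped to human-readable labels via a label/synset file.

```python
import numpy as np

def top_k(scores, k=5):
    """Softmax over raw class scores; return the k most probable (class index, probability) pairs."""
    scores = np.asarray(scores).ravel()
    exp = np.exp(scores - scores.max())   # numerically stable softmax
    probs = exp / exp.sum()
    idx = np.argsort(probs)[::-1][:k]     # indices of the k most probable classes
    return [(int(i), float(probs[i])) for i in idx]
```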

To do quick inference with the model, check out Model Server.

Dataset

Dataset used for training and validation: ImageNet (ILSVRC2012). Check imagenet_prep for guidelines on preparing the dataset.

Validation accuracy

The accuracies obtained by the models on the validation set are reported in the table above. They were calculated on center-cropped images and deviate from the top-1 accuracies reported in the paper by at most 0.4%.

Training

We used MXNet as the framework, with Gluon APIs, to perform training. View the training notebook for details of the parameters and network used for each of the above VGG variants.

Validation

We used MXNet as the framework, with Gluon APIs, to perform validation. Use the imagenet_validation notebook to verify the accuracy of the model on the validation set. Make sure to specify the appropriate model name in the notebook.
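
The notebook drives this with MXNet/Gluon; the sketch below only illustrates the top-1/top-5 bookkeeping, with `run_model` and the batch iterator as placeholder assumptions.

```python
import numpy as np

def evaluate(run_model, batches):
    """run_model: callable mapping a (N, 3, 224, 224) batch to (N, 1000) raw scores.
    batches: iterable of (batch, labels) pairs built from the prepared validation set."""
    top1 = top5 = total = 0
    for batch, labels in batches:
        scores = run_model(batch)
        ranked = np.argsort(scores, axis=1)[:, ::-1]              # classes sorted by score, per image
        top1 += int((ranked[:, 0] == labels).sum())
        top5 += int((ranked[:, :5] == labels[:, None]).any(axis=1).sum())
        total += len(labels)
    return top1 / total, top5 / total
```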

References

Simonyan, K. and Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv:1409.1556, 2014.

Contributors

License

Apache 2.0