convnext_tiny_w8a16_quantized

Qualcomm® AI Hub Models

ConvNextTiny is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.

This is based on the implementation of ConvNext-Tiny-w8a16-Quantized found here. This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found here.

Sign up for early access to run these models on a hosted Qualcomm® device.

Example & Usage

Install the package via pip:

pip install "qai_hub_models[convnext_tiny_w8a16_quantized]"

Once installed, run the following simple CLI demo:

python -m qai_hub_models.models.convnext_tiny_w8a16_quantized.demo

More details on the CLI tool can be found with the --help option. See demo.py for sample usage of the model, including pre/post-processing scripts. Please refer to our general instructions on using models for more details.
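
The model can also be loaded directly from Python. The snippet below is a minimal sketch; Model.from_pretrained() and the (1, 3, 224, 224) input shape are assumptions based on how other Qualcomm® AI Hub Models packages expose their models, so check demo.py for the exact interface:

import torch
from qai_hub_models.models.convnext_tiny_w8a16_quantized import Model

# Load the pretrained quantized model (weights are downloaded on first use).
model = Model.from_pretrained()
model.eval()

# Dummy 224x224 RGB input; replace with a real, preprocessed image tensor.
image = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    logits = model(image)

# Index of the top-scoring Imagenet class.
print(logits.argmax(dim=-1).item())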

Export for on-device deployment

This repository contains export scripts that produce a model optimized for on-device deployment. This can be run as follows:

python -m qai_hub_models.models.convnext_tiny_w8a16_quantized.export

Additional options are documented with the --help option. Note that the above script requires access to Qualcomm® AI Hub. Deployment instructions for Qualcomm® AI Hub can be found here.
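
For example, exporting for a specific target device might look like the following (the --device flag is an assumption based on the shared export tooling in Qualcomm® AI Hub Models; confirm the available options with --help):

python -m qai_hub_models.models.convnext_tiny_w8a16_quantized.export --device "Samsung Galaxy S23"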

License

  • The license for the original implementation of ConvNext-Tiny-w8a16-Quantized can be found here.
  • The license for the compiled assets for on-device deployment can be found here.
