v0.8.0
New Additions
- All ImageNet-based models now have evaluate.py files: you can now run a full numerical accuracy evaluation on-device through AI Hub.
- Select models now have labels.txt files: these provide the classification labels for each model so you can build end-to-end applications more easily.
- When exporting quantized ONNX models, the inputs and outputs are now quantized automatically by default.
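A typical use of the new labels.txt files is mapping a classifier's top-1 output index to a human-readable class name. The sketch below is illustrative only: the file path and class names are assumptions, not the actual packaged contents.

```python
import os
import tempfile

# Illustrative stand-in for a packaged labels.txt (one class name per line).
labels_text = "tench\ngoldfish\ngreat white shark\n"

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "labels.txt")
    with open(path, "w") as f:
        f.write(labels_text)

    # Load the labels once, then index into them with the model's prediction.
    with open(path) as f:
        labels = [line.strip() for line in f]

top1_index = 1  # pretend this came from argmax over the model's logits
print(labels[top1_index])  # -> goldfish
```

The same pattern works for any of the models that ship a labels.txt: read the file into a list and index it with the argmax of the model output.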
Quality Improvements & Bug Fixes
- Improved performance of the "yolov8seg" and "sesr_m5_quantized" models by changing the output layout to "channel last"
- Aligned the memory numbers printed during export with the numbers shown on the AI Hub web page
- Added missing requirements for yolonas and yolonas-quantized
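For readers unfamiliar with the "channel last" change above: it reorders an output tensor from channel-first (C, H, W) to channel-last (H, W, C), which is often faster on mobile hardware. A minimal pure-Python sketch of that reordering, with made-up shapes for illustration:

```python
def chw_to_hwc(tensor):
    """Reorder a channel-first [C][H][W] nested list to channel-last [H][W][C]."""
    c, h, w = len(tensor), len(tensor[0]), len(tensor[0][0])
    return [
        [[tensor[ch][y][x] for ch in range(c)] for x in range(w)]
        for y in range(h)
    ]

# Tiny 2-channel, 2x2 example: each output pixel now holds both channel values.
chw = [[[1, 2], [3, 4]],   # channel 0
       [[5, 6], [7, 8]]]   # channel 1
print(chw_to_hwc(chw))  # -> [[[1, 5], [2, 6]], [[3, 7], [4, 8]]]
```

The values themselves are unchanged; only the memory layout of the output differs.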
Performance Numbers
- Updated existing numbers to reflect benchmarks from the latest AI Hub toolchain
- Updated llama2 numbers for X Elite