The Acuity model zoo contains a set of popular neural-network models created or converted (from Caffe, Tensorflow, PyTorch, TFLite, DarkNet, or ONNX) by the Acuity toolkits.
Acuity uses a JSON format to describe a neural-network model, and we provide an online model viewer to help visualize data-flow graphs (a minimal JSON inspection sketch follows the model list below). The model viewer has been part of Netron since version 4.6.8.
- Alexnet (OriginModel)
- Inception-v1 (OriginModel)
- Inception-v2 (OriginModel)
- Inception-v3 (OriginModel)
- Inception-v4 (OriginModel)
- Mobilenet-v1 (OriginModel)
- Mobilenet-v2 (OriginModel)
- Mobilenet-v3 (OriginModel)
- EfficientNet (OriginModel)
- EfficientNet (EdgeTPU) (OriginModel)
- Nasnet-Large (OriginModel)
- Nasnet-Mobile (OriginModel)
- Resnet-50 (OriginModel)
- Resnext-50 (OriginModel)
- Senet-50 (OriginModel)
- Squeezenet (OriginModel)
- VGG-16 (OriginModel)
- Xception (OriginModel)
- DenseNet (OriginModel)
- Faster-RCNN-ZF (OriginModel)
- Mobilenet-SSD (OriginModel)
- Mobilenet-SSD-FPN (OriginModel)
- MTCNN PNet (OriginModel) RNet (OriginModel) ONet (OriginModel) LNet (OriginModel)
- SSD (OriginModel)
- Tiny-YOLO (OriginModel)
- YOLO-v1 (OriginModel)
- YOLO-v2 (OriginModel)
- YOLO-v3 (OriginModel)
- YOLO-v4 (OriginModel)
- YOLO-v5 (OriginModel)
- FCOS (OriginModel)
- SRCNN (OriginModel)
- VDSR (OriginModel)
- EDSR_x2 (OriginModel)
- EDSR_x3 (OriginModel)
- EDSR_x4 (OriginModel)
- ESRGAN (OriginModel)
- QuartzNet (OriginModel)
- DPRNN (OriginModel)
- RNNOISE (OriginModel)
- Speaker Verification (OriginModel)
- DS_CNN (OriginModel)
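Because every model above is described in JSON, it can be inspected with ordinary tooling before being opened in the viewer. The snippet below is a minimal sketch that makes no assumptions about the Acuity schema; the file name is a placeholder for any model description from this zoo.

```python
import json

# Minimal sketch: inspect an Acuity JSON model description without
# assuming its schema. "model.json" is a placeholder path; use any
# model description file from this zoo.
with open("model.json") as f:
    model = json.load(f)

# List the top-level sections of the graph description.
for key, value in model.items():
    print(key, type(value).__name__)
```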
Acuity is a Python-based neural-network framework built on top of Tensorflow. It provides a set of easy-to-use, high-level layer APIs as well as the infrastructure for optimizing neural networks for deployment on hardware platforms powered by Vivante Neural Network Processor IP. Going from a pre-trained model to hardware inferencing can be as simple as three automated steps.
- Importing from popular frameworks such as Tensorflow and PyTorch
  AcuityNet natively supports Caffe, Tensorflow, PyTorch, ONNX, TFLite, DarkNet, and Keras imports, and it can be extended to support other NN frameworks.
- Fixed-Point Quantization
  AcuityNet provides accurate post-training quantization and reports accuracy before and after quantization for comparison. Advanced techniques such as KL-divergence calibration, weight equalization, hybrid quantization, and per-channel quantization are built into the AcuityNet quantizer (see the per-channel sketch after this list).
- Graph Optimization
  Neural-network graph optimization reduces graph complexity for inference through layer fusion, layer removal, and layer swapping (a layer-fusion sketch follows the list).
- Tensor Pruning
  Prunes neural-network tensors to remove ineffective synapses and neurons, producing sparse matrices (see the magnitude-pruning sketch below).
- Training and Validation
  AcuityNet provides the capability to train and validate neural networks.
- Inference Code Generator
  Generates OpenVX neural-network inference code that can run on any OpenVX-enabled platform.
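Acuity's quantizer itself is not reproduced here, but the idea behind per-channel fixed-point quantization can be sketched in a few lines of NumPy. The function below is a generic illustration, not Acuity's API: it derives one scale per output channel from the weight range and rounds to signed 8-bit integers.

```python
import numpy as np

def quantize_per_channel(weights, num_bits=8):
    """Generic per-channel symmetric quantization sketch (not Acuity's API).

    weights: float tensor with output channels on axis 0, e.g. (O, I, H, W).
    Returns int8 weights plus one scale per output channel.
    """
    qmax = 2 ** (num_bits - 1) - 1                  # 127 for int8
    flat = weights.reshape(weights.shape[0], -1)
    scales = np.abs(flat).max(axis=1) / qmax        # one scale per channel
    scales = np.where(scales == 0, 1.0, scales)     # guard all-zero channels
    q = np.round(flat / scales[:, None]).astype(np.int8)
    return q.reshape(weights.shape), scales

# Quantize random conv weights and check the reconstruction error.
w = np.random.randn(16, 3, 3, 3).astype(np.float32)
q, s = quantize_per_channel(w)
w_hat = q.astype(np.float32) * s[:, None, None, None]
print("max abs error:", np.abs(w - w_hat).max())
```

Per-channel scales keep each filter's quantization error independent of the others, which is why they usually preserve accuracy better than a single per-tensor scale.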
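Layer fusion, mentioned under Graph Optimization above, can be shown concretely with batch-normalization folding. The NumPy function below is a framework-agnostic sketch (not the optimizer's actual code) that absorbs a BatchNorm layer into the preceding convolution's weights and bias, so the fused graph runs one layer instead of two.

```python
import numpy as np

def fold_batchnorm(conv_w, conv_b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm(conv(x)) into a single convolution (sketch).

    conv_w: (O, I, H, W) weights; conv_b, gamma, beta, mean, var: shape (O,).
    """
    scale = gamma / np.sqrt(var + eps)              # per-channel scale applied by BN
    fused_w = conv_w * scale[:, None, None, None]   # absorb the scale into the weights
    fused_b = (conv_b - mean) * scale + beta        # absorb the shift into the bias
    return fused_w, fused_b
```

Running the convolution with `fused_w` and `fused_b` gives the same output as convolution followed by batch normalization, which is why inference graphs can drop the BN layer entirely.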
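Tensor pruning can be illustrated the same way: zeroing the smallest-magnitude weights yields the sparse matrices the list item refers to. The global magnitude threshold used here is just one common strategy, not necessarily the one Acuity applies.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero the smallest-magnitude weights so roughly `sparsity` of them become zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

w = np.random.randn(64, 64).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.8)
print("fraction of zeros:", float(np.mean(w_sparse == 0)))   # close to 0.8
```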
The Vivante NPU IP is a highly scalable and programmable neural-network processor that supports a wide range of machine-learning applications. It has been deployed in many fields to accelerate ML algorithms for AI-vision, AI-voice, AI-pixel, and other specialized use cases. The Vivante NPU IP offers a high-performance MAC engine as well as flexible programmability, so new operations and networks can be adopted without falling back to the CPU. Today, over 120 operators are supported, and the number continues to grow.
A mature software stack and complete solutions are provided to customers for easy integration and fast time to market.
Tooling
- Acuity Toolkits
- Acuity IDE
Runtime software stack support
- OpenVX and OpenVX NN Extension
- OpenCL
- Android NN API
- TFLite NPU Delegate (see the loading sketch after this list)
- ONNX Runtime Execution Provider
- ARMNN Backend
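For the TFLite path, an NPU delegate is typically loaded as an external delegate through the standard TFLite Python API, as sketched below. The library name `libvx_delegate.so` and the model path are placeholders for illustration; the actual delegate binary comes with the platform's runtime software stack.

```python
import numpy as np
import tensorflow as tf

# Load an external NPU delegate and run a TFLite model through it.
# "libvx_delegate.so" and "mobilenet_v1.tflite" are placeholder names.
delegate = tf.lite.experimental.load_delegate("libvx_delegate.so")
interpreter = tf.lite.Interpreter(
    model_path="mobilenet_v1.tflite",
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
out = interpreter.get_output_details()[0]
print(interpreter.get_tensor(out["index"]).shape)
```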