
# fpgaconvnet-tutorial

Welcome to the start of your fpgaConvNet journey!

fpgaConvNet is an automated toolflow for designing Convolutional Neural Network (CNN) accelerators on FPGAs with state-of-the-art performance and efficiency. It takes a CNN model description (either PyTorch or ONNX) together with the platform constraints of an FPGA, and produces an accelerator bitstream optimised for that specific FPGA and model pair. In this repo we will take you through the different aspects of the fpgaConvNet toolflow using examples of interacting with the API, from hardware component modelling all the way to end-to-end model-to-accelerator compilation.
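If you are starting from a PyTorch model, the first step is simply to export it to ONNX. A minimal sketch, where the network, input shape, and output path are placeholder choices rather than part of the toolflow itself:

```python
import torch
import torchvision.models as models

# Example CNN: a ResNet-18 from torchvision (any PyTorch CNN works here).
model = models.resnet18(weights=None)
model.eval()

# Dummy input matching the expected shape (batch, channels, height, width).
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX; the resulting file is the model description handed to the toolflow.
torch.onnx.export(model, dummy_input, "resnet18.onnx", opset_version=13)
```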

The toolflow can be used to accelerate a number of applications, including Image Classification, Object Detection, Segmentation, Human Action Recognition, Keyword Spotting, and Anomaly Detection, among others.

Instructions for setting up the environment can be found in 0: Getting Started.

You are also welcome to try our end-to-end development example here.

## Project Structure

[Figure: tutorial-structure]

The fpgaConvNet codebase is split into four individual repositories:

- fpgaconvnet-torch: a collection of pre-trained CNN models, providing emulated accuracy results for features such as quantization and sparsity. This repository is optional; users can provide their own ONNX model instead.

- fpgaconvnet-model: provides hardware performance and resource modelling, and converts the ONNX model into a JSON-format accelerator configuration (a sketch of inspecting this ONNX input follows the list).

- fpgaconvnet-optimiser: performs Design Space Exploration based on the model predictions to identify the optimal accelerator configuration.

- fpgaconvnet-hls: contains HLS code templates, translating the identified accelerator configuration into actual source files which can be synthesized by Xilinx tools.
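The hand-off between fpgaconvnet-torch and fpgaconvnet-model is a plain ONNX file. As a minimal sketch, using only the generic `onnx` Python package (not the fpgaConvNet APIs, and with a placeholder file name), you can inspect the exported graph before passing it on:

```python
import onnx

# Load the exported model (placeholder path; use your own ONNX file).
model = onnx.load("resnet18.onnx")

# Sanity-check that the graph is well-formed before handing it to the toolflow.
onnx.checker.check_model(model)

# Each node is a layer that the hardware modelling stage maps into the
# JSON-format accelerator configuration.
for node in model.graph.node:
    print(node.op_type, node.name)
```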