
Vitis-AI GO

See What's New

Introduction

vaiGO (Vitis-AI GO) is based on Xilinx Vitis-AI and provides an easier way to use Vitis-AI to convert a trained AI model into a bitstream that can be deployed on an FPGA Edge AI Box. It integrates the Vitis-AI xmodel compilation flow and an evaluation method into our GO.

As shown below, a trained model can be quantized and compiled into an xmodel and then deployed on an FPGA. See dpu-sc for how to perform AI inference on an FPGA, and see the EXMU-X261 user manual for more FPGA information.



The following chart shows the Vitis-AI model-converting flow. Evaluating the model helps catch problems that can appear while quantizing or compiling it, but the evaluation step is optional: you can choose whether to run it.
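As a concrete illustration of the quantize step, the sketch below uses the Vitis-AI TensorFlow 2 quantizer inside the Vitis-AI container. It is a minimal example under stated assumptions, not part of GO itself: the model path, calibration data, and file names are placeholders, and the `vitis_quantize` API is assumed to be the one shipped in the Vitis-AI TF2 container.

```python
# Minimal sketch of the quantize step for a TensorFlow 2 model, run inside the
# Vitis-AI container. Paths and the calibration data are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow_model_optimization.quantization.keras import vitis_quantize

float_model = tf.keras.models.load_model("float_model.h5")  # trained float model (placeholder)
calib_images = np.load("calib_images.npy")                  # small unlabeled calibration set (placeholder)

quantizer = vitis_quantize.VitisQuantizer(float_model)
quantized_model = quantizer.quantize_model(calib_dataset=calib_images)
quantized_model.save("quantized_model.h5")                  # input to the compile step
```

The quantized model is then compiled into an xmodel for the target DPU; the compile step needs the DPU fingerprint, as noted in the next section.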



Matters Needing Attention

  1. Please select the correct version before downloading the Vitis-AI GitHub repository. The required version depends on the libraries of the FPGA system.
  2. In the model-compiling step, the fingerprint depends on the Vitis-AI version and on the channel parallelism of the DPU IP on the FPGA (see the sketch after this list).
  3. Make sure all of your actions are executed inside the Vitis-AI container, because its OS environment differs from the host.
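For illustration, the sketch below shows one way to supply the fingerprint to the compiler through an arch.json file and then invoke `vai_c_tensorflow2` from Python. The fingerprint value, file names, and network name are placeholders; read the real fingerprint from the DPU IP configured on your FPGA.

```python
# Sketch: write the DPU fingerprint into arch.json and compile a quantized
# TF2 model to an xmodel (run inside the Vitis-AI container).
import json
import subprocess

arch = {"fingerprint": "0x<your-dpu-fingerprint>"}  # placeholder; depends on VAI version and DPU IP
with open("arch.json", "w") as f:
    json.dump(arch, f)

subprocess.run([
    "vai_c_tensorflow2",
    "-m", "quantized_model.h5",  # quantized model from the previous step
    "-a", "arch.json",           # target DPU fingerprint
    "-o", "compiled_model",      # output directory
    "-n", "my_network",          # name of the generated xmodel
], check=True)
```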

Currently Supported

Note: Caffe has been deprecated since Vitis-AI 2.5 (see the Vitis-AI user guide), so we do not support Caffe in GO.

The supported frameworks depend on the Vitis-AI container.

  • Tensorflow1
  • Tensorflow2
  • PyTorch

Supported dataset formats

Note: The dataset should include training and validation sets (at least 50 images each), and make sure the training dataset folder is inside the Vitis-AI folder. A sanity-check sketch follows the list below.

  • png
  • jpg
  • jpeg
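
As a quick check of the notes above, the hypothetical helper below counts the images with a supported extension in a dataset folder and fails when there are fewer than 50; the folder paths are placeholders.

```python
# Sketch: verify a dataset folder holds at least 50 images in a supported format.
from pathlib import Path

SUPPORTED_EXTS = {".png", ".jpg", ".jpeg"}
MIN_IMAGES = 50

def check_dataset(folder: str) -> None:
    images = [p for p in Path(folder).rglob("*") if p.suffix.lower() in SUPPORTED_EXTS]
    if len(images) < MIN_IMAGES:
        raise ValueError(f"{folder}: only {len(images)} images found, need at least {MIN_IMAGES}")
    print(f"{folder}: {len(images)} images OK")

check_dataset("Vitis-AI/dataset/train")  # placeholder paths inside the Vitis-AI folder
check_dataset("Vitis-AI/dataset/val")
```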

Supported trained-model frameworks

  • Tensorflow
  • PyTorch
  • Darknet

Minimum requirements

Host requirements

  • Ubuntu 18.04
  • Python 3, at least 3.5
  • Intel Core i5-12400
  • RAM, at least 32 GB

We used the following GPUs and Vitis-AI versions to test our conversion flow.

Vitis-AI 1.4

  • Tesla T4 16GB or RTX 2070 Super 12GB
  • Nvidia driver version 470.103.01
  • CUDA Version 10.x

Vitis-AI 2.5

  • RTX3080 12GB
  • Nvidia driver version 470.141.03
  • CUDA Version 11.x

Models we have tested

Note: We have verified that the following models can be quantized and compiled successfully. Other models might fail during quantization or compilation; please check the Xilinx supported operations list (see "Supported Operations and APIs" in the Vitis-AI user guide).

Tensorflow1

  • YOLOv3-tiny
  • YOLOv4
  • YOLOv4-tiny

Tensorflow2

  • YOLOv4-tiny
  • CNN
  • InceptionNet

PyTorch

  • LPRNet

Getting Started

Reference

Contributor

Author   E-mail                     Corp.
Allen.H  allen_huang@innodisk.com   innodisk Inc
Hueiru   hueiru_chen@inndisk.com    innodisk Inc
Wilson   wilson_yeh@innodisk.com    innodisk Inc
Jack     juihung_weng@innodisk.com  innodisk Inc
