TVM is a deep learning compiler stack for CPUs, GPUs and accelerators. This repository presents some tips to set up TVM and deploy neural network models.

aquapapaya/InstallTVM


How to Install TVM

Use pip for installing TVM (tested on Colab)

CPU only

  • pip3 install apache-tvm

CUDA and CPU


Install from source files (tested on Ubuntu 20.04 and above)

Check environment

  • Find out installed graphics card by
    • sudo lshw -C display or
    • lspci | grep -iE --color 'vga|3d|2d' (with plain grep -i the | is treated literally, so -E is needed for alternation)
  • CUDA toolkit version >= 8.0 is required
    • Use nvidia-smi to check your version
    • Use sudo nvidia-settings to configure NVIDIA graphics driver
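For scripted setups, the CUDA-version requirement above can be checked by parsing nvidia-smi's banner line. A minimal sketch; the sample line below is hypothetical output standing in for a live nvidia-smi call:

```shell
# Extract "CUDA Version: X.Y" from nvidia-smi's banner.
# The sample stands in for: line=$(nvidia-smi | grep 'CUDA Version')
line='| NVIDIA-SMI 535.104.05   Driver Version: 535.104.05   CUDA Version: 12.2 |'
cuda_ver=$(echo "$line" | grep -oE 'CUDA Version: [0-9]+\.[0-9]+' | awk -F': ' '{print $2}')
echo "$cuda_ver"   # → 12.2
```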

Install OpenCL

  • Install OpenCL development files
    • sudo apt install ocl-icd-opencl-dev
  • Install the package for querying OpenCL information
    • sudo apt install clinfo
  • Deploy OpenCL runtime of Intel graphics
    • sudo apt install intel-opencl-icd
    • Check your Intel device with clinfo

Install required libraries

  • g++ 7.1 or higher
  • CMake 3.18 or higher
  • LLVM 4.0 or higher for CPU code generation
    • sudo apt install -y llvm
    • Use llvm-config --version to check your version
  • sudo apt update
  • sudo apt install -y python3 python3-dev python3-setuptools gcc libtinfo-dev zlib1g-dev build-essential cmake vim git
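The minimum versions listed above (g++ 7.1, CMake 3.18, LLVM 4.0) can be verified in a script with a sort -V comparison; a minimal sketch using hard-coded example versions:

```shell
# True when version $1 >= version $2 (relies on sort's -V version ordering).
version_ge() { [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

version_ge 7.5.0 7.1 && echo "g++ 7.5.0 meets the 7.1 minimum"
version_ge 3.10 3.18 || echo "CMake 3.10 is below the 3.18 minimum"
```

Feed it live versions with, e.g., version_ge "$(llvm-config --version)" 4.0.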

Install Intel oneDNN

Install necessary Python packages

  • sudo apt install -y python3-pip
  • pip3 install --upgrade pip
  • pip3 install numpy decorator attrs

Install optional Python packages

  • pip3 install pillow tensorflow tflite opencv-python easydict typing-extensions psutil scipy tornado cloudpickle
  • Install ONNX packages: pip3 install onnx onnxoptimizer
  • Install ONNX Runtime: pip3 install onnxruntime for CPU; pip3 install onnxruntime-gpu for CUDA

List installed Python packages

  • pip3 list or pip3 freeze
  • Create requirements.txt
    • pip3 freeze > requirements.txt
  • Install Python packages with requirements.txt
    • pip3 install -r requirements.txt

Obtain source files

From release

  • Download *.tar.gz (e.g. apache-tvm-src-v0.15.0.tar.gz) from the Release page
  • Extract the downloaded archive with tar zxvf
  • Open a terminal and go to the directory containing the extracted files

From GitHub

  • git clone --recursive https://github.com/apache/tvm.git
  • cd tvm

Build your own TVM

  • mkdir build
  • cp cmake/config.cmake build
  • cd build
  • Customize your compilation options
    • vi config.cmake
  • cmake ..
  • make -j4
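The "Customize your compilation options" step can also be done non-interactively with sed instead of vi. A sketch on a stand-in config.cmake; the set(USE_LLVM OFF) line matches TVM's shipped config.cmake, but verify against your copy:

```shell
work=$(mktemp -d) && cd "$work"
printf 'set(USE_LLVM OFF)\n' > config.cmake   # stand-in for build/config.cmake
# Enable LLVM-based CPU code generation without opening an editor.
sed -i 's/set(USE_LLVM OFF)/set(USE_LLVM ON)/' config.cmake
grep USE_LLVM config.cmake   # → set(USE_LLVM ON)
```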

Set environment variable

  • vi ~/.bashrc
  • Add the following two lines to ~/.bashrc
    • export TVM_HOME=/path_to_your_own_TVM
    • export PYTHONPATH=$TVM_HOME/python:${PYTHONPATH}
  • source ~/.bashrc
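What the two exports accomplish can be checked in a throwaway shell; the path below is a hypothetical placeholder — substitute your actual TVM checkout:

```shell
# Mirror the two ~/.bashrc lines with a placeholder path.
export TVM_HOME=/opt/tvm
export PYTHONPATH=$TVM_HOME/python:${PYTHONPATH}
# Python will now search $TVM_HOME/python first when importing tvm.
echo "$PYTHONPATH" | grep -o '^/opt/tvm/python'
```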

Verify the installed TVM

  • python3 -c "import tvm"

Compile and run pre-trained TFLite model

  • Download a quantized MobileNetV2 from Kaggle Models and extract it
  • Download compile_run_mobilenetv2.py and run python3 compile_run_mobilenetv2.py
  • Expected result: Prediction=> id: 282 name: tabby

Tips on TVM

Export data

  • Show parameters (weights)
    • print(lib.get_params())
  • Show all modules generated by relay.build
    • print(lib.get_lib().imported_modules)
  • Print host LLVM code
    • print(lib.get_lib().imported_modules[0].get_source())
  • Print device code
    • print(lib.get_lib().imported_modules[1].get_source())
  • Return internal configuration
    • print(lib.get_executor_config())

Install PAPI (Ver. 6 is required for TVM)

  • git clone https://bitbucket.org/icl/papi.git
  • cd papi/src/
  • ./configure --prefix=$PWD/install
  • sudo sh -c "echo 2 > /proc/sys/kernel/perf_event_paranoid"
    • Fixes the error "permission level does not permit operation"
  • make && make install
  • cd install/bin
  • ./papi_avail
    • To list available metrics

Install TVM and enable PAPI support

  • git clone --recursive https://github.com/apache/tvm.git
  • cd tvm/
  • mkdir build
  • cd build/
  • cp ../cmake/config.cmake .
  • find [the directory where PAPI is cloned] -name papi.pc
  • vi config.cmake to set: USE_LLVM ON
  • vi config.cmake to set: USE_PAPI [the directory where papi.pc exists]
  • cmake ..
  • make -j4
  • vi ~/.bashrc to set environment variable for TVM
  • source ~/.bashrc
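The find step above locates papi.pc; the directory holding it is the value USE_PAPI needs. A sketch with stand-in files showing how to splice that path into config.cmake (the set(USE_PAPI OFF) default matches TVM's config.cmake, but verify against your copy):

```shell
work=$(mktemp -d) && cd "$work"
printf 'set(USE_PAPI OFF)\n' > config.cmake            # stand-in for build/config.cmake
mkdir -p papi/src/install/lib/pkgconfig
touch papi/src/install/lib/pkgconfig/papi.pc           # stand-in for the built papi.pc
pc_dir=$(dirname "$(find "$PWD/papi" -name papi.pc)")  # the "find ... -name papi.pc" step
sed -i "s|set(USE_PAPI OFF)|set(USE_PAPI $pc_dir)|" config.cmake
grep USE_PAPI config.cmake
```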

Create Branch from Existing Commit

Create Your Own TVM

  • git clone --recursive https://github.com/apache/incubator-tvm.git
  • cd incubator-tvm
  • git checkout <commit>
  • git checkout -b <new_branch>
  • git remote add <remote_name> <remote_URL>
  • git remote -v
  • git branch
  • git config --global user.email <email>
  • git config --list
  • git push --set-upstream <remote_name> <new_branch>
    • --set-upstream is equivalent to -u
    • --set-upstream is only needed for the first push
    • git push <remote_name> <branch_name> for subsequent pushes
  • git tag -l
  • git push <remote_name> --tags (tags do not take upstream tracking, so --set-upstream is unnecessary here)
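The checkout/branch steps above can be exercised safely in a scratch repository first; a sketch where the repository, identity, and branch name are placeholders:

```shell
repo=$(mktemp -d) && cd "$repo"
git init -q .
git -c user.name=you -c user.email=you@example.com \
    commit -q --allow-empty -m "Initial commit"
base=$(git rev-parse HEAD)        # stands in for the <commit> you want to branch from
git checkout -q "$base"           # detached HEAD at that commit
git checkout -q -b my-new-branch  # <new_branch> rooted at that commit
git branch --show-current         # → my-new-branch
```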

Push Existing Repository to Code Hosting Service

  • cd [directory]
  • git remote add [name for the hosting] [hosting's .git]
  • git push -u [name for the hosting] --all
  • git push -u [name for the hosting] --tags

Push All New Repository to Code Hosting Service

  • cd [directory]
  • git init
  • git remote add [name for the hosting] [hosting's .git]
  • git add .
  • git commit -m "Initial commit"
  • git push -u [name for the hosting] master

Misc.

Upgrade cmake on Ubuntu

  • sudo apt remove cmake
  • pip3 install cmake
  • sudo ln -s /home/[account_name]/.local/bin/cmake /usr/bin/cmake
  • cmake --version
  • Deploy OpenCL runtime of Intel graphics
    • sudo apt install apt-file
    • sudo apt update
    • apt-file find libOpenCL.so
    • sudo add-apt-repository ppa:intel-opencl/intel-opencl
    • sudo apt update
    • sudo apt install intel-opencl-icd
  • Upgrade graphics driver using Software Updater of Ubuntu
    • Click on the 'Additional Drivers' tab
    • Choose the latest driver provided by Ubuntu

Notification

Reference
