Collective Knowledge repository for TVM and VTA

All CK components can be found in one GitHub repository!

This project is hosted by the cTuning foundation.


This repository provides high-level, portable and customizable Collective Knowledge workflows for TVM and VTA. It is part of our long-term community initiative to unify and automate AI, ML and systems R&D using the Collective Knowledge framework (CK), and to collaboratively co-design an efficient SW/HW stack for AI/ML during open ACM ReQuEST competitions, as described in the ACM ReQuEST report. All benchmarking and optimization results are available in the public CK repository. See the CK getting started guide for more details about CK.

Minimal CK installation

The minimal installation requires:

  • Python 2.7 or 3.3+ (the limitation is mainly due to unit tests)
  • Git command line client.

You can install the latest CK via PIP (with sudo on Linux) as follows:

$ sudo pip install ck

You can also install CK in your local user space without sudo as follows:

$ git clone
$ export PATH=$PWD/ck/bin:$PATH
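
After adjusting PATH, it can be handy to confirm that the ck command actually resolves before going further. A minimal sketch (the on_path helper is hypothetical, not part of CK):

```shell
# Hypothetical helper: check whether a command resolves on PATH.
on_path() {
  command -v "$1" >/dev/null 2>&1
}

# After the export above, this should print the confirmation:
#   on_path ck && echo "CK is on PATH"
```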

See CK installation procedures for other operating systems here.

CK workflow installation for TVM

Installing CPU version

$ ck pull repo:ck-tvm
$ ck install package --tags=lib,tvm,vcpu,vllvm

Installing GPU (CUDA) version

$ ck pull repo:ck-tvm
$ ck install package --tags=lib,tvm,vcuda,vllvm
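
Note that the CPU and CUDA builds differ only in their package tags. If you switch between targets often, the tag string can be selected with a small helper (a convenience sketch; tvm_tags is not a CK command):

```shell
# Hypothetical helper: map a target name to the CK package tags used above.
tvm_tags() {
  case "$1" in
    cpu)  echo "lib,tvm,vcpu,vllvm" ;;
    cuda) echo "lib,tvm,vcuda,vllvm" ;;
    *)    echo "unknown target: $1" >&2; return 1 ;;
  esac
}

# Usage (assumes repo:ck-tvm has already been pulled):
#   ck install package --tags="$(tvm_tags cuda)"
```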

Image classification via TVM

We provide a simple example that classifies images using an MXNet model and TVM. You can test it as follows:

$ ck pull repo:ck-mxnet
$ ck install package:mxnetmodel-mobilenet-1.0

$ ck run program:image-classification-tvm --cmd_key=classify_cpu
$ ck run program:image-classification-tvm --cmd_key=classify_gpu

CK workflow for VTA (deep learning accelerator stack)

We provide CK workflows, packages and programs for VTA (the Versatile Tensor Accelerator).

VTA with a Pynq FPGA board

First, set up your Pynq board as described here. We suggest using a fast SD card with at least 16GB of storage.

Suppose you know the IP address of your board. Connect to it using SSH with "xilinx" as both the username and the password:

$ ssh xilinx@

You can then install CK and pull CK repositories with TVM/VTA workflows and some data sets:

$ sudo pip install ck

$ ck pull repo:ck-tvm

You can then try to start the VTA server via CK:

$ ck run program:tvm-vta-pynq-server --sudo

Note that CK will attempt to automatically detect available compilers, Python and the Pynq DMA library, build VTA with the TVM run-time, and start the server on port 9091. CK may occasionally ask you to make a choice when more than one version of a required dependency is found; in most cases, you can just press Enter to select the default one.
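
Before moving to the host machine, you may want to verify that the server's port is reachable from your network. A bash-only sketch using /dev/tcp (this checks TCP reachability only, not the TVM RPC protocol; port_open is a hypothetical helper):

```shell
# Hypothetical helper (bash): test whether a TCP port accepts connections.
# /dev/tcp is a bash feature, so this will not work in plain sh.
port_open() {
  host="$1"; port="${2:-9091}"
  (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null
}

# Example (with your board's address): port_open <board-ip> && echo "server is up"
```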

Now you can set up your host machine. We expect that you already have CK installed. Just pull the same repositories and run the image classification example as follows (note that you only need --env.INIT_PYNQ once, to upload the bitstream and reconfigure the run-time):

$ ck pull repo:ck-tvm

$ ck run program:image-classification-vta-pynq --env.INIT_PYNQ

You can also specify a different host or port for your FPGA board as follows:

$ ck run program:image-classification-vta-pynq --env.INIT_PYNQ --env.CK_MACHINE_HOST= --env.CK_MACHINE_PORT=9091
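
If you target several boards, it can help to see the full command line before running anything. A hedged sketch that only composes the command string from a host and port (vta_classify_cmd is a hypothetical helper, not a CK command):

```shell
# Hypothetical helper: compose the CK command line for a remote board,
# so the host/port plumbing can be inspected before running anything.
vta_classify_cmd() {
  host="$1"; port="${2:-9091}"
  printf 'ck run program:image-classification-vta-pynq --env.INIT_PYNQ --env.CK_MACHINE_HOST=%s --env.CK_MACHINE_PORT=%s\n' "$host" "$port"
}

# Example (10.0.0.5 is a placeholder board address):
#   eval "$(vta_classify_cmd 10.0.0.5)"
```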

CK will also attempt to detect required dependencies (such as the LLVM compiler), install any missing ones, build TVM for the FPGA, and run the image classification example. You can then select an image and obtain the classification result.

If you encounter issues with this CK workflow, feel free to get in touch with the CK authors using the mailing list or Slack channel.

VTA with a simulator

If you don't have an FPGA board, you can use an integrated simulator on your host machine. You can do it as follows:

$ ck pull repo:ck-tvm

$ ck run program:image-classification-vta-sim

CK will attempt to build a TVM/VTA version with a simulator target, and will perform image classification using this simulator.

CK virtual environment for TVM/VTA

CK supports lightweight virtual environments for all packages, automatically setting all necessary environment variables for different versions of tools natively installed on a user's machine.

You can start a virtual environment for a given TVM package as follows:

$ ck virtual env --tags=lib,tvm
> export | grep "CK_"
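
To inspect exactly what the virtual environment exported, the CK_ variables can be filtered and sorted in one place (a small convenience sketch; ck_env_vars is a hypothetical helper, not a CK command):

```shell
# Hypothetical helper: list all CK_-prefixed variables in the current
# environment, one per line, sorted by name.
ck_env_vars() {
  env | grep '^CK_' | sort
}
```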

Related Publications

@article{chen2018tvm,
  author    = {Tianqi Chen and
               Thierry Moreau and
               Ziheng Jiang and
               Haichen Shen and
               Eddie Q. Yan and
               Leyuan Wang and
               Yuwei Hu and
               Luis Ceze and
               Carlos Guestrin and
               Arvind Krishnamurthy},
  title     = {{TVM:} End-to-End Optimization Stack for Deep Learning},
  journal   = {CoRR},
  volume    = {abs/1802.04799},
  year      = {2018},
  archivePrefix = {arXiv},
  eprint    = {1802.04799}
}

@article{fursin2018ck,
  author    = {Grigori Fursin and
               Anton Lokhmotov and
               Dmitry Savenko and
               Eben Upton},
  title     = {A Collective Knowledge workflow for collaborative research into multi-objective
               autotuning and machine learning techniques},
  journal   = {CoRR},
  volume    = {abs/1801.08024},
  year      = {2018},
  archivePrefix = {arXiv},
  eprint    = {1801.08024}
}


