
First steps

Grigori Fursin edited this page Feb 7, 2019 · 5 revisions


Demonstrating portable and customizable CK benchmarking workflows

We originally developed CK to help our partners implement modular, portable, customizable and reusable workflows for collaborative and reproducible experimentation, such as deep learning benchmarking.

Such workflows automate tedious and repetitive R&D tasks, as described in our FOSDEM'19 presentation.

Here is a simple example of how to automatically prepare a program workflow on any platform without the need for containers, i.e. to automatically detect software dependencies, and compile and run programs with multiple datasets in a unified way.

First, install CK on Linux, MacOS or Windows as described in the installation guide.

Now you can pull a CK repo with multiple benchmarks in the CK format:

$ ck pull repo:ctuning-programs

CK will also automatically pull other CK repos with related workflows and artifacts. You can list them as follows:

$ ck ls repo

You can now see all shared programs in the CK format:

$ ck ls program

You can find and investigate the CK format for a given program (say cbench-automotive-susan) as follows:

$ ck find program:cbench-automotive-susan

It is probably easier to browse this entry online with all its sources.

You can also inspect the (somewhat "scary") CK JSON meta description of this entry.

Now you can try to compile this program:

$ ck compile program:cbench-automotive-susan --speed

CK will invoke the function "compile" of the module "program" (you can find the source code of this module via "ck find module:program"), which will read the above meta information and perform the corresponding actions.
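Conceptually, each CK command maps to a call of a module's function with a JSON-like dictionary. The sketch below is a hypothetical, simplified illustration of that mapping (the helper `cli_to_request` is not part of CK; only the dictionary keys `action`, `module_uoa` and `data_uoa` follow CK's documented convention):

```python
# Hypothetical sketch: how a CK command line could translate into the
# dictionary that CK's kernel dispatches to a module. Simplified on purpose.

def cli_to_request(argv):
    """Convert e.g. ['compile', 'program:cbench-automotive-susan', '--speed']
    into a CK-style action request dictionary."""
    action = argv[0]
    module, data = argv[1].split(':', 1)
    request = {'action': action, 'module_uoa': module, 'data_uoa': data}
    for flag in argv[2:]:
        if flag.startswith('--'):
            request[flag[2:]] = 'yes'   # boolean CLI flags become 'yes' in CK
    return request

print(cli_to_request(['compile', 'program:cbench-automotive-susan', '--speed']))
```

This is only meant to show why the same workflow can be driven identically from the command line and from Python: both paths end up as one dictionary passed to one module function.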

For example, CK will attempt to automatically detect all installed software dependencies such as compilers and libraries. CK uses multiple plugins describing how to detect different software; you can find the list of supported software online.

Users can also add extra plugins in their own CK repositories.

You can also perform software detection manually, for example to detect all installed GCC versions:

$ ck detect soft --tags=compiler,gcc

All detected software is registered in CK with an associated virtual environment, similar to Python virtual environments and Conda, but with support for any binary installation:

$ ck show env
$ ck show env --tags=compiler,gcc

Now you can run this program as follows:

$ ck run program:cbench-automotive-susan

CK will collect and unify various characteristics (execution time, code size, etc.) via a JSON API.
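To illustrate what "unified characteristics" buys you, here is a small sketch that aggregates repeated executions. The key names (`execution_time_s`, `binary_size_bytes`) and the numbers are illustrative assumptions, not CK's exact JSON schema:

```python
# Sketch: aggregating unified characteristics from repeated runs.
# Key names and values are hypothetical examples, not CK's actual schema.
from statistics import mean, stdev

runs = [
    {'execution_time_s': 0.142, 'binary_size_bytes': 18432},
    {'execution_time_s': 0.139, 'binary_size_bytes': 18432},
    {'execution_time_s': 0.145, 'binary_size_bytes': 18432},
]

times = [r['execution_time_s'] for r in runs]
summary = {
    'repetitions': len(runs),
    'mean_time_s': round(mean(times), 4),
    'stdev_time_s': round(stdev(times), 4),
}
print(summary)
```

Because every program exposes its results through the same JSON structure, this kind of statistical post-processing works identically across all benchmarks.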

This allows one to perform unified benchmarking with multiple executions, monitor CPU/GPU frequency, perform statistical analysis of empirical results, and so on:

$ ck benchmark program:cbench-automotive-susan

Note that CK programs can also take multiple data sets, which can be shared by users in different repos (for example, when publishing a new paper):

$ ck search dataset
$ ck search dataset --tags=jpeg

Now users can assemble their own experiments simply by reusing such workflows (rather than preparing all this infrastructure from scratch).

Note that if a software dependency is not resolved, CK invokes its internal package manager to install the given software automatically. You can see the available CK packages online.

You can see them from the command line as follows:

$ ck search package --all

For example, you can install the latest LLVM as follows:

$ ck install package --tags=llvm,v6.0.0

Note that an associated CK environment will also be created:

$ ck show env --tags=llvm,v6.0.0
Since all packages are installed in user space ($HOME/CK-TOOLS), we also implemented virtual environments (based on user feedback, similar to Conda, but supporting arbitrary binary installations):

$ ck virtual env --tags=llvm,v6.0.0

In this case, multiple versions of the same tool can easily co-exist in CK, since CK automatically sets up PATH, LD_LIBRARY_PATH, etc. CK workflows can then use specific versions of the required tools.
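The mechanism can be illustrated with plain shell. The install prefix below is hypothetical; CK generates the actual environment scripts for each registered tool:

```shell
# Illustration only: CK's virtual environments work by prepending the
# chosen tool's directories to the search paths. Hypothetical prefix:
TOOL_DIR="$HOME/CK-TOOLS/llvm-6.0.0-linux-64"

export PATH="$TOOL_DIR/bin:$PATH"
export LD_LIBRARY_PATH="$TOOL_DIR/lib:${LD_LIBRARY_PATH:-}"

# Now tools from this version resolve first, while other installed
# versions remain untouched elsewhere on the system.
echo "${PATH%%:*}"
```

Because only environment variables change, switching between tool versions is instant and never modifies the system-wide installation.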

You can also activate several virtual environments at once:

$ ck show env
$ ck virtual env {UID1 from above list} {UID2 from above list} ...

Another important CK feature is that all these steps work in the same way across Windows, Linux, MacOS and even Android (you just need to add --target_os=android23-arm64 when installing packages or compiling and running your programs), and CK automatically supports both Python 2 and 3.

Now you can try a more complex example with TensorFlow. You should pull the related repository and install the CPU version of TensorFlow via CK:

$ ck pull repo:ck-tensorflow
$ ck install package --tags=lib,tensorflow,vcpu,vprebuilt

Check that it was installed successfully:

$ ck show env --tags=lib,tensorflow

You can find the path to a given entry (with the TF installation) as follows:

$ ck find env:{env UID from above list}

Start a CK virtual environment and test TF:

$ ck virtual env --tags=lib,tensorflow
$ ipython
> import tensorflow as tf

Run the CK classification workflow example using the installed TF:

$ ck run program:tensorflow --cmd_key=classify

You can even try to rebuild TensorFlow via CK:

$ ck install package:lib-tensorflow-1.7.0-cuda

CK will attempt to detect your CUDA compiler and related libraries, Java, and Bazel, and will then try to rebuild TF. Note that you may still need to install some extra dependencies yourself.

You can now try to build another AI framework such as Caffe with CUDA support and run classification in a similar way! Note that CK should reuse detected CUDA compilers, libraries and other deps from the previous step, or will attempt to install missing packages:

$ ck pull repo --url=
$ ck install package:lib-caffe-bvlc-master-cuda-universal
$ ck run program:caffe --cmd_key=classify

You can see online how to install Caffe for Linux, MacOS, Windows and Android via CK.

You can even participate in crowd-tuning of some C programs (see the shared optimization cases online):

$ ck pull repo:ck-crowdtuning

$ ck crowdtune program:cbench-automotive-susan

You can also invoke CK from your own Python scripts using one unified access function. For example, you can run the above program:caffe from a Python script as follows:

import ck.kernel as ck

# Run the Caffe classification workflow via CK's unified access function
r=ck.access({'action':'run',
             'module_uoa':'program',
             'data_uoa':'caffe',
             'cmd_key':'classify',
             'out':'con'})

if r['return']>0: ck.err(r)

print (r)

You can also reuse CK kernel productivity functions, which we made portable across Python 2 and 3 and across different OSes and platforms!

Finally, you can check a complex SW/HW co-design workflow, implemented and unified via CK, for image classification using deep learning on ARM GPU platforms.

As you may notice, CK is simply a local repository and workflow manager that lets one share code and data in a customizable, portable and reusable way with a unified CMD/JSON API and meta information. It promotes artifact reuse while gradually substituting and unifying the numerous ad-hoc scripts and data structures that easily die after developers leave a project.

Find and reuse other shared CK workflows and components.

You can check how the above CK workflows and components are used in ACM ReQuEST tournaments to collaboratively co-design the SW/HW stack for emerging workloads such as deep learning.

You can also check two other alternative Getting Started guides.

Please read the next guide to learn how to add your own workflows and components!

Questions and comments

You are welcome to get in touch with the CK community if you have questions or comments!
