Portable workflows

Grigori Fursin edited this page Jul 17, 2018 · 67 revisions


Here we describe how we enable portable, customizable and reusable CK workflows (real workloads, benchmarks, etc.) based on:

  • shared software detection plugins with a unified JSON API to automatically detect already installed software;
  • shared CK packages to automatically install, rebuild and mix packages with the already installed software detected above, supporting Linux, Windows, MacOS and Android with the same command line/JSON API (note that the CK package manager can be used as a unified, high-level front-end for scons, cmake, make, EasyBuild, Spack, etc.);
  • a CK virtual environment, similar to Python virtualenv but with a unified JSON API, supporting any software and binary installations.

This approach helps create portable, customizable and reusable benchmarks and realistic workloads. It is currently used in ACM ReQuEST tournaments to co-design efficient SW/HW stacks for emerging workloads such as deep learning (see the ACM ReQuEST-ASPLOS'18 proceedings and the ReQuEST'18 report), to crowdsource autotuning, and to automate artifact evaluation at systems, ML and AI conferences including CGO, PPoPP, SuperComputing and SysML.

You can get a first feel for portable CK workflows, software detection and package installation by starting from this simple tutorial.

Introduction: fighting SW/HW chaos

Since 2008, we have spent most of our research time not on prototyping and validating ideas, but on fighting a continuously changing software and hardware stack in our projects (see the motivation paper and slides).

Furthermore, we had a strong need to validate our research techniques on autotuning, crowd-tuning and machine learning across multiple, often co-existing versions of compilers, libraries, tools, benchmarks and data sets.

Since there was no technology to implement portable experimental workflows and reproduce results, we and our fellow colleagues often had to develop ad-hoc scripts and solutions for each project, often with hardwired names, paths and versions of software dependencies and data sets.

Such "hacking" is enough to quickly validate a few ideas, but cannot support long-term, sustainable, collaborative and reproducible research projects.

For example, we had to solve these problems to enable our collaborative approach to crowdsource benchmarking, optimization and co-design of efficient SW/HW stacks for emerging workloads such as deep learning (see GCC Summit'09, ArXiv'13, JSC'14, DATE'16).

Unfortunately, we still face these problems during our artifact evaluation initiative to reproduce results from accepted papers at the leading computer systems conferences including PPoPP, CGO and SuperComputing: our reviewers spend more time setting up and running experiments in continuously changing environments than validating and analyzing results (see slides with our AE experience at CGO'17 and PPoPP'17).

Docker and Virtual Machines may partially solve this problem by creating an image (snapshot) of a user environment with all dependencies. Such images can be shared in a workgroup to confirm a research idea, but they hide all the SW/HW chaos underneath rather than solving it.

Furthermore, they are not portable to new architectures with new features, become quickly outdated for computer systems research, and provide no support for customizing research workflows and adapting them to the latest native environments, with new tools, different libraries, benchmarks and data sets, and possibly even a different OS and architecture.

CMake can help users automatically detect software dependencies and build a project in a user environment. However, CMake tries to be too smart: it selects the most suitable version of a required software dependency and then produces a fixed Makefile for a given project. This is useful for an end-user but not for systems researchers.

Indeed, HPC and systems researchers need flexible experimental workflows that can easily plug in multiple, possibly co-existing software packages and their versions, such as different and continuously evolving compilers (GCC, LLVM, ICC, PGI), analytics tools (SciPy, old R versus new R) and libraries (CUDA, MKL, DNN frameworks, cuBLAS, OpenBLAS, clBLAS)! They may also want to use already installed software with just a few packages rebuilt for a given environment (for example using the Spack or EasyBuild tools).

Collective Knowledge complete approach

At the end of 2011 we realized that we had no choice but to develop a complete workflow system with an integrated package manager and a virtual environment to solve all of the above problems in one go. We first started experimenting with portable workflows and an integrated package manager within our Collective Mind framework (2012-2014), now deprecated. Eventually, we used this experience and user feedback to create a new, simple, portable, command line based workflow framework called Collective Knowledge (2014-cur.), which solved most of the above problems, at least for our own research and for the R&D of our partners.

We provide four abstractions (modules) in CK to implement the above functionality in the ck-env repository:

  • Module "os" to describe properties of different operating systems required to create portable packages and workflows. You can see shared OS entries here, as well as example JSON meta descriptions of OS entries for generic Linux, generic Windows, generic MacOS and recent Android 28 for ARM64.
  • Module "soft" to automatically detect different software (tools, data sets, libraries) using CK plugins shared by the community, and to prepare a virtual CK environment. See examples of such plugins here. Note that these plugins are decentralized and can be shared in other public or private repositories!
  • Module "env" to enable a virtual environment and the co-existence of different versions of different tools installed in system or user space at the same time (possibly by different build systems such as cmake, Spack, EasyBuild, scons, etc.). See this tutorial with usage examples!
  • Module "package" to provide a unified abstraction over different package build systems (direct binary download, make, cmake, scons, EasyBuild, Spack, etc.) across Linux, Windows, MacOS and Android (all OS entries from above), and to register a virtual CK environment. See the list of already shared packages. Note that these packages are also decentralized and can be shared in other public or private repositories!
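For illustration, the meta description of a software detection plugin might look roughly like this. This is a hypothetical sketch, not a real entry from the ck-env repository: the field names ("soft_name", "soft_file", "version_cmd", etc.) are invented for illustration, and real "soft" plugins carry more fields plus a "customize.py" script.

```json
{
  "soft_name": "GCC compiler",
  "tags": "compiler,lang-c,gcc",
  "soft_file": {
    "linux": "gcc",
    "win": "gcc.exe"
  },
  "customize": {
    "check_that_exists": "yes",
    "version_cmd": "--version"
  }
}
```

The key idea is that the plugin describes *how* to find and version-check a tool, while the tags ("compiler,lang-c,gcc") are what workflows later use to request it as a dependency.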

Researchers can now quickly assemble portable and customizable workflows for benchmarks, real workloads, research scenarios, etc. using module "program" and specifying tags or version ranges of the required software from above in the JSON meta for compilation (if applicable) and execution, as shown in this cbench-automotive-susan example:

 "compile_deps": {
    "compiler": {
      "local": "yes", "name": "C compiler", "sort": 10, "tags": "compiler,lang-c"
    },
    "xopenme": {
      "local": "yes", "name": "xOpenME library", "sort": 20, "tags": "lib,xopenme"
    }
  }
When compiling and running this program, CK will first attempt to detect already installed software (a C compiler and the xOpenME library) and register virtual environments. If none is found, CK will then search for CK packages with the same tags and automatically install them (for example, downloading a binary distribution of LLVM or even rebuilding it if needed):

$ ck pull repo:ck-crowdtuning

$ ck compile program:cbench-automotive-susan --speed
$ ck run program:cbench-automotive-susan

$ ck run program:cbench-automotive-susan --target_os=android21-arm64
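The tag-based dependency resolution described above can be sketched roughly as follows. This is a minimal, hypothetical Python sketch, not CK's actual implementation: the function name, data structures and matching policy are invented for illustration.

```python
# Hypothetical sketch of CK-style dependency resolution by tags.
# Not CK's real code: names and data structures are invented for illustration.

def resolve_dependency(dep_tags, registered_envs, available_packages):
    """Return an environment whose tags cover all dependency tags,
    'installing' a matching package first if none is registered yet."""
    required = set(dep_tags.split(","))

    # 1. Try to reuse an already registered virtual environment.
    matches = [e for e in registered_envs
               if required.issubset(set(e["tags"].split(",")))]
    if matches:
        # CK would ask the user to pick one if several match;
        # here we simply take the first.
        return matches[0]

    # 2. Otherwise, look for a package with the same tags and "install" it.
    for pkg in available_packages:
        if required.issubset(set(pkg["tags"].split(","))):
            env = {"tags": pkg["tags"], "package": pkg["name"]}
            registered_envs.append(env)  # register a new virtual environment
            return env

    raise RuntimeError("no environment or package matches: " + dep_tags)

# Example: resolving the "compiler" dependency from the meta above.
envs = []
packages = [{"name": "compiler-llvm-6.0.0-universal",
             "tags": "compiler,lang-c,llvm"}]
env = resolve_dependency("compiler,lang-c", envs, packages)
print(env["package"])  # the matching package was installed and registered
```

Resolving the same dependency a second time finds the registered environment and reuses it instead of installing anything, which mirrors the reuse behaviour described below.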

Note that you can now see all virtual environments automatically registered in CK:

$ ck show env
$ ck show env --tags=lang-c
$ ck show env --target_os=android21-arm64

Note that each CK "env" entry has an automatically generated "env.sh" (Linux, MacOS) or "env.bat" (Windows) batch file which pre-sets the environment for a given version of a given tool on a given host and for a given target. These batch files are prepared by the "customize.py" inside the related "soft" plugins. The community gradually extends them to add/expose more environment variables for a given tool.
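For illustration, such a generated env.sh might look roughly like this for a GCC environment on Linux. This is a hypothetical sketch: the installation path and the specific variable names are invented, and the real files are produced by each plugin's "customize.py".

```shell
#!/bin/sh
# Hypothetical sketch of a CK-generated env.sh for a GCC environment on Linux.
# The path and variable names are invented for illustration; real files are
# produced by the "customize.py" of the corresponding "soft" plugin.

export CK_ENV_COMPILER_GCC=/usr/lib/gcc-7            # tool installation prefix
export CK_ENV_COMPILER_GCC_BIN=$CK_ENV_COMPILER_GCC/bin
export CK_CC="$CK_ENV_COMPILER_GCC_BIN/gcc"          # C compiler exposed to workflows
export CK_CXX="$CK_ENV_COMPILER_GCC_BIN/g++"         # C++ compiler exposed to workflows

# Prepend the tool to PATH so workflows pick up exactly this version.
export PATH="$CK_ENV_COMPILER_GCC_BIN:$PATH"
```

Because each "env" entry carries its own batch file, several versions of the same tool can co-exist: sourcing a different env.sh simply points the same variables at a different installation.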

Whenever you compile and run the same program for the same target again, CK will first detect the related virtual environments, check that they are not outdated (for example, if they were prepared for a system tool such as GCC or LLVM, that the tool did not change during system updates), and reuse them! Furthermore, any other workflow will also reuse these virtual environments. If more than one virtual environment is found for a given dependency, CK will ask the user to select one!

Thus, whenever a user runs a portable CK workflow from another user, CK will automatically adapt this workflow to the user's environment while automatically rebuilding missing dependencies!

Users can also use the above virtual environments for different versions of different tools from the command line, similar to Python virtualenv, as follows:

$ ck virtual env --tags=lang-c
$ ck virtual env:{UID from "ck show env"}
$ ck virtual env {UID1 from "ck show env"} {UID2 from "ck show env"} {UID3 from "ck show env"}

This is very convenient, for example, to test different versions of a given compiler.

You can test it from command line as follows:

$ ck virtual env --tags=lang-c --shell_cmd="gcc --version"
$ ck virtual env --tags=lang-c --shell_cmd="clang --version"
$ ck virtual env --tags=lang-c --shell_cmd="icc --version"
$ ck virtual env --tags=lang-c --shell_cmd="cl /help"

You can also install a given package explicitly using this list. For example, you can install a binary LLVM distribution for your host OS, including Windows, as follows:

$ ck install package:compiler-llvm-6.0.0-universal

or you can rebuild it on your host (Linux, Windows, MacOS) using cmake as follows:

$ ck install package:compiler-llvm-trunk

or you can rebuild it using Spack as follows (note that this works only on Linux, and Spack will download and install many sub-packages, which can take a very long time):

$ ck install package:compiler-llvm-5.0.1-spack-linux

You can then use all of the above installations with CK program workflows.

Note that CK installs packages into the $HOME/CK-TOOLS directory on Linux or %USERPROFILE%\CK-TOOLS on Windows. You can remove the associated virtual environments without deleting the installations as follows:

$ ck rm env:{UID}
$ ck rm env:* --tags=lang-c

You can remove a virtual environment together with its installation using the following commands:

$ ck clean env:{UID}
$ ck clean env:* --tags=lang-c

We also have a new mode that lets you install packages inside virtual environment entries. You can set it up as follows:

 $ ck setup kernel --var.install_to_env=yes

In this case, you can delete a virtual environment together with its installation simply as follows:

$ ck rm env:{UID}

If you want, you can turn this mode off later as follows:

 $ ck setup kernel --var.install_to_env=no

Note that when a package installation has started but not yet completed, CK adds a 'tmp' tag to the newly created virtual "env" entry. This means that if the package installation failed, you will still have a temporary env entry, which can easily be deleted as follows (unless you want to debug it):

 $ ck rm env:* --tags=tmp -f

Another important CK feature integrated with portable workflows is the collection and sharing of diverse platform properties in a unified way. This is done using a collection of "platform*" modules:

 $ ck ls module:platform* | sort

For example, you can detect all platform properties as follows:

 $ ck detect platform

You can also share them with the community for collaborative benchmarking and experimentation as follows:

 $ ck detect platform --exchange

You can see already collected information about different platforms at the live CK repository or in reusable CK format in this repository.

You can now look at more complex software dependencies for image classification using the TensorFlow, Caffe and MXNet CK packages and software detection plugins in the CK workflows from the 1st ACM ReQuEST tournament on co-designing a Pareto-efficient SW/HW stack for deep learning.

You can also have a look at how such CK-powered portable workflows were used in the CGO'17 distinguished artifact, which can run on Linux and Android while automatically downloading, installing and rebuilding all software dependencies (including the LLVM compiler).

If you would like to know more, please check how to add new workflows, software plugins and packages, the Getting Started Guides, CK publications, the latest ACM ReQuEST report and our ResCuE-HPC workshop at SuperComputing'18!

Questions and comments

If you have questions, comments or problems, don't hesitate to get in touch with the CK community via mailing lists, emails or the dedicated Slack channel. We always help our partners and end-users to use CK and/or add new software detection plugins, packages and portable workflows!
