Demonstration of compiler autotuning, crowd-tuning and machine learning on RPi3 via the customizable Collective Knowledge workflow framework with a portable package manager. This technology supports Pareto-efficient software/hardware co-design tournaments for deep learning in terms of speed, accuracy, energy and costs.


Introduction

Optimization results demonstrating compiler autotuning, crowd-tuning and machine learning on RPi3 via the customizable Collective Knowledge workflow framework with a portable package manager.

License

  • CC BY 4.0

Copyright

  • 2015-2018 © cTuning foundation and volunteers

Prerequisites

  • Collective Knowledge framework (@GitHub)
  • Python 2.7 or 3.3+
  • Python PIP
  • Git client

Minimal CK installation

The minimal installation requires:

  • Python 2.7 or 3.3+ (the limitation is mainly due to unit tests)
  • Git command line client

You can install CK in your local user space as follows:

$ git clone http://github.com/ctuning/ck
$ export PATH=$PWD/ck/bin:$PATH
$ export PYTHONPATH=$PWD/ck:$PYTHONPATH

You can also install CK via PIP with sudo to avoid setting up environment variables yourself:

$ sudo pip install ck
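Whichever method you use, you can then check that the installation worked. The snippet below is a quick sanity check of our own (not part of the original instructions); it only assumes that a working install puts the `ck` command on the PATH:

```shell
# Verify the CK installation: the 'ck' command should resolve on PATH,
# and the framework should answer basic queries such as its version.
if command -v ck >/dev/null 2>&1; then
    ck_status="installed"
    ck version
else
    ck_status="missing"
    echo "CK not found on PATH; re-check the PATH/PYTHONPATH exports above."
fi
echo "CK status: $ck_status"
```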

CK repository installation

Install this CK repository:

 $ ck pull repo --url=https://github.com/dividiti/ck-rpi-optimization

You can update all CK repositories at any time:

 $ ck pull all
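After pulling, you can list the CK repositories registered on your machine to confirm that ck-rpi-optimization is among them. This is a quick sketch using the standard `ck ls repo` command; it degrades gracefully when CK is not installed:

```shell
# List every CK repository registered on this machine; the freshly pulled
# ck-rpi-optimization repository should appear in the output.
if command -v ck >/dev/null 2>&1; then
    ck ls repo
    list_status="listed"
else
    echo "CK not installed; see the minimal installation section above."
    list_status="skipped"
fi
```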

Check out the report and see the related scripts in the following entries:

 $ ck ls script:rpi3-*

For example, you can see the individual scripts we used to prepare, run and reproduce autotuning experiments via CK for the susan corners benchmark in the following entry:

 $ cd `ck find script:rpi3-susan-autotune`
 $ ls
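As a rough sketch of a trial run before autotuning, you could compile and run the benchmark once through CK. This assumes the ctuning-programs repository is installed and provides the entry `program:cbench-automotive-susan` (an assumption on our part, not something guaranteed by this repository):

```shell
# Sketch: compile and run the susan benchmark once via CK to validate the
# setup before launching a long autotuning session.
if command -v ck >/dev/null 2>&1; then
    run_status="attempted"
    ck compile program:cbench-automotive-susan --speed
    ck run program:cbench-automotive-susan
else
    run_status="skipped"
    echo "CK not installed; install it and pull ctuning-programs first."
fi
echo "susan trial run: $run_status"
```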

Two CK repositories with additional experimental results in a reproducible form are available at FigShare.

You can download and install them directly via CK as follows (note that each zip archive is around 150 MB, and around 1-1.5 GB unzipped):
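Since the unpacked repositories are large, it may help to check the free space under $HOME (where CK keeps repositories by default) before downloading. This is a convenience check of our own, not part of the original instructions:

```shell
# Report free space under $HOME in kilobytes (POSIX 'df -P' guarantees one
# line of output per filesystem; field 4 is the available space).
avail_kb=$(df -Pk "$HOME" | awk 'NR==2 {print $4}')
echo "Available under \$HOME: ${avail_kb} KB"
if [ "${avail_kb:-0}" -lt 3000000 ]; then
    echo "Warning: less than ~3 GB free; the unpacked repositories may not fit."
fi
```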

 $ ck add repo:ck-rpi-optimization-results-reactions --zip=https://ndownloader.figshare.com/files/10218435 --quiet
 $ ck add repo:ck-rpi-optimization-results-reactions-multiple-datasets --zip=https://ndownloader.figshare.com/files/10218441 --quiet
 $ ck ls experiment:rpi3-*

We continue to gradually document all scripts in the above entries together with the community - your help is appreciated. Feel free to get in touch via the CK mailing list.

Next steps:

  • We plan to use the reproducible optimization methodology prototyped here to support Pareto-efficient co-design competitions for the whole software and hardware stack for emerging workloads such as deep learning, in terms of speed, accuracy, energy and costs: http://cKnowledge.org/request

Notes

We could not build GCC 7.1.0 for RPi3 via CK with Graphite support (outdated libraries and missing dependencies). This may reduce optimization opportunities during autotuning:

gcc -c    -I../ -DCK_HOST_OS_NAME2_LINUX=1 -DCK_HOST_OS_NAME_LINUX=1 -DCK_TARGET_OS_NAME2_LINUX=1 -DCK_TARGET_OS_NAME_LINUX=1 -DXOPENME -I/home/fursin/CK-TOOLS/lib-rtl-xopenme-0.3-gcc-4.9.2-linux-32/include -O3 -fcaller-saves -fcse-follow-jumps -fgcse-lm -fno-gcse-sm -fira-share-save-slots -fno-ira-share-spill-slots -floop-interchange -flto -fmodulo-sched-allow-regmoves -fpeephole -fsched-spec -freciprocal-math -fno-sched-spec-load-dangerous -fselective-scheduling2 -fsel-sched-pipelining-outer-loops -fsignaling-nans -fsplit-ivs-in-unroller -ftree-dominator-opts -fno-tree-fre -ftree-loop-distribute-patterns -ftree-ter ../adler32.c  -o adler32.o
../adler32.c:1:0: sorry, unimplemented: Graphite loop optimizations cannot be used
(isl is not available) (-fgraphite, -fgraphite-identity,
-floop-nest-optimize, -floop-parallelize-all)
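If you want to check whether your own GCC build suffers from the same limitation, one option is to compile an empty program with a Graphite flag and see whether the compiler rejects it. This is a hypothetical standalone probe, not part of CK:

```shell
# Probe Graphite support: GCC only accepts -fgraphite-identity when it was
# built against isl, so a successful compile implies Graphite is available.
tmpdir=$(mktemp -d)
cat > "$tmpdir/probe.c" <<'EOF'
int main(void) { return 0; }
EOF
if gcc -O2 -fgraphite-identity "$tmpdir/probe.c" -o "$tmpdir/probe" 2>"$tmpdir/probe.log"; then
    graphite="supported"
else
    graphite="unsupported"
fi
echo "Graphite loop optimizations: $graphite"
```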
