Demo ARM TechCon'16
This demo accompanies the joint talk "Know Your Workloads: design more efficient systems!" by Ed Plowman (Director of Performance Analysis Strategy, SSG ARM) and Grigori Fursin (CTO of dividiti and CSO of the non-profit cTuning foundation) at ARM TechCon'16 with the following message to the academic and industrial community:
- Increase bio-diversity in industry & academia!
- In other words, use real content and apps, not benchmarks!
- As we saw earlier, the mapping between benchmarks and real apps is unreliable.
- If you are tempted to write a benchmark, DON'T!
- The world needs another benchmark like a hole in the head!
- Instead, spend that time contributing to useful projects like:
Workload Automation (https://github.com/ARM-software/workload-automation)
- Provides a way to set up, document and automate workloads for performance experiments
- Allows for largely automated execution of applications (and is constantly improving)
- ARM will be basing most of its future performance analysis around this initiative
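For illustration, WA experiments are typically described in an "agenda" file. The fragment below is only a hedged sketch of what such a file might look like; the workload names and option keys here are assumptions, so consult the Workload Automation documentation for the exact schema:

```yaml
# Illustrative WA-style agenda (field names and workload names are
# assumptions; check the Workload Automation docs for the real schema).
config:
  iterations: 3          # repeat each workload to get stable statistics
workloads:
  - name: dhrystone      # assumed workload name
  - name: geekbench      # assumed workload name
```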
Collective Knowledge (http://cKnowledge.org ; https://github.com/ctuning/ck-wa)
- Provides a way to set up, document and automate workflows for performance experiments
- Crowdsources benchmarking and tuning; applies Data Analytics; easily extensible by the community
Designing and optimizing computer systems for performance, energy consumption, accuracy, reliability and other metrics has become extremely complex and costly due to an ever-growing number of design and optimization choices, constantly changing software and hardware, the lack of a common performance analysis and optimization methodology, and the lack of common ways to reuse and grow optimization knowledge. Failing to optimize properly, however, leads to overly expensive, under-performing and energy-hungry systems. Very soon there won't be any other way to develop highly efficient systems than by leveraging community effort.
Together with the community, we are developing Workload Knowledge - an open framework for gathering, sharing, reproducing and reusing knowledge about system design and optimization using real-world workloads. Powered by three open-source projects (ARM's Workload Automation, cTuning's Collective Knowledge and Jupyter Notebooks), Workload Knowledge will dramatically accelerate innovation in computer engineering and lead to the design of highly efficient systems.
ARM's Workload Automation automates all kinds of workload execution and measurement collection on diverse ARM devices.
The Collective Knowledge (CK) framework and repository enable collaborative and reproducible empirical experimentation via unified artifact sharing (as Python components with a simple JSON API and JSON meta-descriptions), portable experimental workflows, experiment crowdsourcing, statistical analysis of empirical results, extensible multi-objective autotuning, predictive analytics and interactive reports. CK serves as the front-end for ARM WA.
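To make the dict-in/dict-out convention behind CK's JSON API concrete, here is a minimal stand-alone sketch (not CK's actual code; a real CK module is dispatched via the CK kernel): every action takes a JSON-compatible dictionary and returns one whose `return` key signals success (0) or an error.

```python
import json

def run_action(i):
    """Sketch of a CK-style action: JSON-compatible dict in, dict out.

    This stand-alone function only illustrates the calling convention;
    it does not use the CK framework itself.
    """
    if 'workload' not in i:
        # CK-style error reporting: non-zero 'return' plus an 'error' string.
        return {'return': 1, 'error': 'workload name is required'}
    # Pretend to "run" the workload and record a measurement.
    result = {'workload': i['workload'],
              'execution_time_ms': i.get('simulated_time_ms', 100)}
    return {'return': 0, 'result': result}

# The whole exchange is plain JSON, so requests and results can be
# shared, archived and reproduced by others.
request = {'action': 'run', 'workload': 'dhrystone', 'simulated_time_ms': 42}
response = run_action(request)
print(json.dumps(response))
```

Because both sides of the exchange are plain dictionaries, components can be composed into portable workflows without fixed function signatures.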
Jupyter Notebooks help analyze and disseminate results in a convenient, interactive way.
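As a small sketch of the kind of analysis one might run in such a notebook (the measurements below are made up for illustration), summarizing repeated execution times with both a mean and a spread:

```python
import statistics

# Hypothetical repeated execution times (ms) for one workload on one device.
times_ms = [101.2, 99.8, 100.5, 102.1, 100.0]

mean = statistics.mean(times_ms)
stdev = statistics.stdev(times_ms)

# Reporting variation alongside the mean matters: a single number hides
# the run-to-run noise that is common on real devices.
print(f"mean = {mean:.2f} ms, stdev = {stdev:.2f} ms")
```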
We hope that, in the long term, the combination of these tools will dramatically simplify workload benchmarking, tuning and adaptation across diverse hardware and environments, and involve the community in collaboratively optimizing workloads and sharing optimization knowledge via public or private repositories (DATE'16, CPC'15, JSP'14). Furthermore, this practical approach has helped us connect industry and academia, boost innovation, and considerably reduce time to market for new efficient systems.
You can try our "one-button" approach and participate in crowd-benchmarking and crowd-tuning of shared workloads by following these guidelines.
You can also check our crowd-benchmarking of the popular Caffe deep learning framework here. Note that the related Android application will be publicly released at the end of October.
You can see current optimization results from such crowdsourced benchmarking and optimization scenarios in the live CK repository.
Note that this is an ongoing and quickly evolving project in a beta state, so some glitches are possible! However, we decided to open it up now to let the community participate in all further developments and discussions via this mailing list. You can also report issues on GitHub.
Finally, you can participate in crowd-benchmarking and crowd-tuning using commodity mobile phones via two Android apps:
- http://cknowledge.org/repo - just choose the "crowd-benchmark shared workloads via ARM WA framework" scenario, or the Caffe crowd-benchmarking one!
CK allows universal, extensible, collaborative, multi-dimensional and multi-objective autotuning.
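To illustrate what multi-objective autotuning means in practice, here is a small self-contained sketch (with a made-up search space and cost model, not CK's implementation): it samples random configurations and keeps only the Pareto-optimal ones across two objectives, execution time and energy.

```python
import random

def evaluate(cfg):
    """Made-up cost model: returns (time, energy) for a configuration.

    In a real autotuner these numbers would come from running the
    workload on a device; here they are synthetic for illustration.
    """
    time = 100.0 / (1 + cfg['unroll']) + 5.0 * cfg['threads'] ** 0.5
    energy = 10.0 * cfg['threads'] + 2.0 * cfg['unroll']
    return (time, energy)

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

random.seed(0)  # reproducible sampling
samples = [{'unroll': random.randint(0, 8), 'threads': random.randint(1, 8)}
           for _ in range(50)]
results = [evaluate(c) for c in samples]
front = pareto_front(results)
```

Multi-objective search returns a Pareto front rather than a single "best" point, leaving the time/energy trade-off to the system designer.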
We are developing a common methodology for artifact sharing and evaluation at leading conferences and workshops. We hope this will help connect industry and academia, with researchers using real workloads rather than quickly outdated and possibly unrepresentative benchmarks!
You can download all the above references in BibTeX format here.
You are welcome to get in touch with the CK community if you have questions or comments!