This page requires an update.
Here we list possible CK extension ideas for student internships, Google Summer of Code, collaborative grants, industrial projects, etc. You are welcome to add your own CK-powered projects or ideas here!
- Improve the framework for collaborative and reproducible co-design of efficient SW/HW stacks for emerging workloads such as deep learning: http://cKnowledge.org/request
- Build a public repository of realistic workloads, benchmarks, data sets, tools, predictive models and optimization knowledge in a unified format, with the help of the computer engineering community (https://github.com/ctuning/ck-wa, https://github.com/dividiti/ck-caffe, http://github.com/ctuning/ctuning-programs, http://github.com/ctuning/ctuning-datasets-min, https://drive.google.com/folderview?id=0B-wXENVfIO82dzYwaUNIVElxaGc&usp=sharing, http://github.com/ctuning/reproduce-carp-project, http://c-mind.org/repo, http://cknowledge.org/repo)
- Improve universal multi-dimensional, multi-objective, plugin-based autotuning combined with crowdsourcing, predictive analytics and run-time adaptation (to enable self-optimizing computer systems). We support OpenCL, CUDA, OpenMP, MPI, compiler and other tuning for performance, energy, accuracy, size, reliability, cost and any other metric, across both small kernels/codelets and large applications (http://cKnowledge.org/rpi-crowd-tuning, http://github.com/ctuning/ck-autotuning, http://github.com/ctuning/ck-analytics, http://github.com/ctuning/ck-env, http://cknowledge.org/repo/web.php?wcid=29db2248aba45e59:cd11e3a188574d80).
- Implement crowd-benchmarking and crowd-tuning: crowdsourcing workload characterization, optimization and compiler heuristic construction across diverse architectures using shared computational resources such as mobile phones, tablets and cloud services (http://cKnowledge.org/rpi-crowd-tuning, http://cKnowledge.org/request, http://cknowledge.org/repo)
- Crowdsource compiler/hardware bug detection and fuzzing (OpenCL, OpenGL, etc.) (http://cKnowledge.org/rpi-crowd-tuning, CK-powered CLSmith example)
- Support artifact evaluation initiatives for major conferences and journals where all artifacts are shared as reusable components (and not just as black box virtual machine images) along with publications (http://cTuning.org/ae , http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=15452 , http://arxiv.org/abs/1406.4020 , http://adapt-workshop.org)
- Enable open, collaborative and reproducible research and experimentation with interactive publications focusing on computer engineering ( http://cTuning.org/reproducibility-wiki )
- Enable interactive and reproducible articles for Digital Libraries ( http://cknowledge.org/interactive-report , http://cknowledge.org/repo/web.php?wcid=29db2248aba45e59:cd11e3a188574d80 , http://cknowledge.org/repo/web.php?wcid=29db2248aba45e59:6f40bc99c4f7df58 )
- Use CK as a personal knowledge manager to organize, interconnect and preserve all personal code and data via simple JSON meta descriptions with UIDs and semantic tags (see CK-powered CV).
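The last idea above, using JSON meta descriptions with UIDs and semantic tags, can be illustrated with a minimal sketch. The field names below are hypothetical and only mimic the general shape of such a record; they are not the actual CK schema:

```python
import json
import uuid

# Hypothetical sketch of a CK-style JSON meta record: a short unique ID
# plus semantic tags describing a stored artifact. Field names are
# illustrative assumptions, not the real CK format.
meta = {
    "data_uid": uuid.uuid4().hex[:16],     # short hex UID for the entry
    "tags": ["dataset", "image", "jpeg"],  # semantic tags used for search
    "description": "Sample image data set",
}

print(json.dumps(meta, indent=2, sort_keys=True))
```

Storing such records alongside the artifacts themselves is what makes them searchable and interconnectable by tag and UID.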
We list many extension ideas as issues in various CK GitHub repositories:
- Universal multi-objective autotuning (program optimization): https://github.com/ctuning/ck-autotuning/issues
- Experiments, graphs, statistical analysis and predictive analytics: https://github.com/ctuning/ck-analytics/issues/
- Packages and environment: https://github.com/ctuning/ck-env/issues
- Interactive/reproducible articles and graphs: https://github.com/ctuning/ck-dissemination-modules/issues/
- Crowdsourcing program optimization and benchmarking: https://github.com/ctuning/ck-crowdtuning/issues
- CK core: https://github.com/ctuning/ck/issues
- Improve/implement OpenCL/CUDA/GCC/LLVM crowd-tuning (see the CK crowdtuning repo and crowd-tuning results): add finer-grain tuning; automatic kernel extraction from realistic workloads such as DNNs; online clustering of optimizations and optimization prediction using collaborative machine learning (finding suitable software/hardware/data set features - see the CPC'15 paper).
- Add wrappers around GCC/LLVM to predict optimizations on the fly similar to our original MILEPOST GCC (see IJPP'11).
- Add various machine-learning-based autotuning exploration scenarios (covering optimization/data set/hardware choices) - see our publications.
- Improve CK-based Caffe crowd-tuning.
- Add automatic statistical compiler bug detection (see CK CLSmith repo).
- Add automatic program behavior modeling (performance/energy/scalability) (see paper, GitHub issue #6).
- Add LLVM LNT and GCC benchmarks to CK (see GitHub issue #1).
- Add support for automatic pass selection and reordering in LLVM (see GitHub issue #5).
- Add OpenCL benchmarks to collaborative optimization using mobile phones (see GitHub issue #2).
- Add numerical stability tests crowdsourced across many different machines (see the CK-based GEMM bench).
- Improve Pareto frontier detection with a fixed minimal set of equally distributed points (see GitHub issue #4).
- Improve statistical analysis of experimental results and speedups (improvements) in math.variation module (see GitHub issue #1).
- Add an interactive compilation interface to LLVM to be able to tune/predict fine-grain optimization decisions (see IJPP'11, GitHub issue #6).
- Add function cloning to LLVM (see paper).
- Add run-time adaptation with function cloning based on collected optimization statistics and decision trees (see HiPEAC'05 and CPC'15).
- Enable reproducibility of experimental workflows for Artifact Evaluation by pulling all dependent Git repos with a revision number (see GitHub issue #11).
- Add support for MediaWiki/Drupal for interactive articles (see GitHub issue #1)
- /done/ Re-design package manager and add automatic detection of installed tools and libs with their versions (see GitHub issue #1)
- Add various implementations of popular algorithms (using OpenCL, CUDA, OpenMP, MPI, etc.) together with realistic data sets in the CK format, while exposing tuning parameters and measured characteristics. We plan to add and crowd-tune various implementations of deep neural network algorithms, vision applications (such as SLAM and HOG), BLAS, etc. See the following shared algorithms in the CK format: CK-based GEMM, SLAM, HOG, misc kernels, misc data sets.
- Add DOI support to index all research artifacts shared in CK format (see GitHub issue #47).
- /mostly done/ Add/improve CK internal testing (see GitHub issue #4).
- /mostly done/ Add CK to Debian (see GitHub issue #20).
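One of the ideas above, improving Pareto frontier detection with a fixed minimal set of equally distributed points, can be sketched as follows. This is a minimal illustration under the assumption of two minimized objectives (e.g. execution time and energy), not CK's actual implementation:

```python
# Hedged sketch: filter a set of (time, energy) measurements down to the
# Pareto frontier, then keep only a fixed number of roughly equally
# spaced frontier points. Function names are illustrative, not CK API.

def pareto_frontier(points):
    """Return the points not dominated by any other (both axes minimized)."""
    frontier = []
    for x, y in sorted(points):  # sort by x ascending (ties by y)
        # After sorting by x, a point is on the frontier iff its y is
        # strictly lower than the y of the last frontier point kept.
        if not frontier or y < frontier[-1][1]:
            frontier.append((x, y))
    return frontier

def downsample(frontier, k):
    """Keep at most k frontier points, roughly equally spaced by index."""
    if len(frontier) <= k:
        return frontier
    step = (len(frontier) - 1) / (k - 1)
    return [frontier[round(i * step)] for i in range(k)]
```

A refinement in the spirit of the issue would space the retained points equally by distance along the frontier curve rather than by index, so sparse regions are not over-represented.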
You are welcome to get in touch with the CK community if you have questions or comments!