Droplet Search is a technique for optimizing machine learning kernels, based on the coordinate descent algorithm. It has been part of Apache TVM since version 0.13.0 (for more details, see this PR); to learn more about it, you can take a look at this paper. This repository serves as the artifact for the paper.
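To give an intuition for the underlying algorithm, the sketch below shows plain coordinate descent over a discrete configuration space. This is only an illustration of the idea, not the Apache TVM implementation; the `cost` function and the search axes are hypothetical stand-ins for measured kernel execution times.

```python
# Illustrative coordinate descent over a discrete search space,
# the idea behind Droplet Search. NOT the Apache TVM implementation:
# `cost` and `axes` are toy stand-ins for kernel timings and knobs.

def coordinate_descent(cost, start, axes):
    """Greedily improve one coordinate at a time until no axis helps."""
    best = list(start)
    best_cost = cost(best)
    improved = True
    while improved:
        improved = False
        for i, values in enumerate(axes):
            for v in values:
                candidate = best[:i] + [v] + best[i + 1:]
                c = cost(candidate)
                if c < best_cost:
                    best, best_cost = candidate, c
                    improved = True
    return best, best_cost

# Toy example: each point is a "kernel configuration" and its cost
# is a simulated execution time with a single optimum.
axes = [[1, 2, 4, 8], [16, 32, 64], [0, 1]]
cost = lambda p: (p[0] - 4) ** 2 + (p[1] - 32) ** 2 + p[2]
print(coordinate_descent(cost, [1, 16, 1], axes))  # -> ([4, 32, 0], 0)
```

In the real tuner, evaluating `cost` means compiling and timing a kernel, so the search deliberately visits few configurations.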
This section describes the steps to reproduce our experiments.
You need to install the following packages to run this project:
- Docker and Docker Compose, to run our experiments
- Python 3, to plot the results in the project's Jupyter Notebook
For GPU support inside Docker, please follow these instructions: Nvidia Container
We provide a Dockerfile with all the experiments and requirements installed. We recommend using this setup if you want to compare against our solution. Below we show how to build the Docker image for each supported architecture:
bash scripts/build_docker.sh <ARCH>
where <ARCH> can be x86, arm, or cuda.
You can run the Docker image with the following command:
bash scripts/run_docker.sh <ARCH>
where <ARCH> can be x86, arm, or cuda.
To execute the neural network models (Figure 11):
bash scripts/cnn_models.sh <ARCH>
To measure the impact of the p-value on Droplet Search (Figure 12):
bash scripts/droplet_pvalue.sh <ARCH>
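To clarify what the p-value controls: the search compares repeated time measurements of a candidate against the current best, and only accepts the candidate when the difference is statistically significant. The sketch below is a hedged illustration of that gating, using a hand-rolled Welch's t-test with a normal approximation; it is not the code used in TVM, and the sample timings are made up.

```python
# Hedged sketch: how a p-value threshold can gate acceptance of a new
# kernel configuration. This is a toy Welch's t-test with a normal
# approximation, not the statistical test as implemented in TVM.
import math
import statistics

def p_value_faster(sample_a, sample_b):
    """Approximate one-sided p-value that mean(a) < mean(b)."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(va / len(sample_a) + vb / len(sample_b))
    if se == 0:
        return 0.0 if ma < mb else 1.0
    t = (ma - mb) / se
    # Normal approximation to the t distribution (fine for a sketch).
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def accept(candidate_times, best_times, p_threshold=0.05):
    """Accept the candidate only if it is significantly faster."""
    return p_value_faster(candidate_times, best_times) < p_threshold

best = [1.00, 1.02, 0.98, 1.01]       # made-up timings (seconds)
candidate = [0.90, 0.91, 0.89, 0.92]  # clearly faster, low noise
print(accept(candidate, best))  # -> True
```

A smaller `p_threshold` demands stronger evidence before moving to a new configuration, which trades tuning time for robustness to measurement noise.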
To execute the microkernels (Appendix), use the following script:
bash scripts/microkernels.sh
The repository is organized as follows:
|-- results: "Directory where your data is saved by default"
|-- docker: "Scripts for building the Docker image"
|-- docs: "Repository documentation"
|-- scripts: "Scripts for building the Docker image and generating some figures"
|-- src: "Source code"
|-- handmade: "Extra experiments using Droplet Search to verify how the search space exploration works"
|-- microkernels: "Python scripts to run the microkernels presented in the paper"
|-- tvm: "Python scripts to run the NN models presented in the paper"
|-- thirdparty: "Third-party code for comparison with our experiments"