bellamy-runtime-prediction

Prototypical implementation of "Bellamy" for runtime prediction / resource allocation. Please consider reaching out if you have questions or encounter problems.

Technical Details

Important Packages

All necessary packages are installed into a conda environment (see environment.yml).

Install Environment

Install the dependencies with:

conda env [create / update] -f environment.yml

Activate Environment

To activate the environment, execute:

[source / conda] activate bellamyV1

Hyperparameter Optimization

During model pretraining, we use Ray Tune and Optuna to search for optimized hyperparameters. Currently, we draw 12 trials with Optuna from a manageable search space. All details are listed in config.yml and are in most cases passed directly to the respective functions / methods / classes in Ray Tune. For instance, config.yml shows that we use the ASHAScheduler implemented in Ray Tune to terminate bad trials early, pause trials, clone trials, and alter the hyperparameters of running trials. We also use Optuna's default sampler, the TPESampler, to provide trial suggestions.
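As an illustration, the tuning-related part of such a configuration might look like the following sketch. The key names, max_t, and grace_period values here are assumptions for illustration; the authoritative settings are the ones in the repository's config.yml.

```yaml
# Hypothetical sketch of the tuning section of a config.yml --
# the actual key names and values are defined in the repository.
tune:
  num_samples: 12            # number of Optuna trials, as stated above
  scheduler:
    type: ASHAScheduler      # terminates bad trials early
    max_t: 100               # hypothetical maximum training iterations
    grace_period: 10         # hypothetical minimum iterations per trial
  search_alg:
    type: OptunaSearch
    sampler: TPESampler      # Optuna's default sampler
```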

Example: Pretraining of Models

First, we need to make sure that a pretrained model is available. In a production environment, this could be ensured by a periodically executed job. For instance, run the following command to pretrain a model with the configuration specified in config.yml:

python examples/example-pretraining.py -tt c3o -a grep

This will train a model on the C3O dataset for the grep algorithm. The best checkpoint and the fitted pipeline are eventually persisted.

Example: Resource Allocation through Runtime Prediction

After obtaining a pretrained model, we can proceed to use it for predictions. For instance, running

python examples/example-prediction.py -tt c3o -a grep -r "examples/example-request.yml"

will use an exemplary request containing a minimum scale-out, a maximum scale-out, and the necessary information for the essential context properties. The pretrained model is loaded, ideally fine-tuned on data found for this specific configuration, and eventually used to predict runtimes. Finally, we report the scale-out with the smallest predicted runtime, which can then be used to allocate resources accordingly.
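The selection step described above can be sketched in plain Python. Here predict_runtime is a hypothetical stand-in for the fine-tuned model, not the project's actual prediction code:

```python
def predict_runtime(scale_out: int) -> float:
    """Hypothetical stand-in for the model: runtime shrinks with more
    nodes but flattens out due to per-node overhead (illustrative only)."""
    return 100.0 / scale_out + 2.0 * scale_out


def best_scale_out(min_scale_out: int, max_scale_out: int) -> int:
    """Evaluate every candidate scale-out in the requested range and
    return the one with the smallest predicted runtime."""
    candidates = range(min_scale_out, max_scale_out + 1)
    return min(candidates, key=predict_runtime)


print(best_scale_out(1, 16))  # → 7 for this illustrative runtime model
```

In practice, the predicted runtimes come from the loaded (and possibly fine-tuned) model rather than a closed-form function, but the argmin over the requested scale-out range is the same.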

Of course, the code can be extended to incorporate other criteria.
