
Generic-AOS-framework

An Adaptive Operator Selection (AOS) method consists of multiple components, such as reward, quality, probability and selection. The AOS methods from the literature are generalised by identifying these components. Each component is offered with a number of alternative choices, each represented by a formula. irace is set up to select the best combination of component choices and to tune their hyperparameters. Refer to https://arxiv.org/pdf/2005.05613.pdf for full details on the design of the framework.
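As a rough illustration of this design, the four component types compose into one adaptation loop: each operator application yields a reward, which updates a quality estimate, which is mapped to a selection probability, which drives the choice of the next operator. The sketch below is a minimal, hypothetical composition in Python; the class and method names are illustrative assumptions, not this repository's actual API.

```python
# Hypothetical sketch of how the four AOS component types compose.
# Names (AOS, pick, compute, update, values) are illustrative, not the repo's API.
class AOS:
    def __init__(self, reward, quality, probability, selection):
        self.reward = reward            # e.g. Success_Rate, Best2gen, ...
        self.quality = quality          # e.g. Weighted_sum, Upper_confidence_bound, ...
        self.probability = probability  # e.g. Probability_Matching, Adaptive_Pursuit, ...
        self.selection = selection      # e.g. Proportional, Greedy, ...

    def next_operator(self):
        # Pick an operator according to the current selection probabilities.
        return self.selection.pick(self.probability.values())

    def feedback(self, op, stats):
        # Reward -> quality -> probability, in that order.
        r = self.reward.compute(op, stats)
        q = self.quality.update(op, r)
        self.probability.update(op, q)
```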

Installation

cocoex, from https://numbbo.github.io/coco-doc/:

```
git clone https://github.com/numbbo/coco.git          # get coco using git
cd coco
python3 do.py run-python install-user                 # install Python experimental module cocoex
python3 do.py install-postprocessing install-user     # install post-processing
```

pygmo: pip3 install --user pygmo

Check the irace installation with irace --check
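Once cocoex is installed, a quick smoke test helps confirm the setup before launching irace. The following is a minimal sketch using the public cocoex API with a placeholder random search; the suite options shown are just one small configuration chosen for speed.

```python
# Smoke test for the cocoex installation (placeholder random search).
import numpy as np
import cocoex

suite = cocoex.Suite("bbob", "", "dimensions: 2")  # small suite for a quick check
for problem in suite:
    for _ in range(10):
        # Sample uniformly within the problem's box constraints and evaluate.
        x = np.random.uniform(problem.lower_bounds, problem.upper_bounds)
        problem(x)
    print(problem.id, "best f seen:", problem.best_observed_fvalue1)
```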

Description

default_parameter_setting This folder contains default parameter settings of AOS methods from the literature.

DE_ Four files with parameter ranges for four DE variants

Random.txt DE with random parameter selection

Target runners:

  • target-runner-target-vs-fe.py is the same as target-runner-hv: it calculates the area under the curve generated from the trace file (a sketch of this computation follows the list). Here the ECDF graph plots log10(FEvals/dim) against the fraction of targets solved for a problem.
  • target-runner-error-vs-fe.py calculates the area under the curve generated from the trace file. Here the ECDF graph plots log10(FEvals/dim) against the best fitness seen for different targets for a problem.
  • target-runner-best.py receives the best fitness value seen for a problem.
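
The AUC measure used by the first two target runners can be sketched as follows: given a trace of (function evaluations, best fitness so far) pairs and a list of target precisions, the ECDF value at each budget is the fraction of targets already reached, and the score is a trapezoidal integral over log10(FEvals/dim). The trace format assumed below is illustrative, not necessarily the exact file layout these scripts read.

```python
# Hedged sketch of the area-under-ECDF computation (illustrative trace format).
import numpy as np

def ecdf_auc(fevals, best_f, targets, dim):
    """Area under the curve of fraction-of-targets-hit vs log10(FEvals/dim)."""
    x = np.log10(np.asarray(fevals) / dim)
    # best_f is the best fitness seen so far, hence non-increasing.
    y = [np.mean([bf <= t for t in targets]) for bf in best_f]
    return np.trapz(y, x)  # trapezoidal integration over the log-scaled budget
```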

training_set.txt Randomly selected bbob function instances used as the training set.

Parameter mapping from code to the document (thesis)

Reward components and their parameters:

  • Pareto_Dominance: fix_appl -> fix_appl
  • Pareto_Rank: fix_appl -> fix_appl
  • Compass_projection: fix_appl -> fix_appl, theta -> θ
  • Area_Under_The_Curve: window_size -> W, decay -> D
  • Sum_of_Rank: window_size -> W, decay -> D
  • Success_Rate: max_gen -> max_gen, succ_lin_quad -> γ, frac -> Frac, noise -> ε
  • Immediate_Success
  • Success_sum: max_gen -> max_gen
  • Normalised_success_sum_window: window_size -> W, normal_factor -> ω
  • Normalised_success_sum_gen: max_gen -> max_gen
  • Best2gen: scaling_constant -> C, alpha -> α, beta -> β
  • Normalised_best_sum: max_gen -> max_gen, intensity -> ρ, alpha -> α

Quality components and their parameters:

  • Weighted_sum: decay_rate -> δ
  • Upper_confidence_bound: scaling_factor -> c
  • Quality_Identity
  • Weighted_normalised_sum: decay_rate -> δ, q_min -> q_min
  • Bellman_Equation: weight_reward -> c1, weight_old_reward -> c2, discount_rate -> γ

Probability components and their parameters:

  • Probability_Matching: p_min -> p_min, error_prob -> ε
  • Adaptive_Pursuit: p_min -> p_min, p_max -> p_max, learning_rate -> μ
  • Probability_Identity

Selection components and their parameters:

  • Proportional
  • Greedy
  • Epsilon-Greedy: sel_eps -> eps
  • Proportional_Greedy: sel_eps -> eps
  • Linear_Annealed
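
To make the mapping concrete, here is a hedged sketch of two of the listed components following their standard formulations in the AOS literature (not necessarily this repository's exact code): the Weighted_sum quality update with decay_rate (the thesis's δ), and Probability_Matching with p_min.

```python
# Literature-standard sketches of Weighted_sum and Probability_Matching.
def weighted_sum(q, reward, decay_rate):
    # decay_rate maps to the thesis's delta: exponential recency-weighted average.
    return (1 - decay_rate) * q + decay_rate * reward

def probability_matching(qualities, p_min):
    # Each operator keeps at least probability p_min; the remaining mass
    # is distributed proportionally to quality.
    K = len(qualities)
    total = sum(qualities) or 1.0
    return [p_min + (1 - K * p_min) * q / total for q in qualities]
```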

Reference

If you use this repository, please cite the following paper:

@article{sharma2020unified,
  title={Unified Framework for the Adaptive Operator Selection of Discrete Parameters},
  author={Sharma, Mudita and Lopez-Ibanez, Manuel and Kazakov, Dimitar},
  journal={arXiv preprint arXiv:2005.05613},
  year={2020}
}
