Motivated by the fact that different optimizers work well on different problems, our approach switches between different optimizers. Since the team names on the competition's leaderboard were randomly generated, consisting of an adjective and an animal with the same initial letter, we called our approach the Switching Squirrel, or Squirrel for short.
In our Squirrel framework, we switched between the following components:
- An initial design (for known hyperparameter spaces: found by meta-learning; otherwise: selected by differential evolution) (3 batches);
- Optimization using Bayesian optimization by integrating the SMAC optimizer with a portfolio of different triplets of surrogate model, acquisition function, and output space transformation (8 batches); and
- Optimization using differential evolution with parameter adaptation (5 batches).
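The batch budget above (3 + 8 + 5 = 16 batches) implies a simple schedule that maps each batch index to the component responsible for it. The sketch below is only an illustration of that switching logic; the function and phase names are hypothetical and not Squirrel's actual API.

```python
# Hypothetical sketch of Squirrel's batch-wise switching schedule.
# Phase names and this function are illustrative, not the real implementation.
SCHEDULE = [
    ("initial_design", 3),           # meta-learned or DE-selected initial design
    ("bayesian_optimization", 8),    # SMAC with a portfolio of surrogate/acquisition/transform triplets
    ("differential_evolution", 5),   # DE with parameter adaptation
]

def phase_for_batch(batch_idx):
    """Return the component responsible for a given 0-based batch index."""
    upper = 0
    for name, n_batches in SCHEDULE:
        upper += n_batches
        if batch_idx < upper:
            return name
    return SCHEDULE[-1][0]  # past the budget: stay in the last phase

if __name__ == "__main__":
    # 16 batches total: 3 initial design, 8 BO, 5 DE
    print([phase_for_batch(i) for i in range(16)])
```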
Our Squirrel ranked 3rd with a score of 92.551 on the official leaderboard, and also won 1st place on the alternate leaderboard (with a score of 94.845476; the organizers' bootstrap analysis showed 100% confidence in this 1st-place ranking).
We used the Bayesmark benchmark framework for our local experiments with Squirrel; see the Bayesmark documentation for details.
```shell
> python3 -m venv venv  # Please use Python 3.6.10.
> source venv/bin/activate
> pip install -r environment.txt -r squirrel-optimizer/requirements.txt
> ./run_local.sh squirrel-optimizer/ 3
...
--------------------
Final score `100 x (1-loss)` for leaderboard:
optimizer
squirrel-optimizer_0.0.6_6434ac2    102.238945
```
- Noor Awad
- Gresa Shala
- Difan Deng
- Neeratyoy Mallik
- Matthias Feurer
- Katharina Eggensperger
- André Biedenkapp
- Diederick Vermetten
- Hao Wang
- Carola Doerr
- Marius Lindauer
- Frank Hutter
Our implementation is released under the Apache License 2.0.