# Tabular Benchmarks for Hyperparameter Optimization and Neural Architecture Search

This repository contains the code for the following tabular benchmarks:

* HPOBench: joint hyperparameter and architecture optimization of feed-forward neural networks on regression problems (see [1])
* NASBench101: architecture optimization of a convolutional neural network (see [2])

To download the datasets for the FC-Net benchmark:

```bash
wget http://ml4aad.org/wp-content/uploads/2019/01/fcnet_tabular_benchmarks.tar.gz
tar xf fcnet_tabular_benchmarks.tar.gz
```

The data for NASBench101 can be obtained from the NAS-Bench-101 repository (see [2]).
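
Loading a NAS-Bench-101 benchmark then follows the same pattern as the FC-Net example further below. A minimal sketch, assuming the package is installed as described next, that the `NASCifar10A` wrapper is exported by `tabular_benchmarks`, and that the data file lives in `./nasbench_data/`:

```python
from tabular_benchmarks import NASCifar10A

# Assumption: the NAS-Bench-101 data has been downloaded to ./nasbench_data/
b = NASCifar10A(data_dir="./nasbench_data/")

# Sample and evaluate a random architecture encoding
cs = b.get_configuration_space()
config = cs.sample_configuration()
y, cost = b.objective_function(config)
print(y, cost)
```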

To install the package, run:

```bash
git clone https://github.com/automl/nas_benchmarks.git
cd nas_benchmarks
python setup.py install
```

The following example shows how to load the FC-Net benchmark and evaluate a random hyperparameter configuration:

```python
from tabular_benchmarks import FCNetProteinStructureBenchmark

# Load the tabular benchmark from the downloaded data directory
b = FCNetProteinStructureBenchmark(data_dir="./fcnet_tabular_benchmarks/")

# Sample a random configuration from the benchmark's configuration space
cs = b.get_configuration_space()
config = cs.sample_configuration()

print("Numpy representation: ", config.get_array())
print("Dict representation: ", config.get_dictionary())

# Query the tabulated validation error and training cost after 100 epochs
max_epochs = 100
y, cost = b.objective_function(config, budget=max_epochs)
print(y, cost)
```
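
The `budget` argument is the number of training epochs at which the tabulated learning curve is queried, which makes the benchmark usable by multi-fidelity optimizers. A sketch, assuming intermediate epochs below `max_epochs` are also tabulated:

```python
# Query the same configuration at a lower fidelity (fewer training epochs)
y_low, cost_low = b.objective_function(config, budget=50)
print(y_low, cost_low)
```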

To see how different open-source optimizers from the literature can be run on these benchmarks, have a look at the Python scripts in the `experiment_scripts` folder, which were also used to conduct the experiments in the papers.
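
As an illustration of how such an optimizer interacts with the benchmark, below is a minimal random-search loop. It is a sketch built only from the API calls shown above, not one of the scripts used in the papers:

```python
from tabular_benchmarks import FCNetProteinStructureBenchmark

b = FCNetProteinStructureBenchmark(data_dir="./fcnet_tabular_benchmarks/")
cs = b.get_configuration_space()

# Evaluate 100 random configurations and keep the best one seen so far
incumbent, best_y = None, float("inf")
for _ in range(100):
    config = cs.sample_configuration()
    y, cost = b.objective_function(config, budget=100)
    if y < best_y:
        incumbent, best_y = config, y

print("Best configuration:", incumbent.get_dictionary())
print("Best validation error:", best_y)
```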

## References

[1] A. Klein and F. Hutter. Tabular Benchmarks for Joint Architecture and Hyperparameter Optimization. arXiv:1905.04970 [cs.LG], 2019.

[2] C. Ying, A. Klein, E. Real, E. Christiansen, K. Murphy and F. Hutter. NAS-Bench-101: Towards Reproducible Neural Architecture Search. arXiv:1902.09635 [cs.LG], 2019.