This repository is the official implementation of MbML-NAS (Model-based Meta-Learning for NAS).
The Meta-NAS-Benchmarks, built from the original NAS-Bench-101 and NAS-Bench-201 benchmarks and used in our experiments, can be downloaded here: https://drive.google.com/drive/folders/1V5dFi3iMHG0CdZW8vwV7hgbd3Zz_OjBN
Different datasets and subsets were built to make the experiments more flexible and to facilitate data exploration.
For NAS-Bench-101:
meta_nasbench101_4epochs: meta-info from 423k architectures trained for 4 epochs on CIFAR-10.
meta_nasbench101_12epochs: meta-info from 423k architectures trained for 12 epochs on CIFAR-10.
meta_nasbench101_36epochs: meta-info from 423k architectures trained for 36 epochs on CIFAR-10.
meta_nasbench101_108epochs: meta-info from 423k architectures trained for 108 epochs on CIFAR-10.
For NAS-Bench-201:
meta_nasbench201_cifar10valid: meta-info from 15k architectures trained for 1, 4, 12, 36, and 200 epochs on CIFAR-10.
meta_nasbench201_cifar100: meta-info from 15k architectures trained for 1, 4, 12, 36, and 200 epochs on CIFAR-100.
meta_nasbench201_imagenet16_120: meta-info from 15k architectures trained for 1, 4, 12, 36, and 200 epochs on ImageNet16-120.
Note: each of these datasets also has subsets restricted to a specific number of epochs, e.g., meta_nasbench201_cifar10valid_4epochs, meta_nasbench201_imagenet16_120_200epochs, etc.
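As an illustrative sketch (not part of the official pipeline), a downloaded meta-dataset can be inspected with pandas. This assumes the files are plain CSV and uses a hypothetical data/ path; the actual file format and location may differ.

```python
# Minimal sketch: inspect one of the downloaded meta-datasets.
# Assumes a CSV file under a hypothetical "data/" directory.
import pandas as pd

meta_df = pd.read_csv("data/meta_nasbench201_cifar10valid_12epochs.csv")

print(meta_df.shape)    # number of architectures x number of columns
print(meta_df.columns)  # available meta-features and targets
print(meta_df.head())
```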
To train the meta-predictors from the paper, run the notebooks nasbench_acc_prediction.ipynb in notebooks/NAS-Bench-101 and nasbench201_acc_prediction.ipynb in notebooks/NAS-Bench-201.
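For a rough idea of what these notebooks do, the sketch below fits a simple regressor that predicts final accuracy from early-training meta-information. The column names ("val_acc_12epochs", "train_acc_12epochs", "n_params", "final_test_acc") and the file name are hypothetical; the notebooks define the actual features, targets, and models used in the paper.

```python
# Minimal sketch of a meta-predictor: a regressor mapping early-epoch
# meta-information to final accuracy. Column and file names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

meta_df = pd.read_csv("data/meta_nasbench201_cifar10valid_12epochs.csv")

X = meta_df[["val_acc_12epochs", "train_acc_12epochs", "n_params"]]  # hypothetical features
y = meta_df["final_test_acc"]                                        # hypothetical target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

predictor = RandomForestRegressor(n_estimators=100, random_state=42)
predictor.fit(X_train, y_train)
print("Held-out R^2:", predictor.score(X_test, y_test))
```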
To inspect the final results generated from training on NAS-Bench-101 and NAS-Bench-201, see nasbench_result_analysis.ipynb in notebooks/NAS-Bench-101 and nasbench201_result_analysis.ipynb in notebooks/NAS-Bench-201.
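As one example of the kind of analysis such notebooks perform, a rank correlation between predicted and ground-truth accuracies can be computed as below. This is only a sketch reusing the hypothetical predictor and held-out split from the previous example, not the notebooks' exact evaluation code.

```python
# Minimal sketch: rank correlation between predicted and true accuracies,
# reusing `predictor`, `X_test`, and `y_test` from the training sketch above.
from scipy.stats import spearmanr

y_pred = predictor.predict(X_test)
rho, p_value = spearmanr(y_test, y_pred)
print(f"Spearman rank correlation: {rho:.3f} (p={p_value:.2e})")
```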
You can download the pre-trained meta-predictors here: https://drive.google.com/drive/folders/15RQJILgKgZ76PRgH1cFL7YUKSLELfK1-
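If the released predictors are serialized with pickle/joblib (an assumption; check the downloaded files), they could be loaded along the following lines. The file name and path below are hypothetical.

```python
# Minimal sketch: load a downloaded pre-trained meta-predictor, assuming it
# was saved with joblib/pickle. The file name and path are hypothetical.
import joblib

predictor = joblib.load("pretrained/meta_predictor_nasbench201_cifar10valid.pkl")
# predictor.predict(...) then expects meta-features shaped like those in the training sketch.
```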