The details of results in the paper #17
Thanks for your interest in NATS-Bench. (1) Yes, for the full training, hp=200 for tss and hp=90 for sss. For the low-fidelity approximation, both tss and sss use hp=12. (2) All accuracies are reported as top-1. (3) Yes, but would you mind clarifying what you want? There are many metrics, and which one to use depends on your needs.
FYI, here is a usage example showing how to obtain the test accuracy (the metric reported in the tables of our paper).
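The original snippet did not survive the page extraction, so here is a minimal sketch of what that lookup might look like. The architecture index (`1234`) is a placeholder, and the real calls follow the public `nats_bench` API (`create`, `get_more_info`) but need `pip install nats_bench` plus the downloaded benchmark file, so they are shown commented out:

```python
# Sketch (not the authors' exact snippet): reading the paper-reported
# test accuracy out of a `get_more_info`-style result dictionary.

def paper_test_accuracy(info: dict) -> float:
    """Return the final test accuracy, the metric reported in the paper's tables."""
    return info["test-accuracy"]

# Illustrative stand-in for a `get_more_info` result (values are made up):
info = {"test-accuracy": 93.5, "valid-accuracy": 91.2, "train-loss": 0.01}
print(paper_test_accuracy(info))  # 93.5

# Assumed real usage (requires nats_bench and its data files locally):
# from nats_bench import create
# api = create(None, "tss", fast_mode=True, verbose=False)  # topology search space
# info = api.get_more_info(1234, "cifar10", hp="200", is_random=False)  # full training
# print(info["test-accuracy"])
```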
Thanks for everything! I am doing research on NAS algorithms and want to use NATS-Bench as the dataset, so I need to be sure I read the results correctly in order to compare directly against the benchmark algorithms in the paper. Is the validation accuracy of a new architecture, as reported in the paper, xinfo["valid-accuracy"]?
@Littleyezi For the validation accuracy, it is a little bit different.
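The snippet that accompanied this comment was lost in extraction. As a hedged sketch (an assumption following the public `nats_bench` API, not the exact code from the thread), validation accuracy is usually queried against the split where it is defined — on CIFAR-10 that is the "cifar10-valid" dataset name — and during search one typically uses the low-fidelity hp="12" setting:

```python
# Sketch of how the validation accuracy is commonly queried (assumed usage,
# not the exact snippet from this thread).

def search_valid_accuracy(info: dict) -> float:
    """Validation accuracy used to rank candidate architectures during search."""
    return info["valid-accuracy"]

# Illustrative stand-in for a `get_more_info` result (made-up values):
info = {"valid-accuracy": 89.7, "test-accuracy": 92.8}
print(search_valid_accuracy(info))  # 89.7

# Assumed real usage (requires nats_bench and its data files):
# from nats_bench import create
# api = create(None, "tss", fast_mode=True, verbose=False)
# # "cifar10-valid" selects the split on which validation accuracy is defined;
# # hp="12" is the low-fidelity setting typically used while searching.
# info = api.get_more_info(1234, "cifar10-valid", hp="12", is_random=False)
# print(info["valid-accuracy"])
```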
Thanks!
You are welcome!
Hi, I want to use NATS-Bench, but I still have some questions about the results of the benchmark algorithms given in the paper.
(1) hp=200 in tss and hp=90 in sss, right?
(2) Are the results top-1, top-5, or something else?
(3) Sorry to trouble you, but could you write a usage example showing how to get the correct metrics when I search for a new architecture?
Thanks!