
The details of results in the paper #17

Closed
Littleyezi opened this issue Jun 8, 2021 · 6 comments
@Littleyezi

Hi, I want to use NATS-Bench, but I still have some questions about the results of the benchmark algorithms given in the paper.
(1) hp=200 in tss and hp=90 in sss, right?
(2) Are the reported results top-1, top-5, or something else?
(3) Sorry to trouble you, but could you write a usage example showing how to get the correct metrics after searching for a new architecture?
Thanks!

@D-X-Y
Owner

D-X-Y commented Jun 8, 2021

Thanks for your interest in NATS-Bench.

(1) Yes, for the full training, hp=200 for tss and hp=90 for sss. For the low-fidelity approximation, both tss and sss use hp=12.

(2) All accuracies are reported as top-1.

(3) Yes. Would you mind clarifying what you want? There are many metrics, and which one to choose depends on your needs.

@D-X-Y
Owner

D-X-Y commented Jun 8, 2021

FYI, here is a usage example showing how to obtain the test accuracy (which is what we report in the tables of our paper).

xinfo = api.get_more_info(arch, dataset=dataset, hp=90 if is_size_space else 200, is_random=False)
test_accuracy = xinfo["test-accuracy"]

Here, arch indicates the searched architecture, dataset indicates one of the three datasets used in our paper, and is_size_space is a boolean indicating whether the search space is sss (True) or tss (False).
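For completeness, the lookup above can be wrapped in a small helper. This is just a sketch: the create and get_more_info calls follow the snippet above and the official nats_bench package, but the wrapper name query_test_accuracy is illustrative, not part of the API:

```python
# Assumes the nats_bench package and benchmark file are available, e.g.:
#   pip install nats-bench
#   from nats_bench import create
#   api = create(None, "tss", fast_mode=True, verbose=False)

def query_test_accuracy(api, arch, dataset, is_size_space):
    """Return the final test accuracy of `arch` on `dataset`.

    Full-training hyperparameters: hp=90 for the size search
    space (sss), hp=200 for the topology search space (tss).
    """
    xinfo = api.get_more_info(
        arch,
        dataset=dataset,
        hp=90 if is_size_space else 200,
        is_random=False,  # average over seeds rather than sampling one run
    )
    return xinfo["test-accuracy"]
```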

@D-X-Y D-X-Y self-assigned this Jun 8, 2021
@Littleyezi
Author

Thanks for everything! I am trying to do some research on NAS algorithms and want to use NATS-Bench as the dataset, so I need to make sure I have the correct information about the results in order to compare directly with the benchmark algorithms in the paper. Is the validation accuracy of a new arch, as reported in the paper, xinfo["valid-accuracy"]?

@D-X-Y
Owner

D-X-Y commented Jun 9, 2021

@Littleyezi For the validation accuracy, it is a little bit different:

# for CIFAR-10, the validation split lives under the "cifar10-valid" key
if dataset == "cifar10":
  xinfo = api.get_more_info(arch, dataset="cifar10-valid", hp=90 if is_size_space else 200, is_random=False)
  valid_acc = xinfo["valid-accuracy"]
else:
  xinfo = api.get_more_info(arch, dataset=dataset, hp=90 if is_size_space else 200, is_random=False)
  valid_acc = xinfo["valid-accuracy"]
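Since the two branches above differ only in the dataset key, they can be folded into one function. This is a sketch reusing the names from the snippet; the helper name query_valid_accuracy is illustrative, not part of the API:

```python
def query_valid_accuracy(api, arch, dataset, is_size_space):
    """Return the validation accuracy of `arch` on `dataset`.

    CIFAR-10 is the only special case: its validation split is
    stored under the separate "cifar10-valid" dataset key.
    """
    dataset_key = "cifar10-valid" if dataset == "cifar10" else dataset
    xinfo = api.get_more_info(
        arch,
        dataset=dataset_key,
        hp=90 if is_size_space else 200,
        is_random=False,
    )
    return xinfo["valid-accuracy"]
```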

@Littleyezi
Author

Thanks!

@D-X-Y
Owner

D-X-Y commented Jun 9, 2021

You are welcome!

@D-X-Y D-X-Y closed this as completed Jun 9, 2021