Top-5%/top-64 computation #11
Hello,

Thanks for a great paper.

When you compute the top-5%/top-64 score (Tables 4, 11), how many architectures are there in total? Is it 3000 architectures (warmup only), or the size of the entire dataset?

Cheers,
Ekaterina

Comments

Off the top of my head, I would say all models from the search space were included.

Correct. They are the top-64 models in the entire search space. The idea is to quantify the degree to which zero-cost warmup improves the sampled architectures. If you took 64 random models, the number of top-5% models among them would on average be 5% of 64 ≈ 3 models. However, when we use a zero-cost metric like synflow and take its top 64 models in the search space, that number increases significantly, as shown in the tables. So this comparison shows the best-case scenario for zero-cost warmup. It would be interesting to also try it with smaller warmup sizes, as you suggested, and that should be fairly straightforward to do. If you end up doing this experiment, we'd love a pull request :)
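For concreteness, the top-5%/top-64 metric discussed in this thread can be sketched roughly as follows. This is a minimal illustration, not the paper's code: the model count, the synthetic accuracies, and the noisy `zero_cost_score` proxy are all made-up stand-ins, and the only real assumption is the metric's definition (how many of the 64 highest-scoring models land in the true top 5% by accuracy).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: true validation accuracies and zero-cost
# (synflow-style) scores for every model in a search space.
n_models = 10_000
accuracy = rng.normal(size=n_models)
# A proxy that is correlated with accuracy but noisy (made up for illustration).
zero_cost_score = 0.5 * accuracy + rng.normal(size=n_models)

# Ground-truth set: the top 5% of models by true accuracy.
k_top = int(0.05 * n_models)
top_5pct = set(np.argsort(accuracy)[-k_top:])

def top5pct_in_top64(scores):
    """Count how many of the 64 highest-scoring models are in the true top 5%."""
    top64 = np.argsort(scores)[-64:]
    return sum(1 for m in top64 if m in top_5pct)

# Baseline: 64 models drawn uniformly at random -> about 5% of 64 ≈ 3 expected.
random_pick = rng.choice(n_models, size=64, replace=False)
print("random 64:   ", sum(1 for m in random_pick if m in top_5pct))
print("proxy top-64:", top5pct_in_top64(zero_cost_score))
```

With a proxy correlated with accuracy, the second count typically exceeds the ~3 expected from random sampling, which is exactly the gain the tables quantify.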