This repository has been archived by the owner on Jul 3, 2024. It is now read-only.
Hi,
Thanks for open-sourcing the code behind the benchmark.
I am trying to reproduce some results. The script `run_evaluation.py` generates checkpoints and a `results.txt` file for each relevant model in the graph, but not the `nasbench.tfrecord` file. Could you please point me to the appropriate script? So far it appears to be missing.
Thanks & Regards
K. Rene Traore
Sorry for not responding to this question earlier. I no longer receive notifications for this repo, but I wanted to make sure this gets a response, even if it is late.
I don't have the original script which generated the TFRecord file anymore but I can generally explain how it was generated.
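Since the original aggregation script is gone, here is a minimal sketch of how per-model results could be packed into a TFRecord-style file. The TFRecord container framing itself is documented (a little-endian uint64 length, a masked CRC-32C of the length, the payload bytes, and a masked CRC-32C of the payload), so this part is reliable; however, the payload here is a hypothetical JSON record, not the exact serialization NAS-Bench-101 used, and the field names (`module_hash`, `final_test_accuracy`) are illustrative assumptions only.

```python
import json
import struct

# CRC-32C (Castagnoli, reflected polynomial 0x82F63B78), the checksum
# TFRecord framing uses. Pure-Python table-driven implementation so no
# TensorFlow install is required.
_CRC_TABLE = []
for _i in range(256):
    _c = _i
    for _ in range(8):
        _c = (_c >> 1) ^ 0x82F63B78 if _c & 1 else _c >> 1
    _CRC_TABLE.append(_c)

def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc = _CRC_TABLE[(crc ^ byte) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF

def masked_crc(data: bytes) -> int:
    # TFRecord stores a rotated-and-offset ("masked") CRC-32C.
    crc = crc32c(data)
    return (((crc >> 15) | (crc << 17)) + 0xA282EAD8) & 0xFFFFFFFF

def write_tfrecord(path, payloads):
    """Write an iterable of bytes payloads using TFRecord framing:
    uint64 length, uint32 masked CRC of length, data, uint32 masked CRC of data."""
    with open(path, "wb") as f:
        for data in payloads:
            length = struct.pack("<Q", len(data))
            f.write(length)
            f.write(struct.pack("<I", masked_crc(length)))
            f.write(data)
            f.write(struct.pack("<I", masked_crc(data)))

# Hypothetical aggregation step: one JSON payload per evaluated model,
# as might be collected from the per-model results.txt files.
results = [{"module_hash": "abc123", "final_test_accuracy": 0.91}]
write_tfrecord("nasbench_demo.tfrecord",
               (json.dumps(r).encode("utf-8") for r in results))
```

A file written this way can be read back with `tf.data.TFRecordDataset`, but again, downstream tools like the NAS-Bench-101 API expect their own specific payload schema inside each record, which this sketch does not reproduce.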