Documentation updates #545
Conversation
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
fsschneider left a comment:
Thanks for the great PR. The changes look good, just a few minor nits.
I think the answer to the "other hardware" question needs to be modified; the rest of my comments are just minor changes.
README.md (Outdated)

> ### How can I know if my code can be run on benchmarking hardware?
> The benchmarking hardware specifications are documented in the [Getting Started Document](./getting_started.md).
> Please monitor your submission's memory usage so that it does not exceed the available memory
> on the competition hardware.
Should we suggest something like "do a dry run for a few iterations using a cloud instance"?
In the end, they will have to do some runs on the competition hardware themselves anyway. Even if they use compute support, they will have to self-report numbers on the qualification set.
So I am not completely sure what this question is about.
I think the concern is around cross-platform functionality of submissions, and maybe even cross-platform performance guarantees.
README.md (Outdated)

> Please monitor your submission's memory usage so that it does not exceed the available memory
> on the competition hardware.
> ### Are we allowed to use our own hardware to self-report the results?
> No. However you are allowed to use your own hardware to report the best hyperparameter point to qualify for
Maybe we should phrase it a bit more positively?
For the external tuning ruleset, out of the 100 (5x20) runs they have to do, only 5 have to be done on the competition hardware. The rest they can do on their own hardware.
So maybe it is a bit more accurate to say something along the lines of "You only have to use the competition hardware for runs that are directly involved in the scoring procedure. This includes all runs for the self-tuning ruleset, but only the runs of the best hyperparameter configuration in each study for the external tuning ruleset. For example, you could use your own (different) hardware to tune your submission and identify the best hyperparameter configuration (in each study) and then only run this configuration (i.e. 5 runs, one for each study) on the competition hardware."
Or something with a better formulation :)
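To make the run counts in the comment above concrete, here is a minimal sketch (not part of the PR; the 5-study / 20-trial numbers are taken from the comment, and the variable names are made up for illustration):

```python
# Hypothetical sketch: run-count arithmetic for the external tuning ruleset,
# assuming 5 studies with 20 hyperparameter trials each (as quoted above).
NUM_STUDIES = 5            # studies per submission
TRIALS_PER_STUDY = 20      # hyperparameter trials per study

total_tuning_runs = NUM_STUDIES * TRIALS_PER_STUDY  # 100 runs; may be done on your own hardware
scored_runs = NUM_STUDIES                           # only the best trial per study is scored

print(f"Total tuning runs: {total_tuning_runs}")                     # 100
print(f"Runs required on competition hardware: {scored_runs}")       # 5, one per study
```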
Btw., they can use this even when they self-report on the full benchmark set, not only when they use the qualification set.
However, for the qualification set, they still have to use competition hardware for the other scored runs.
ok thanks for the clarification!