Conversation

@priyakasimbeg (Contributor)

No description provided.

@priyakasimbeg priyakasimbeg changed the title Documentation update Documentation updates Oct 11, 2023
github-actions bot commented Oct 11, 2023

MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅

@priyakasimbeg priyakasimbeg marked this pull request as ready for review October 12, 2023 19:37
@priyakasimbeg priyakasimbeg requested a review from a team as a code owner October 12, 2023 19:37
@fsschneider (Contributor) left a comment

Thanks for the great PR. The changes look good, just a few minor nits.

I think the answer to the "other hardware" question needs to be modified, but the rest of my comments are just minor changes.

README.md Outdated
### How can I know if my code can be run on benchmarking hardware?
The benchmarking hardware specifications are documented in the [Getting Started Document](./getting_started.md).
Please monitor your submission's memory usage so that it does not exceed the available memory
on the competition hardware.
Contributor:
Should we suggest something like "do a dry run for a few iterations using a cloud instance"?
In the end, they will have to do some runs on the competition hardware themselves anyway. Even if they use compute support, they will have to self-report numbers on the qualification set.
So I am not completely sure what this question is about.

@priyakasimbeg (author):

I think the concern is around cross-platform functionality of submissions, and maybe even cross-platform performance guarantees.

README.md Outdated
Please monitor your submission's memory usage so that it does not exceed the available memory
on the competition hardware.
### Are we allowed to use our own hardware to self-report the results?
No. However you are allowed to use your own hardware to report the best hyperparameter point to qualify for
Contributor:

Maybe we should phrase it a bit more positively?
For the external tuning ruleset, out of the 100 (5×20) runs they have to do, only 5 have to be on the competition hardware; the rest they can do on their own hardware.

So maybe it is a bit more accurate to say something along the lines of "You only have to use the competition hardware for runs that are directly involved in the scoring procedure. This includes all runs for the self-tuning ruleset, but only the runs of the best hyperparameter configuration in each study for the external tuning ruleset. For example, you could use your own (different) hardware to tune your submission and identify the best hyperparameter configuration (in each study) and then only run this configuration (i.e. 5 runs, one for each study) on the competition hardware."

Or something with a better formulation :)
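The external-tuning arithmetic described above (5 studies × 20 trials = 100 tuning runs, of which only the 5 per-study winners are re-run on competition hardware) can be sketched in a short Python snippet. Everything here is a hypothetical illustration of the selection logic, not the official scoring code; all names and the toy timing numbers are assumptions.

```python
# Hypothetical sketch of the external tuning ruleset selection:
# submitters may tune 5 studies x 20 trials on their own hardware,
# then only the best trial per study (5 runs total) needs to be
# repeated on the competition hardware. Not the official scoring code.

NUM_STUDIES = 5        # assumed, per the comment above
TRIALS_PER_STUDY = 20  # assumed, per the comment above

def best_trial_per_study(tuning_results):
    """tuning_results maps (study, trial) -> time-to-target in seconds.
    Returns the fastest trial index for each study."""
    best = {}
    for (study, trial), seconds in tuning_results.items():
        if study not in best or seconds < best[study][1]:
            best[study] = (trial, seconds)
    return {study: trial for study, (trial, _) in best.items()}

# Toy data: in study s, trial (s * 3) % 20 reaches the target fastest.
results = {
    (s, t): 100.0 + abs(t - (s * 3) % 20)
    for s in range(NUM_STUDIES)
    for t in range(TRIALS_PER_STUDY)
}
winners = best_trial_per_study(results)
print(winners)  # 5 study->trial pairs: the only competition-hardware runs
```

Under these toy timings, the sketch picks one winning trial per study, so a submitter would re-run just those 5 configurations on the competition hardware.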

Contributor:

Btw., they can do this even when they self-report on the full benchmark set, not only if they use the qualification set.

However, for the qualification set, they still have to use competition hardware for other scored runs.

@priyakasimbeg (author):

Ok, thanks for the clarification!

@priyakasimbeg priyakasimbeg merged commit 909e17c into dev Oct 23, 2023
@github-actions github-actions bot locked and limited conversation to collaborators Oct 23, 2023
@priyakasimbeg priyakasimbeg deleted the documentation_update branch November 2, 2023 22:23