Add (new) mpl benchmarks #404
Conversation
(force-pushed from c8e1ca8 to c071892)
(force-pushed from 6a57471 to 44a9657)
Thanks for adding the benchmarks @Firobe!
The config files have the correct benchmarks added to them. However, the benchmarks/mpl/bench directory contains benchmarks that are already present in sandmark. Perhaps we can remove the already available ones? You can see the existing parallel benchmarks in multicore-numerical and in the other directories with the multicore- prefix.
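As context for "the config files" mentioned above: sandmark drives its runs from JSON run-config files, one entry per benchmark executable. The sketch below is illustrative only; the field names follow my reading of sandmark's run_config.json format, and the convention that the first number in params is the core count (1, 2, 4, 8, 16, 32 in this PR) while the second is the input size is an assumption, not a confirmed schema.

```json
{
  "executable": "benchmarks/mpl/bench/msort_ints.exe",
  "name": "msort_ints",
  "runs": [
    { "params": "1 10000000" },
    { "params": "2 10000000" },
    { "params": "4 10000000" },
    { "params": "8 10000000" }
  ]
}
```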
(force-pushed from 44a9657 to 997c358)
I think it should now be good to go!
(force-pushed from 2c6c991 to 3953472)
Thanks for the updates. Looks good to me!
Thanks for the review! LGTM.
Do these benchmarks appear on the sandmark nightly results? @shakthimaan If not, can you make them appear on the nightly results?
We should add a …
Do we know that these benchmarks deserve …
One way to find out is to add the …
All the benchmarks have a …
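As a hedged illustration of the tag mechanism under discussion: sandmark run-config entries can carry a tags list that downstream tooling filters on, and to the best of my knowledge the macro_bench tag plays this role elsewhere in the repository, so an entry without the right tag would not be picked up by the nightly runs. A hypothetical tagged entry (fields are assumptions, not the thread's exact wording):

```json
{
  "executable": "benchmarks/mpl/bench/primes.exe",
  "name": "primes",
  "tags": ["macro_bench"],
  "runs": [{ "params": "1 100000000" }]
}
```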
In addition to the missing …, I also think this is the reason why the graph500 results don't appear in the nightly results. It appears that this part of the codebase needs a little bit of love and attention. Someone should: …
Unfortunately, the current state is less than ideal. We don't yet have the tools or processes to review and catch such issues before this PR and similar ones are merged.
Thanks for the review of the benchmarks and the notes. I have not looked at the new mpl benchmarks in this PR, but I agree that the current state of the benchmarks should be improved.
The graph500 results were disabled because the benchmark implementation was not scalable; we have hidden them from the UI until the scalability is fixed. Building a checklist for contributors and/or reviewers would be a good first step, followed by some tooling to ensure that at least the easily verifiable checklist requirements are followed.
Tooling here is a bit tricky. One could possibly build this as a …
Thanks for the clarification here.
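Building on the tooling idea above, one possible shape it could take is a small OCaml script, using the yojson library, that scans a run-config file and flags untagged entries, under the assumption that untagged benchmarks are skipped by the nightly runs. The JSON schema and field names here are assumptions, not sandmark's confirmed format.

```ocaml
(* check_tags.ml: flag run-config entries that carry no tags.
   Assumes a JSON file with a top-level "benchmarks" array whose
   entries have a "name" and an optional "tags" field; these field
   names are illustrative, not sandmark's confirmed schema. *)
let () =
  let open Yojson.Safe.Util in
  let config = Yojson.Safe.from_file Sys.argv.(1) in
  config |> member "benchmarks" |> to_list
  |> List.iter (fun bench ->
         let name = bench |> member "name" |> to_string in
         let tags =
           match bench |> member "tags" with
           | `Null -> []
           | t -> t |> to_list |> List.map to_string
         in
         if tags = [] then
           Printf.printf "%s: no tags, may be skipped by nightly runs\n"
             name)
```

Something like this could run in CI over each run-config file so that the easily verifiable checklist items are caught mechanically (compiled with, e.g., ocamlfind ocamlopt -package yojson -linkpkg check_tags.ml -o check_tags).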
Okay, I'm going to update the …
Thanks @Firobe 👍
See #439
Close #401
Add msort_ints, msort_strings, primes, tokens, raytracer from https://github.com/MPLLang/parallel-ml-bench/tree/main/ocaml/bench to the parallel benchmarks (with 1, 2, 4, 8, 16, 32 cores each) on Turing and Navajo.
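The benchmark sources themselves are not reproduced in this thread. As a rough sketch of the shape these ported benchmarks take, below is a minimal Domainslib fork/join mergesort in the spirit of msort_ints; the grain size, input size, and argument handling are illustrative assumptions, not the PR's actual code.

```ocaml
(* A minimal fork/join mergesort sketch in the spirit of msort_ints.
   Not the PR's code: grain size, input size, and CLI handling are
   illustrative. Requires the domainslib library. *)
module T = Domainslib.Task

let grain = 4096 (* sequential cutoff; real benchmarks tune this *)

(* Merge the sorted slices [lo, mid) and [mid, hi) of a in place. *)
let merge a lo mid hi =
  let tmp = Array.make (hi - lo) 0 in
  let i = ref lo and j = ref mid in
  for k = 0 to hi - lo - 1 do
    if !i < mid && (!j >= hi || a.(!i) <= a.(!j)) then begin
      tmp.(k) <- a.(!i); incr i
    end else begin
      tmp.(k) <- a.(!j); incr j
    end
  done;
  Array.blit tmp 0 a lo (hi - lo)

(* Sort the slice [lo, hi): sequentially below the grain size,
   otherwise fork the left half and recurse on the right. *)
let rec msort pool a lo hi =
  if hi - lo <= grain then begin
    let sub = Array.sub a lo (hi - lo) in
    Array.sort compare sub;
    Array.blit sub 0 a lo (hi - lo)
  end else begin
    let mid = lo + (hi - lo) / 2 in
    let left = T.async pool (fun () -> msort pool a lo mid) in
    msort pool a mid hi;
    T.await pool left;
    merge a lo mid hi
  end

let () =
  (* First argument: core count (1, 2, 4, ... as in this PR). *)
  let cores = try int_of_string Sys.argv.(1) with _ -> 4 in
  let n = 1_000_000 in
  let a = Array.init n (fun _ -> Random.int n) in
  let pool = T.setup_pool ~num_domains:(cores - 1) () in
  T.run pool (fun () -> msort pool a 0 n);
  T.teardown_pool pool;
  for i = 1 to n - 1 do assert (a.(i - 1) <= a.(i)) done
```

The fork/join structure (async the left half, recurse on the right, await, then merge) mirrors the par primitive that MPL programs are built around, which is one reason these benchmarks port naturally to Domainslib.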