Deactivate logs in benchmark scripts #59
Conversation
frapac
commented
Aug 24, 2021
- some minor formatting
Thanks, @frapac, for the improvements! I was thinking that the user may want to see the progress of the benchmark run. Now that we don't print the solver output, maybe we can at least show which problem is currently being solved?
Other than that, everything looks good 👍
Good point. I think we already have some information with: https://github.com/sshin23/MadNLP.jl/blob/master/benchmark/benchmark-power.jl#L42
Aha, yes, you're right. And yes, I think it would be good to make it a command-line option.
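The suggestion above (keep solver logs off but still report which problem is being solved, controlled by a command-line option) could be sketched roughly as follows. This is an illustrative assumption, not the exact code merged in this PR: the names `PROBLEMS` and `load_problem` are hypothetical placeholders for the benchmark script's own problem list and loader.

```julia
using MadNLP

# Hypothetical sketch: a --verbose flag toggles MadNLP's solver output,
# while per-problem progress is always printed.
verbose = "--verbose" in ARGS

for (i, probname) in enumerate(PROBLEMS)
    # Show progress even when solver logs are deactivated.
    println("Solving problem $i/$(length(PROBLEMS)): $probname")
    nlp = load_problem(probname)  # assumed helper that builds the NLPModel
    madnlp(nlp; print_level = verbose ? MadNLP.INFO : MadNLP.ERROR)
end
```

The design point is that muting `print_level` only silences the solver's iteration log; the script-level `println` keeps a lightweight progress trace regardless.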
Codecov Report

```
@@           Coverage Diff           @@
##           master      #59   +/-   ##
=======================================
  Coverage   87.30%   87.30%
=======================================
  Files          28       28
  Lines        3018     3018
=======================================
  Hits         2635     2635
  Misses        383      383
=======================================
```

Continue to review the full report at Codecov.
* deactivate logs in benchmark scripts
* benchmark: add verbose option to main script

* Add AbstractKKTSystem structure
* implement SparseReducedKKTSystem and SparseAugmentedKKTSystem
* refactor Solver
* Avoid unnecessary allocations by forcing specialization
* Deactivate logs in benchmark scripts (#59)
* deactivate logs in benchmark scripts
* benchmark: add verbose option to main script
* barrier iterations (#61)
* benchmark improvement (#60)
* allocation issue fixed
* added option buffered for NLPModels.jl
* ma27 fix

Co-authored-by: Sungho Shin <sshin@anl.gov>