Performance issue in optimization.utils.Pruning.__call__ #1013
Comments
Thanks for this check! A bit of context:
So right now, not sure what happens, but this is indeed critical :s |
If someone is blocked by the issue: as a workaround, you could think about adding the profiling code to the repo. It would have to be adapted if it is meant to do the same command-line parsing as
the workaround achieves a huge improvement:
|
I think #1014 may solve it |
@dietmarwo what do you mean by this? |
I've cut a release including the fix, let me know if it works for you |
Yes, it works.
The scenario which is described above works now without any workaround.
I get an
/usr/bin/nvidia-modprobe: unrecognized option: "-s"
message, but this is unrelated and does not seem to be a problem.
On Tue, 19 Jan 2021 at 10:35, Jérémy Rapin <notifications@github.com> wrote:
… I've cut a release including the fix, let me know if it works for you
|
what do you mean by this?
Initially I got the issue in tests with huge dimension settings. Then I
tried to find a scenario that is easier to reproduce.
I suspect that with your solution my initial issue is also gone, but this
needs more testing.
On Mon, 18 Jan 2021 at 18:29, Jérémy Rapin <notifications@github.com> wrote:
… Testing with very large dimension (>= 4500) revealed there may be other
issues.
@dietmarwo <https://github.com/dietmarwo> what do you mean by this?
|
"Testing with very large dimension (>= 4500) revealed there may be other
issues."
@dietmarwo <https://github.com/dietmarwo> what do you mean by this?
Actually, with the new version, testing high-dimensional problems (dim >
4500) works fine.
On Mon, 18 Jan 2021 at 18:29, Jérémy Rapin <notifications@github.com> wrote:
… Testing with very large dimension (>= 4500) revealed there may be other
issues.
@dietmarwo <https://github.com/dietmarwo> what do you mean by this?
|
Great to read this! Thanks again for highlighting the issue, that had really been a concern for me for months but I could not identify any actual case that would have such a slowdown. I'll close the issue since this is solved. |
"/usr/bin/nvidia-modprobe: unrecognized option: "-s""
This is related to an outdated NVIDIA CUDA installation. On another machine
with a current CUDA version everything is fine.
One general remark:
Don't underestimate the possible performance gain from the new Python 3.8
feature
https://docs.python.org/3/library/multiprocessing.shared_memory.html.
fcmaes has its own parallel optimization feature
https://github.com/dietmarwo/fast-cma-es/blob/master/fcmaes/retry.py
based on shared memory (an older variant). Compared to nevergrad, at least
twice as many
optimizations can be performed in the same time on a modern 16-core
machine.
It is quite easy to compare, since nevergrad and fcmaes both support the
FCMA optimizer.
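For context, here is a minimal sketch of how the shared_memory module mentioned above works (the array shape and values are illustrative, not taken from fcmaes): two processes can attach to the same named block, so candidate data is shared in place rather than pickled and copied between workers.

```python
from multiprocessing import shared_memory

import numpy as np

# "Writer" side: create a named shared block and view it as a NumPy array.
shm = shared_memory.SharedMemory(create=True, size=4 * 8)  # four float64 values
writer_view = np.ndarray((4,), dtype=np.float64, buffer=shm.buf)
writer_view[:] = [1.0, 2.0, 3.0, 4.0]

# "Reader" side: another process would attach by name instead of copying data.
attached = shared_memory.SharedMemory(name=shm.name)
reader_view = np.ndarray((4,), dtype=np.float64, buffer=attached.buf)
total = float(reader_view.sum())
print(total)  # 10.0

# NumPy views must be released before closing the underlying buffers.
del writer_view, reader_view
attached.close()
shm.close()
shm.unlink()  # free the block once no process needs it anymore
```

In a real setup the reader would live in a separate process and attach using the name passed to it, avoiding the serialization cost of sending candidate arrays through a queue.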
On Fri, 22 Jan 2021 at 09:05, Jérémy Rapin <notifications@github.com> wrote:
… Closed #1013 <#1013>.
|
Seems interesting, but where would you use it in nevergrad? The optimizer itself is not supposed to require sharing anything between multiple processes (except the scipy-based optimizers, which are a bit special... and a pain :s) |
Steps to reproduce
Benchmark used:
Execute the profiler code below
Modify
nevergrad/nevergrad/optimization/utils.py
Lines 269 to 273 in 294aed2
Observed Results
There is a severe performance issue related to new_archive.bytesdict.
If you increase the second budget in
'for budget in [200, 30000]:', the execution time unrelated to the optimizer grows further.
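The measurement described can be reproduced with a generic cProfile harness along these lines (an illustrative sketch, not the original profiler code from the issue; the dummy objective and loop are stand-ins for the actual nevergrad run):

```python
import cProfile
import io
import pstats


def objective(x):
    # Cheap dummy objective; in the real benchmark the cost sits in the
    # optimizer's archive handling (Pruning.__call__), not in the objective.
    return sum(v * v for v in x)


def run(budget, dim=100):
    # Stand-in for an ask/tell loop; with nevergrad this would call
    # optimizer.ask() / optimizer.tell(candidate, loss).
    best = float("inf")
    for step in range(budget):
        candidate = [step % 7 - 3.0] * dim
        best = min(best, objective(candidate))
    return best


reports = {}
for budget in [200, 30000]:
    profiler = cProfile.Profile()
    profiler.enable()
    run(budget)
    profiler.disable()
    stream = io.StringIO()
    # Top 5 entries by cumulative time show where the extra budget goes.
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
    reports[budget] = stream.getvalue()
```

Comparing `reports[200]` and `reports[30000]` makes it easy to spot functions whose cumulative time grows faster than the budget, which is the symptom reported for new_archive.bytesdict.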
Expected Results
Profiler Code