Memory growth #359
Comments
I recall a certain PhD thesis showing just that. ;) So it must've been broken since. Maybe I need to force a garbage collection in the solver worker?
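As a rough illustration of that idea, a per-chunk gc.collect() in the worker loop could look something like the sketch below. The queue-based loop and the solve_chunk stub are invented for the example and are not the project's actual worker code.

```python
import gc
import numpy as np

def solve_chunk(chunk):
    # Stand-in for the real per-chunk solve: allocates large temporaries.
    temp = np.zeros_like(chunk)
    return float(temp.sum())

def solver_worker(chunk_queue, result_queue):
    # Illustrative worker loop: pull chunks until a None sentinel arrives.
    while True:
        chunk = chunk_queue.get()
        if chunk is None:
            break
        result_queue.put(solve_chunk(chunk))
        # Drop the reference to the chunk and force a collection so that any
        # reference cycles created during the solve are freed before the next
        # chunk, rather than accumulating until the cyclic collector runs.
        del chunk
        gc.collect()
```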
This frankly silly factor of 2 improvement will likely help people struggling with memory problems. Will get it into master ASAP.
Closed via #360.
While attempting to generate some sort of empirical wisdom relating to memory footprint, my experiments showed the following:
This is the output of a memory_profiler run with --dist-ncpu 3 on my laptop. Ignore child 3 and 4. The black curve is the overall memory usage. Child 0 is the I/O process, and it seems well behaved. However, I cannot fathom the ramping in child 1 and 2; in my mind, they definitely shouldn't grow with time. This suggests that something is stored between tiles - I have tried to find it, but to no avail. @o-smirnov, have you noticed this behaviour? Or do you perhaps have some intuition regarding its origin? Note that this growth doesn't occur in the serial case. Additionally, it seems tied to the number of chunks processed by each worker: if we look at the same experiment as above but with --dist-ncpu 5, we see that the memory footprint of each worker is slightly lower at the end (though the growth is still apparent).

My feeling is that the memory usage of the workers should have a heart-beat pattern, increasing when they allocate their temporary arrays and decreasing as they finish with a chunk.
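Purely as an illustration of the suspected mechanism (none of the class or function names below come from the actual codebase): a worker that retains any per-tile data between chunks produces exactly the kind of linear ramp shown above, while a worker that holds no state between chunks shows the expected heart-beat pattern.

```python
import numpy as np

class LeakyWorker:
    """Toy worker that keeps a reference to every chunk it has processed."""
    def __init__(self):
        self._history = []                  # anything kept here outlives the chunk

    def process_chunk(self, chunk):
        temp = np.fft.fft(chunk)            # large temporary, freed on return
        self._history.append(chunk.copy())  # retained between tiles: memory ramps linearly
        return float(np.abs(temp).mean())

class WellBehavedWorker:
    """Toy worker that holds no state between chunks."""
    def process_chunk(self, chunk):
        temp = np.fft.fft(chunk)            # allocated per chunk...
        return float(np.abs(temp).mean())   # ...and released on return: heart-beat pattern

if __name__ == "__main__":
    leaky, clean = LeakyWorker(), WellBehavedWorker()
    for _ in range(50):
        chunk = np.random.rand(1_000_000)
        leaky.process_chunk(chunk)          # memory grows with every chunk
        clean.process_chunk(chunk)          # memory stays roughly flat across chunks
```

Running both under memory_profiler should reproduce the two shapes: a steady ramp for the first worker and a roughly flat, saw-toothed trace for the second.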