Speedup 09: Use shared memory for hypo-dd write_correlations for 20 % speedup #529
Conversation
I have always wanted to use shared memory for these kinds of jobs, but have never seen speed-ups that warranted the time spent puzzling over how to do it. Good to see that moving the data into shared memory helps, albeit not as much as it could given the copying. At some point it would be good to write some lower-level code (in C) to do this all properly, so that we can actually access memory directly. The function for moving data to shared memory looks really helpful though, and should be a useful starting point for moving more to shared memory. I wonder if locks could help avoid the need to copy data to the worker memory?
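To illustrate the kind of helper being discussed (a hedged sketch, not the function from this PR; the names `array_to_shared_memory` and `attach_shared_array` are made up), the parent process can copy a NumPy array into a `multiprocessing.shared_memory` block once, and workers can then attach to it by name instead of receiving a pickled copy:

```python
# Hypothetical sketch -- not this PR's actual implementation.
import numpy as np
from multiprocessing import shared_memory


def array_to_shared_memory(data):
    """Copy a NumPy array into a new shared-memory block.

    Returns the SharedMemory handle (keep a reference to it in the parent so
    the block is not destroyed) plus the metadata a worker needs to attach.
    """
    shm = shared_memory.SharedMemory(create=True, size=data.nbytes)
    shared_view = np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)
    shared_view[:] = data[:]  # the one unavoidable copy into shared memory
    return shm, (shm.name, data.shape, data.dtype)


def attach_shared_array(name, shape, dtype):
    """In a worker process: attach to an existing block by name."""
    shm = shared_memory.SharedMemory(name=name)
    return shm, np.ndarray(shape, dtype=dtype, buffer=shm.buf)
```

Attaching in the worker costs no copy; a copy is only needed again if the worker has to own or modify the data independently.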
I'll have to look into how I can make locks work with this shared memory...
I had a look into locks and couldn't see an obvious documented way to solve this problem. It might well be a dead end.
Hmm, it seems odd that Windows still doesn't like the shared memory filenames. Any idea what could be going on there?
(commit: "…t so Windows does not destroy them")
I think I found it finally :-)
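For context on the Windows behaviour: a `SharedMemory` block is backed by a named mapping that the operating system destroys once every open handle to it is closed, so on Windows the block can disappear while workers still expect to attach to it. A minimal sketch of the kind of fix hinted at in the commit above, keeping the parent's handles referenced until the workers are done (function names are illustrative, not from this PR):

```python
# Illustrative sketch: keep parent-side references so Windows does not destroy
# the shared-memory blocks while worker processes still need them.
from multiprocessing import shared_memory

shm_handles = []  # references held by the parent until all workers finish


def publish(data_bytes):
    """Put raw bytes into shared memory and return the block's name."""
    shm = shared_memory.SharedMemory(create=True, size=len(data_bytes))
    shm.buf[:len(data_bytes)] = data_bytes
    shm_handles.append(shm)  # keeping this reference prevents early destruction
    return shm.name


def cleanup():
    """Release the blocks only after every worker has detached."""
    for shm in shm_handles:
        shm.close()
        shm.unlink()  # no-op on Windows, where closing the last handle frees it
    shm_handles.clear()
```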
Looks good to me @flixha - Thanks!
What does this PR do?
Adds an option to move trace data into shared memory for `utils.catalog_to_dd.write_correlations`, for a ~20 % speedup.
Why was it initiated? Any relevant Issues?
For a seismic swarm with ~1600 densely clustered events recorded by ~80 stations, I noticed that I was not able to make full use of all CPU cores when running `utils.catalog_to_dd.write_correlations` in parallel. I suspect this is due to the large amount of data that has to be passed to each worker process (all event, pick, and stream objects).
This PR contributes to the summary issue #522.
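To make the intended use concrete, here is a hedged usage sketch: the module path `eqcorrscan.utils.catalog_to_dd.write_correlations` is the real function, but the keyword `use_shared_memory`, the file names, and the parameter values are assumptions for illustration and may not match this PR's actual API.

```python
# Hedged usage sketch -- the flag name `use_shared_memory` and the file paths
# are assumptions; check the PR diff for the real keyword this option uses.
from obspy import read, read_events
from eqcorrscan.utils.catalog_to_dd import write_correlations

catalog = read_events("swarm_catalog.xml")          # hypothetical catalog file
stream_dict = {                                     # one waveform Stream per event
    str(event.resource_id): read(f"waveforms/event_{i}.ms")
    for i, event in enumerate(catalog)
}

write_correlations(
    catalog, stream_dict,
    extract_len=2.0, pre_pick=0.5, shift_len=0.2,   # typical values, not from the PR
    max_workers=None,                                # use all available cores
    use_shared_memory=True,                          # hypothetical: the option added here
)
```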
PR Checklist
- [ ] `develop` base branch selected?
- [ ] Changes have been added to `CHANGES.md`.
- [ ] First time contributors have added your name to `CONTRIBUTORS.md`.