leverage shared memory #27
Same question for me.
Yes, this is on the 'todo' list. The good thing is I already have some toy code for this; it's just not ready for general-purpose use in the trunk. I should get to it post-release.
Is it possible to leverage shared memory now? I have an issue very similar to the OP's in this thread: http://stackoverflow.com/questions/28740955/working-with-pathos-multiprocessing-tool-in-python-and I have a dict that gets updated as part of a decorator and, like your example in the thread, it ends up empty. Are there any new developments I might be able to use for this? Thanks, I really appreciate the effort going into pathos. I just started using it today and it's great!
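The empty-dict symptom above happens because each worker process mutates its own copy of a plain dict. A minimal sketch of one workaround, using a `Manager` dict proxy so updates travel back to the parent (shown with the stdlib `multiprocessing`; `multiprocess`, the pathos fork, exposes the same API):

```python
# A plain dict updated inside a worker stays local to that process and
# looks empty in the parent; a Manager dict is a proxy object that
# forwards updates back to a manager process.
from multiprocessing import Manager, Pool

def record(args):
    shared, key = args
    shared[key] = key * key      # this update travels through the proxy
    return key

def demo():
    with Manager() as mgr:
        shared = mgr.dict()
        with Pool(2) as pool:
            pool.map(record, [(shared, k) for k in range(4)])
        return dict(shared)      # copy out before the manager shuts down

if __name__ == "__main__":
    print(demo())                # {0: 0, 1: 1, 2: 4, 3: 9}
```

Note this is message-passing through a manager process, not true shared memory, so it is convenient rather than fast for large data.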
Shared memory can be accessed from `multiprocess` (the pathos fork of `multiprocessing`), which exposes the same shared-ctypes objects (`Value`, `Array`) as the standard library. I think this is sufficient, unless there's a need for a shared memory interface beyond that.
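A short sketch of those shared-ctypes objects, which is the shared memory you can use today (written against the stdlib `multiprocessing`; `multiprocess` mirrors this API, so the same code works with either import):

```python
# Value and Array allocate their data in shared memory, so a child
# process mutates the parent's copy in place instead of a pickle of it.
from multiprocessing import Process, Value, Array

def fill(total, slots):
    with total.get_lock():        # Value carries its own lock by default
        total.value += len(slots)
    for i in range(len(slots)):
        slots[i] = i * 2          # writes land in the shared block

def demo():
    total = Value("i", 0)         # shared 32-bit int
    slots = Array("d", 5)         # shared double[5], zero-initialised
    p = Process(target=fill, args=(total, slots))
    p.start()
    p.join()
    return total.value, list(slots)

if __name__ == "__main__":
    print(demo())                 # (5, [0.0, 2.0, 4.0, 6.0, 8.0])
```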
@mmckerns thanks for pathos. Does this mean that pathos solves, or greatly alleviates, the slow argument-passing of the original standard multiprocessing pool (by leveraging shared memory or equivalently fast methods) without the caller having to explicitly set up shared memory in their user code?
@matanster: Unfortunately, no.
|
Hey, thanks a lot for letting me know. I started writing something that encapsulates a function call with Python's shared memory support; otherwise, wrapper functions combined with Python's standard primitives. I guess Ray provides one example of a design for shared memory in the context of concurrent dispatch. (Having mentioned numpy: some of numpy is actually concurrent as is, which goes against the idea of specializing concurrent dispatch for numpy.)
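A sketch of the kind of wrapper discussed above, using Python's `multiprocessing.shared_memory` (3.8+): the parent allocates a block, a child writes results straight into it, and the parent reads them back without pickling the bulk data. `struct` is used here to keep the example stdlib-only; `numpy.frombuffer(shm.buf)` would wrap the same buffer as an array.

```python
# Zero-copy result passing: only the block's name crosses the process
# boundary, the data itself stays in one shared mapping.
import struct
from multiprocessing import Process, shared_memory

N = 4
FMT = f"{N}d"                      # N doubles

def square_into(name):
    shm = shared_memory.SharedMemory(name=name)   # attach by name
    vals = struct.unpack_from(FMT, shm.buf)
    struct.pack_into(FMT, shm.buf, 0, *(v * v for v in vals))
    shm.close()

def demo():
    shm = shared_memory.SharedMemory(create=True, size=struct.calcsize(FMT))
    try:
        struct.pack_into(FMT, shm.buf, 0, *range(N))   # 0.0 .. 3.0
        p = Process(target=square_into, args=(shm.name,))
        p.start()
        p.join()
        return list(struct.unpack_from(FMT, shm.buf))
    finally:
        shm.close()
        shm.unlink()               # the creator frees the block

if __name__ == "__main__":
    print(demo())                  # [0.0, 1.0, 4.0, 9.0]
```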
I'm not sure how close this topic is to the core goals of pathos.
It sounds interesting. You might want to look at |
Thanks, it's useful to have these available as tested sub-components à la carte. The caching one too.
I think that by simplifying the `SharedMemory` interface just a tiny bit with judicious wrapping, shared memory can become about as simple to apply to a program as it gets, thus expediting many single-machine concurrency workloads involving large data by a significant margin.
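A minimal sketch of that "judicious wrapping" idea: a hypothetical `SharedDoubles` helper that hides `SharedMemory`'s allocate/close/unlink bookkeeping behind a list-like, context-managed interface. The class name and API are illustrative, not part of pathos or the stdlib; a fuller version would also let other processes attach by `name`.

```python
# Wrapping SharedMemory so callers index it like a list and never touch
# the raw buffer or cleanup calls directly.
import struct
from multiprocessing import shared_memory

class SharedDoubles:
    """Owns a shared-memory block holding n doubles (hypothetical helper)."""

    def __init__(self, n):
        self._n = n
        self._shm = shared_memory.SharedMemory(create=True, size=8 * n)

    @property
    def name(self):
        # Other processes could attach to the block via this name.
        return self._shm.name

    def __len__(self):
        return self._n

    def __getitem__(self, i):
        return struct.unpack_from("d", self._shm.buf, 8 * i)[0]

    def __setitem__(self, i, v):
        struct.pack_into("d", self._shm.buf, 8 * i, v)

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self._shm.close()
        self._shm.unlink()         # owner frees the block on exit

if __name__ == "__main__":
    with SharedDoubles(3) as xs:
        xs[0], xs[1], xs[2] = 1.0, 2.0, 3.0
        print([xs[i] for i in range(len(xs))])   # [1.0, 2.0, 3.0]
```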
…could leverage shared memory using `ctypes`.
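A sketch of that `ctypes` angle: `multiprocessing.sharedctypes` can place a whole `ctypes.Structure` in shared memory, so workers mutate its fields in place instead of pickling results back (again, `multiprocess` mirrors this stdlib API).

```python
# A ctypes Structure allocated in shared memory; the child's field
# writes are visible to the parent after join().
import ctypes
from multiprocessing import Process, sharedctypes

class Point(ctypes.Structure):
    _fields_ = [("x", ctypes.c_double), ("y", ctypes.c_double)]

def move(pt):
    pt.x += 1.0        # writes land in the shared block
    pt.y += 2.0

def demo():
    # lock=False returns the raw shared ctypes object, no wrapper lock
    pt = sharedctypes.Value(Point, 0.0, 0.0, lock=False)
    p = Process(target=move, args=(pt,))
    p.start()
    p.join()
    return pt.x, pt.y

if __name__ == "__main__":
    print(demo())      # (1.0, 2.0)
```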