Is multi-processing supported? #35
Not with live mode. We don't have any way right now for one UI to ingest data from multiple processes. What we do have is support for following forked child processes. This will only work if the child is forking and not exec'ing, meaning that it will be able to gather meaningful data if you use a fork-based multiprocessing start method.
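The fork-versus-spawn distinction above can be sketched with the standard library alone (no memray required); the behaviour shown here is plain `multiprocessing`, and it assumes a Unix platform where the `fork` start method exists:

```python
import multiprocessing as mp

def _worker(q):
    # Runs in the child; with the fork start method, the child begins life
    # as a copy-on-write clone of the parent's address space.
    q.put("hello from fork child")

def run_in_fork():
    # "fork" clones the parent's address space, so anything injected into
    # the parent (e.g. a profiler's hooks) is still mapped in the child.
    # "spawn" exec's a brand-new interpreter, and that state is lost.
    ctx = mp.get_context("fork")
    q = ctx.Queue()
    p = ctx.Process(target=_worker, args=(q,))
    p.start()
    msg = q.get()
    p.join()
    return msg

print(run_in_fork())
```

Switching `"fork"` to `"spawn"` in the sketch still passes the message, but the child would then be a fresh exec'd interpreter rather than a clone of the parent.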
Are there any plans to create or extend a reporter to accept and integrate data from multiple capture files? I'm wrapping a multi-worker gunicorn process with memray, and I end up with one capture file per worker. Inspecting them separately is useful, but inspecting them all merged together would also provide some insights.
There aren't any such plans. When we discussed this amongst ourselves, the consensus was that trying to analyze information from multiple processes at the same time was likely to cause more confusion than anything else, and we had trouble coming up with any case where seeing, say, multiple workers at once would tell you anything that you wouldn't be able to identify by analyzing them individually. In fact, for the gunicorn case, I would think that what makes the most sense is just to drop the number of workers down to 1 while you're investigating, so that all requests reach the same worker instance. But you might be seeing something we didn't: can you describe a case where there's some interesting feature of the memory usage of a pool of worker processes that would be difficult to identify by looking at their allocations individually, but easy to identify by looking at them in aggregate?
Great. The script I provided is to trace the dataloader workers. So does that mean we cannot use memray to trace copy-on-write (COW) memory?
When a process is forked, the memory maps are shared between the parent and the child until a write happens, as you indicate. When the write happens, it triggers an implicit interrupt (a page fault) that is handled directly by the kernel, which copies the affected page for the writing process. This means that all of this happens in kernel space and is therefore invisible to tools that hook userspace allocator calls. So the answer, sadly, is that it is very unlikely that you can use many common profilers to properly trace COW unless they allow kernel-level instrumentation.
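A minimal stdlib sketch of the copy-on-write behaviour described above: the child's write is absorbed by the kernel's page-fault handler, and the parent's copy stays untouched. No userspace allocator call is involved in the page duplication, which is exactly why allocator-hooking profilers can't see it (assumes a Unix platform with the `fork` start method):

```python
import multiprocessing as mp

# A large-ish structure created before the fork; its pages are shared
# copy-on-write between parent and child.
data = list(range(100_000))

def _child(q):
    # This write triggers the kernel's page-fault handler, which silently
    # duplicates the touched page for the child. No malloc/free happens
    # here, so an allocator-hooking profiler never sees the copy.
    data[0] = -1
    q.put(data[0])

def cow_demo():
    ctx = mp.get_context("fork")
    q = ctx.Queue()
    p = ctx.Process(target=_child, args=(q,))
    p.start()
    child_value = q.get()
    p.join()
    # The parent's copy is untouched: the kernel copied the page, not us.
    return child_value, data[0]

print(cow_demo())
```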
This is pretty reasonable. Thank you guys so much!
Thank you guys for this amazing beautiful cool tool!
Feature Request

I have been dealing with some memory problems related to the pytorch dataloader for several days, and I just tried memray with a simple script. I found that in live mode the information for the main process is reported, but all worker processes are detected as threads and no information is reported for them.

Screenshot of main process:

Screenshot of other process:

The following command is used:

memray run --live simple_multi_worker.py

Is there a way to observe multi-processing information?
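The "simple script" referenced above did not survive extraction. As a rough stand-in (pure stdlib, no pytorch), something like the following reproduces the shape of a multi-worker loader, where each pool worker is a forked child process doing its own allocations, much as a `DataLoader` with `num_workers > 0` would:

```python
import multiprocessing as mp

def load_item(i):
    # Stand-in for the per-item work a dataloader worker would do:
    # allocate something worth profiling and hand it back.
    return [0] * 1000

def main():
    # Two forked workers pull items from the range, like num_workers=2.
    with mp.get_context("fork").Pool(processes=2) as pool:
        batches = pool.map(load_item, range(8))
    return len(batches)

print(main())
```

Running this under `memray run --live` shows the same symptom the report describes: the main process is tracked, while the forked pool workers allocate outside the live view.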