Make semaphore_tracker track other system resources #81048
Hi all, Olivier Grisel, Thomas Moreau and myself are currently working on improving multiprocessing.

multiprocessing.semaphore_tracker is a little-known module that launches a dedicated tracker process: the other processes of a program tell it about every named semaphore they create, and the tracker unlinks any semaphore that is still registered when the program exits.

A note on why the semaphore_tracker was introduced: cleaning up semaphores matters because named semaphores are system-wide resources that survive the processes that created them until they are explicitly unlinked, and leaking them can exhaust a system-wide limit.

Now, Python 3.8 introduces shared memory segment creation. Shared memory is a system-wide resource as well: a segment that is never unlinked outlives the interpreter that created it. For this reason, we expanded the semaphore_tracker to also track shared memory segments. Additionally, supporting shared memory tracking led to a more generic design, in which the tracker knows, for each resource type, which cleanup function to call on a leaked resource.

Therefore, this issue serves two purposes: tracking shared memory segments (and not only semaphores) in the standard library, and discussing whether this more generic tracker should be exposed through a public API.
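To make the failure mode concrete, here is a minimal illustration (a sketch, assuming Python 3.8's multiprocessing.shared_memory module; the exact warning wording may differ):

from multiprocessing import shared_memory

# Create a named segment and never call unlink(); in real code this would
# typically happen because a process crashed or was killed before its
# cleanup ran.
shm = shared_memory.SharedMemory(create=True, size=1024)
shm.close()

# At interpreter shutdown, the tracker notices the segment was registered but
# never unlinked: it removes it (so /dev/shm is not polluted on Linux) and
# emits a warning along the lines of
# "resource_tracker: There appear to be 1 leaked shared_memory objects
#  to clean up at shutdown".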
|
Shared memory segments are now tracked by the brand new resource_tracker! Does anyone have an opinion on introducing a public API for users to make the resource_tracker track their own resources? What we have in mind is a make_trackable(resource_type, cleanup_func) function. Under the hood, make_trackable simply populates resource_tracker._CLEANUP_FUNCS with a new entry. Here is a simple example:

import shutil
import tempfile
from multiprocessing import resource_tracker, util

class ClassCreatingAFolder:
    """Class where each instance creates a temporary folder."""

    def __init__(self):
        folder_name = tempfile.mkdtemp()
        # any instance normally garbage-collected should remove its folder, and
        # notify the resource_tracker that its folder was correctly removed.
        util.Finalize(self, ClassCreatingAFolder.cleanup, args=(folder_name,))
        # If this session quits abruptly, the finalizer will not be called for
        # the instances of ClassCreatingAFolder that were still alive
        # before the shutdown. The resource_tracker comes into play, and removes
        # the folders associated to each of these resources.
        resource_tracker.register(
            folder_name,  # argument to shutil.rmtree
            "ClassCreatingAFolder")

    @staticmethod
    def cleanup(folder_name):
        resource_tracker.unregister(folder_name, "ClassCreatingAFolder")
        shutil.rmtree(folder_name)

# Tell the resource_tracker how to clean up resources created by
# ClassCreatingAFolder instances
resource_tracker.make_trackable("ClassCreatingAFolder", shutil.rmtree)

Typical resources that can be made trackable include memmapped objects, temporary folders, and so on. Any thoughts?
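For illustration, here is a minimal sketch of what make_trackable could boil down to under this design (hypothetical code: _CLEANUP_FUNCS is a private mapping inside multiprocessing.resource_tracker, and a real implementation would also have to propagate new entries to the already-running tracker process):

from multiprocessing import resource_tracker

def make_trackable(resource_type, cleanup_func):
    # Map a resource-type name to the callable the tracker uses to clean up
    # a leaked resource of that type. The tracker calls it with the registered
    # name as its only argument (e.g. shutil.rmtree with a folder path).
    resource_tracker._CLEANUP_FUNCS[resource_type] = cleanup_func
|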
Since the commit, there is a warning in CI and locally while running the tests. Travis CI:
Before commit:

./python.exe -X tracemalloc -m test --fail-env-changed test_multiprocessing_forkserver
== Tests result: SUCCESS ==
1 test OK.
Total duration: 2 min 21 sec

After commit f22cc69:

./python.exe -X tracemalloc -m test --fail-env-changed test_multiprocessing_forkserver
== Tests result: SUCCESS ==
1 test OK.
Total duration: 2 min 20 sec

== Tests result: SUCCESS ==
1 test OK.
Total duration: 2 min 26 sec |
Yes, one test I wrote in an unrelated commit does not unlink a memory segment. Now the ResourceTracker complains. Fixing it now. |
Actually, I was properly unlinking the shared_memory segments. The warning messages are due to bad interactions between the ResourceTracker and the SharedMemoryManager object. In this particular case, it is easy to tweak the problematic test slightly to avoid the warnings. I will focus on solving those bad interactions right after. |
test_shared_memory_cleaned_after_process_termination() uses time as a weak synchronization primitive:

# killing abruptly processes holding reference to a shared memory
# segment should not leak the given memory segment.
p.terminate()
p.wait()
time.sleep(1.0)  # wait for the OS to collect the segment
with self.assertRaises(FileNotFoundError):
    smm = shared_memory.SharedMemory(name, create=False)

Would it be possible to use a more reliable synchronization? Such tests usually fail randomly. |
As Victor said, the

time.sleep(1.0)  # wait for the OS to collect the segment

call is only a weak synchronization. Instead, we could retry the check in a loop until the segment is gone or a timeout expires. What do you think? |
We can do that, or maybe we can try to wait on the |
I like Olivier's pattern. Maybe we can slowly increase the sleep, so that the test finishes quickly when the resource goes away quickly:

deadline = time.monotonic() + 60.0
sleep = 0.010
while ...:
    if ...:   # the resource is gone: success
        break
    if time.monotonic() > deadline:
        ...   # assert error: the resource was never cleaned up
    sleep = min(sleep * 2, 5.0)
    time.sleep(sleep)

It's kind of a common pattern. Maybe it should be a helper in the test.support module.
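A runnable sketch of such a helper (the wait_until name and signature are made up for illustration; this is not an existing test.support API):

import time

def wait_until(predicate, timeout=60.0):
    # Poll `predicate` with an exponentially increasing sleep: the caller
    # returns almost immediately when the condition becomes true quickly,
    # while slow buildbots still get up to `timeout` seconds.
    deadline = time.monotonic() + timeout
    sleep = 0.010
    while not predicate():
        if time.monotonic() > deadline:
            raise AssertionError("condition not met within %s seconds" % timeout)
        time.sleep(sleep)
        sleep = min(sleep * 2, 5.0)

In the test, with name being the segment name from the quoted snippet, this could be used as:

def segment_is_gone():
    try:
        shared_memory.SharedMemory(name, create=False).close()
        return False
    except FileNotFoundError:
        return True

wait_until(segment_is_gone)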
I prefer to make sure that the resource goes away without inspecting multiprocessing internals. |
It seems like all known bugs are fixed, so I am closing the issue again. Thanks! |
The new test is not reliable, see: bpo-37244 "test_multiprocessing_forkserver: test_resource_tracker() failed on x86 Gentoo Refleaks 3.8". |