
Use the resource_tracker to unlink shared temporary memmaps #966

Merged: 81 commits merged into joblib:master on May 3, 2020

Conversation

pierreglaser
Contributor

@pierreglaser pierreglaser commented Dec 4, 2019

Closes #806

Roadmap:

  • Make use of the new resource_tracker API to clean up shared, temporary resources (e.g. the memory-mapped files backing np.memmap objects) of long-lasting processes as early as possible.

    • This was previously done in an aggressive, non-cross-OS manner by deleting the whole temporary folder entry from the file system at the end of a Parallel call. On Linux, this only removes one reference to each underlying temporary file; the files are actually collected once all other references to them (open file handles) are closed. On Windows, however, removing an entry from the filesystem is only possible once all file handles to that file are closed. This typically does not happen if:

      • The function executed by a child returns an object referencing a temporary memmap (for instance, a slice).
      • In long-lasting processes, the garbage collector is delayed because of complex reference schemes involving memmaps (for instance, cyclic references).
    • The proposed solution instead relies on the capability of the resource_tracker to carry out reference counting (and resource unlinking once the reference count reaches 0) of named resources. Parent and child processes now delegate resource management to the resource_tracker, which acts as a synchronizer and only deletes resources when all of their associated file handles are closed (see the sketch after this roadmap).

    • This behavior is only implemented for the loky backend, as joblib's multiprocessing backend does not have a shared resource_tracker between the parent and child processes. This does not matter much, though: multiprocessing-based pools terminate child processes at the end of each pool.map() call, so all temporary named resources have a limited lifetime anyway.

  • In addition, in order to limit the complexity of resource management, we decided to have child processes convert temporary joblib memmaps back to numpy arrays when sending data to the parent process.

  • Regarding the temporary folder itself (as opposed to the files it contains), removing it as early as possible is less critical. To limit potentially bad interactions between folder unlinking and the unlinking of the files it contains, we implement a "best effort" deletion of joblib's temporary folder at the end of each Parallel call if it is empty, and otherwise register an atexit finalizer to remove it.

  • Add comprehensive tests.
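A minimal sketch of the mechanism described above, assuming the vendored loky resource_tracker module exposes register and maybe_unlink helpers taking a resource name and a resource type (the exact names and signatures live in joblib/externals/loky/backend/resource_tracker.py and may differ):

```python
import numpy as np
from joblib.externals.loky.backend import resource_tracker

# Parent side: after dumping a large array to a temporary file, register
# the backing file with the resource_tracker. Every process that later
# unpickles a memmap on this file registers it too, incrementing the
# tracker's reference count for the file.
filename = "/tmp/joblib_memmapping_folder_xxx/array.pkl"  # illustrative path
resource_tracker.register(filename, "file")

# Any process that is done with the memmap sends a maybe_unlink request;
# the tracker decrements the refcount and unlinks the file once it
# reaches 0, i.e. once no process holds a handle on it anymore.
resource_tracker.maybe_unlink(filename, "file")

# Child side: convert a temporary memmap back to a plain in-memory array
# before returning it, so that the parent never opens a file handle.
def as_plain_array(result):
    if isinstance(result, np.memmap):
        return np.asarray(result).copy()  # the copy detaches from the file
    return result
```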

@pierreglaser
Contributor Author

Note that in order to work, this PR will need the improvements brought by tomMoral/loky#228.

@pierreglaser
Contributor Author

Rebased with master + vendored the latest loky version, including joblib/loky#228.

@ogrisel
Contributor

ogrisel commented Feb 21, 2020

I pushed the new CI into your branch :)

@codecov

codecov bot commented Feb 25, 2020

Codecov Report

❗ No coverage uploaded for pull request base (master@c5ebc67).
The diff coverage is 33.73%.


@@            Coverage Diff            @@
##             master     #966   +/-   ##
=========================================
  Coverage          ?   72.36%           
=========================================
  Files             ?       46           
  Lines             ?     6749           
  Branches          ?        0           
=========================================
  Hits              ?     4884           
  Misses            ?     1865           
  Partials          ?        0
Impacted Files Coverage Δ
joblib/parallel.py 83.51% <ø> (ø)
joblib/disk.py 88.46% <100%> (ø)
joblib/test/test_hashing.py 97.92% <100%> (ø)
joblib/test/test_parallel.py 56.37% <17.64%> (ø)
joblib/_parallel_backends.py 49.81% <25%> (ø)
joblib/_memmapping_reducer.py 50.26% <28.57%> (ø)
joblib/pool.py 27.73% <33.33%> (ø)
joblib/test/test_memmapping.py 40.1% <37.5%> (ø)
joblib/executor.py 35.29% <40%> (ø)
joblib/numpy_pickle.py 91.98% <44.44%> (ø)
... and 1 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update c5ebc67...c7b0d62

@codecov

codecov bot commented Feb 25, 2020

Codecov Report

Merging #966 into master will decrease coverage by 22.41%.
The diff coverage is n/a.


@@             Coverage Diff             @@
##           master     #966       +/-   ##
===========================================
- Coverage   94.78%   72.36%   -22.42%     
===========================================
  Files          45       46        +1     
  Lines        6803     6749       -54     
===========================================
- Hits         6448     4884     -1564     
- Misses        355     1865     +1510     
Impacted Files Coverage Δ
joblib/test/test_dask.py 4.43% <0.00%> (-90.67%) ⬇️
joblib/_dask.py 23.88% <0.00%> (-71.70%) ⬇️
joblib/_memory_helpers.py 3.07% <0.00%> (-70.77%) ⬇️
joblib/pool.py 27.73% <0.00%> (-67.10%) ⬇️
joblib/test/test_memmapping.py 40.10% <0.00%> (-59.11%) ⬇️
joblib/_multiprocessing_helpers.py 26.47% <0.00%> (-58.83%) ⬇️
joblib/backports.py 35.00% <0.00%> (-56.67%) ⬇️
joblib/executor.py 35.29% <0.00%> (-64.71%) ⬇️
joblib/test/test_store_backends.py 50.00% <0.00%> (-41.43%) ⬇️
joblib/_parallel_backends.py 49.81% <0.00%> (-45.02%) ⬇️
... and 28 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update de491a8...c7b0d62

@codecov

codecov bot commented Feb 25, 2020

Codecov Report

Merging #966 into master will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master     #966   +/-   ##
=======================================
  Coverage   94.19%   94.19%           
=======================================
  Files          47       47           
  Lines        6649     6649           
=======================================
  Hits         6263     6263           
  Misses        386      386           

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update ce67429...ce67429

@pierreglaser
Contributor Author

pierreglaser commented Feb 26, 2020

The implementation works on POSIX systems, and works fine for simple cases on my Windows VM, but I still need to debug some more complex tests involving memmaps. I am also going to write a summary of my discussions with @ogrisel and post it here for the record.

@ogrisel ogrisel self-requested a review February 27, 2020 16:57
@albertcthomas
Contributor

Thanks @pierreglaser and @ogrisel for this!

@pierreglaser
Contributor Author

pierreglaser commented Mar 6, 2020

I'm running into non-deterministic deadlocks during pool termination in test_pool_memmap_with_big_offset, but I can reproduce them on joblib master... For some reason, the deadlock is deterministic on Windows, but not on POSIX systems.

@pierreglaser pierreglaser changed the title [WIP] Memmap reference counting Memmap reference counting Mar 7, 2020
@pierreglaser pierreglaser changed the title Memmap reference counting Use the resource_tracker to unlink shared temporary memmaps Mar 7, 2020
@albertcthomas
Contributor

This PR should also close #942 and #944.

@ogrisel ogrisel left a comment

First pass of comments. More to come later today or tomorrow.

@ogrisel
Contributor

ogrisel commented Mar 11, 2020

I merged master to get the fix for the macos CI.

@ogrisel ogrisel left a comment

Here is another pass. I think we should never use a global JOBLIB_MMAPS module variable in the parent.

To track temporary memmaps on the worker side (and avoid sending them back as memmaps in the backward reducer), we should probably instead set a "_joblib_temporary = True" attribute on them at unpickle time, as sketched below.
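A hedged sketch of that suggestion; the helper names are illustrative, and joblib's actual reducers live in joblib/_memmapping_reducer.py:

```python
import numpy as np

def rebuild_temporary_memmap(filename, dtype, shape, mode='r', offset=0):
    # Illustrative worker-side unpickling hook: rebuild the memmap and tag
    # it, so the backward reducer can recognize joblib-managed temporary
    # memmaps without consulting a global module variable.
    m = np.memmap(filename, dtype=dtype, shape=shape, mode=mode,
                  offset=offset)
    m._joblib_temporary = True  # attribute name taken from the suggestion
    return m

def reduce_temporary_memmap(m):
    # Illustrative backward reducer: tagged memmaps are sent back by value
    # as plain in-memory arrays, so the parent never maps the file.
    if getattr(m, '_joblib_temporary', False):
        return np.asarray(m).copy().__reduce__()
    return m.__reduce__()
```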

```diff
-    def __init__(self, max_nbytes, temp_folder, mmap_mode, verbose=0,
-                 prewarm=True):
+    def __init__(self, max_nbytes, temp_folder, mmap_mode,
+                 track_memmap_in_child, verbose=0, prewarm=True):
```
Contributor

track_memmap_in_child should be renamed to unlink_memmap_on_gc_collect as ArrayMemmapReducer has no notion of parent and child by itself. Only the callers do.

@pierreglaser pierreglaser (Contributor Author) Mar 31, 2020

I'll go with unlink_on_gc_collect. I like it as an argument of the memmap re-constructors, but for the reducer itself I prefer give_ownership_to_loader. WDYT?

@ogrisel ogrisel left a comment

Maybe the code would be simpler to follow if we had two classes: ArrayMemmapForwardReducer and a simpler ArrayMemmapBackwardReducer.

That might induce a bit of code duplication, but it would also probably simplify the argument handling. WDYT?

@pierreglaser
Contributor Author

Yes, it's definitely possible to do this instead (a rough sketch follows below). I think the most important benefit is that some branches of ArrayMemmapReducer are designed exclusively for parent-side serialization and some exclusively for child-side serialization, but:

  • because they are exclusive, they can be split apart;
  • splitting them would be beneficial because, unless you know the code very well, it is not explicit which branch is for the child and which is for the parent.
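A rough sketch of the proposed split; the class names come from the suggestion above, while the constructor arguments and reduce logic are simplified placeholders:

```python
import numpy as np

class ArrayMemmapForwardReducer:
    # Parent-side reducer sketch: large arrays are dumped to a temporary
    # file and pickled by reference (filename, dtype, shape, ...).
    def __init__(self, max_nbytes, temp_folder, mmap_mode):
        self.max_nbytes = max_nbytes
        self.temp_folder = temp_folder
        self.mmap_mode = mmap_mode

    def __call__(self, a):
        if a.nbytes > self.max_nbytes:
            # dump a to a file under temp_folder, register the file with
            # the resource_tracker, and return a rebuild-from-file tuple
            raise NotImplementedError("elided in this sketch")
        return a.__reduce__()  # small arrays are pickled by value


class ArrayMemmapBackwardReducer:
    # Child-side reducer sketch: joblib-managed temporary memmaps are
    # sent back to the parent by value, as plain ndarrays.
    def __call__(self, m):
        return np.asarray(m).copy().__reduce__()
```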

@ogrisel ogrisel self-requested a review March 14, 2020 11:59
@pierreglaser
Contributor Author

@ogrisel I still have to push a few changes before you can review again.

@albertcthomas
Contributor

I have failing tests when running pytest joblib\test\test_memmapping.py.

But I guess I should wait for the loky update?

let's release loky 2.7.0 and sync it up in this PR and then merge this PR.

@pierreglaser
Contributor Author

pierreglaser commented Apr 23, 2020

But I guess I should wait for the loky update?

This PR should work out of the box: it vendors the loky version with the necessary changes.
The sync-up part is probably just a couple of metadata changes AFAICT.

Which tests are failing? Can you find an easy explanation of why the CI passes but the test suite fails locally?

@albertcthomas
Contributor

Output of pytest joblib\test\test_memmapping.py (I also get a lot of DEBUG output):

============================================= FAILURES ==============================================
_____________________ test_permission_error_windows_memmap_sent_to_parent[loky] _____________________

backend = 'loky'

    @with_numpy
    @parametrize("backend", ["multiprocessing", "loky"])
    def test_permission_error_windows_memmap_sent_to_parent(backend):
        # Second non-regression test for:
        # https://github.com/joblib/joblib/issues/806
        # previously, child process would not convert temporary memmaps to numpy
        # arrays when sending the data back to the parent process. This would lead
        # to permission errors on windows when deleting joblib's temporary folder,
        # as the memmaped files handles would still opened in the parent process.
        cmd = '''if 1:
            import os

            import numpy as np

            from joblib import Parallel, delayed
            from testutils import return_slice_of_data

            data = np.ones(int(2e6))

            if __name__ == '__main__':
                slice_of_data = Parallel(n_jobs=2, verbose=5, backend='{b}')(
                    delayed(return_slice_of_data)(data, 0, 20) for _ in range(10))
        '''.format(b=backend)

        env = os.environ.copy()
        env['PYTHONPATH'] = os.path.dirname(__file__)
        p = subprocess.Popen([sys.executable, '-c', cmd],
                             stderr=subprocess.PIPE,
                             stdout=subprocess.PIPE, env=env)
        p.wait()
        out, err = p.communicate()
        assert p.returncode == 0, err
        assert out == b''
        if sys.version_info[:3] not in [(3, 8, 0), (3, 8, 1)]:
            # In early versions of Python 3.8, a reference leak
            # https://github.com/cloudpipe/cloudpickle/issues/327, holds references
            # to pickled objects, generating race condition during cleanup
            # finalizers of joblib and noisy resource_tracker outputs.
>           assert b'resource_tracker' not in err
E           assert b'resource_tracker' not in b"[Parallel(n_jobs=2)]: Using backend LokyBackend with 2 concurrent workers.\r\n[Parallel(n_jobs=2)]: Done  10 out of ...kl: FileNotFoundError(2, 'The system cannot find the path specified')\r\n  'resource_tracker: %s: %r' % (name, e))\r\n"

joblib\test\test_memmapping.py:381: AssertionError
________________ test_resource_tracker_silent_when_reference_cycles[multiprocessing] ________________

backend = 'multiprocessing'

    @with_numpy
    @with_multiprocessing
    @parametrize("backend", ["multiprocessing", param("loky", marks=xfail)])
    def test_resource_tracker_silent_when_reference_cycles(backend):
        # There is a variety of reasons that can make joblib with loky backend
        # output noisy warnings when a reference cycle is preventing a memmap from
        # being garbage collected. Especially, joblib's main process finalizer
        # deletes the temporary folder if it was not done before, which can
        # interact badly with the resource_tracker. We don't risk leaking any
        # resources, but this will likely make joblib output a lot of low-level
        # confusing messages. This test is marked as xfail for now: but a next PR
        # should fix this behavior.
        # Note that the script in ``cmd`` is the exact same script as in
        # test_permission_error_windows_reference_cycle.
        cmd = """if 1:
            import numpy as np
            from joblib import Parallel, delayed


            data = np.random.rand(int(2e6)).reshape((int(1e6), 2))

            # Build a complex cyclic reference that is likely to delay garbage
            # collection of the memmapped array in the worker processes.
            first_list = current_list = [data]
            for i in range(10):
                current_list = [current_list]
            first_list.append(current_list)

            if __name__ == "__main__":
                results = Parallel(n_jobs=2, backend="{b}")(
                    delayed(len)(current_list) for i in range(10))
                assert results == [1] * 10
        """.format(b=backend)
        p = subprocess.Popen([sys.executable, '-c', cmd], stderr=subprocess.PIPE,
                             stdout=subprocess.PIPE)
        p.wait()
        out, err = p.communicate()
        assert p.returncode == 0, out.decode()
>       assert b"resource_tracker" not in err, err.decode()
E       AssertionError: ~\repos\joblib\joblib\externals\loky\backend\resource_tracker.py:303: UserWarning: resource_tracker: ~\AppData\Local\Temp\joblib_memmapping_folder_18632_2571083238816\18632-2571083638264-0a27cd2d120c4a09a79517f41e91bf00.pkl: FileNotFoundError(2, 'The system cannot find the path specified')
E           'resource_tracker: %s: %r' % (name, e))
E
E       assert b'resource_tracker' not in b"~\\repos\\joblib\\joblib\\externals\\loky\\backend\\resource_tracker.py:303: UserWarning: resource_track...kl: FileNotFoundError(2, 'The system cannot find the path specified')\r\n  'resource_tracker: %s: %r' % (name, e))\r\n"

joblib\test\test_memmapping.py:433: AssertionError
=================== 2 failed, 28 passed, 6 skipped, 1 xfailed in 61.70s (0:01:01) ===================

@albertcthomas
Contributor

The one that always fails is test_permission_error_windows_memmap_sent_to_parent[loky]; the failure of the other one looks random.

@pierreglaser
Contributor Author

I just sent you a Gitter message to work out the details outside of this thread.

@pierreglaser
Contributor Author

So after discussing with @albertcthomas, it looks like in some cases the unlink call triggered in the resource_tracker by a maybe_unlink request can predate the moment the mmap file handle is closed, yielding a PermissionError. I'm trying to reproduce this in the CI by repeating the test in question many times, without success so far.

As a first remark, I think we should downgrade the logging level of the resource_tracker errors caused by FileNotFoundErrors (i.e. when nothing is leaked). The reason is that in some cases, the Parallel call is so fast that the atexit finalizer deleting the temporary folder is run before the resource_tracker is created and processes its maybe_unlink calls, which results in noisy FileNotFoundErrors. This is pretty cheap to do, so I'm in favor of doing it.

With regards to addressing the PermissionError race condition:

  • Either we wait for user feedback and roll out a first release with the current code, to see whether @albertcthomas's situation is common or not.
  • We can (cheap but brittle) make the resource_tracker wait a bit before unlinking the file.
  • The ultimate solution would be to do what shared_memory does: rely on anonymous file mapping (using _winapi and mmap) to create our np.memmap objects, and let the OS clean up the resource when it is no longer used. In that case, we would not need the refcounting API of loky's resource_tracker, but it comes at the additional cost of maintaining low-level Windows-specific code (see the sketch below).
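For context, a minimal illustration of the anonymous-mapping idea. This much is cross-platform; actually sharing such a mapping between processes is where the Windows-specific handle machinery (_winapi) mentioned above would come in:

```python
import mmap
import numpy as np

# Anonymous mapping: fileno=-1 asks the OS for memory that is not backed
# by a named file. Once every handle to the mapping is closed, the OS
# frees it automatically, so no unlink (and no refcounting) is needed.
n_floats = int(1e6)
buf = mmap.mmap(-1, n_floats * np.dtype(np.float64).itemsize)
arr = np.frombuffer(buf, dtype=np.float64)
arr[:] = 1.0  # writable view over the anonymous mapping
```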

@albertcthomas
Contributor

Thanks very much for investigating this @pierreglaser! I just wanted to add that although the test fails because of the warnings associated with FileNotFoundError and PermissionError, the code does not crash and the temporary files are correctly deleted, which is already a great improvement.

@ogrisel
Contributor

ogrisel commented Apr 24, 2020

We can (cheap but brittle), make the resource_tracker wait a bit before unlinking the file.

+1 in favor of a Windows-specific retry mechanism in the resource tracker that would only log an error after a bunch of iterations of a retry/sleep loop (sketched below).
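A minimal sketch of such a retry loop, assuming a hypothetical unlink helper inside the resource tracker (the name and backoff policy are illustrative):

```python
import os
import time
import warnings

def unlink_with_retries(path, retries=10, delay=0.05):
    # On Windows, os.unlink raises PermissionError while another process
    # still holds a handle on the file, so retry with a growing sleep
    # before giving up and logging the error.
    for attempt in range(retries):
        try:
            os.unlink(path)
            return
        except FileNotFoundError:
            return  # already removed elsewhere: nothing was leaked
        except PermissionError:
            time.sleep(delay * (2 ** attempt))
    warnings.warn("resource_tracker: failed to unlink %s after %d retries"
                  % (path, retries))
```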

@ogrisel
Contributor

ogrisel commented Apr 24, 2020

But I am also in favor of merging this as it is, and doing a second PR for the noisy error messages caused by the race condition.

@ogrisel ogrisel merged commit 8a1dcdb into joblib:master May 3, 2020
@ogrisel
Contributor

ogrisel commented May 3, 2020

Merged!

@ogrisel
Contributor

ogrisel commented May 3, 2020

@pierreglaser I'll let you do the retry mechanism of #966 (comment) in another PR?

@pierreglaser
Contributor Author

Yes, let’s do this before releasing.

@pierreglaser
Contributor Author

Would you rather patch loky's resource_tracker from joblib, or change loky's file-unlinking function directly inside loky?

@Schmetzler

I get a lot of warnings if I use the new patch. I am working with a tkinter program in which the parallel execution is performed when I click a button. The first time it works flawlessly, but if I click the button again I get a lot of:

C:\Program Files\Python37\lib\site-packages\joblib\externals\loky\backend\resource_tracker.py:304: UserWarning: resource_tracker: C:\Users\user\AppData\Local\Temp\joblib_memmapping_folder_4012_9455e11b1bd348fc9ffba3dab6ca3a5a_043efbe74cc64c50b30d29037bfc2968\4012-1707040744072-866706e27f604181a3bf82ef536dce6b.pkl: PermissionError(13, 'Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen Prozess verwendet wird') 'resource_tracker: %s: %r' % (name, e)) (German for: "The process cannot access the file because it is being used by another process.")

and the execution gets really slow. My guess is that the tkinter app still holds on to some of the allocated memory.

Successfully merging this pull request may close these issues:

  • Windows permission error