Ansible fetch from target do not use streaming file transfer #615

Closed
dw opened this issue Aug 9, 2019 · 17 comments

@dw (Owner) commented Aug 9, 2019

I think we've hit this issue.

2019-07-08 21:42:27,015 p=22109 u=root |  TASK [lms-support : Gather host support information] ***************************
2019-07-08 21:42:27,015 p=22109 u=root |  Monday 08 July 2019  21:42:27 +0000 (0:00:00.250)       0:00:11.641 ***********
2019-07-08 21:42:27,359 p=22109 u=root |  included: /opt/exabeam_installer/ansible/roles/lms/lms-support/tasks/run_support_on_host.yml for host1, host2
2019-07-08 21:42:27,479 p=22109 u=root |  TASK [lms-support : set_fact] **************************************************
2019-07-08 21:42:27,479 p=22109 u=root |  Monday 08 July 2019  21:42:27 +0000 (0:00:00.463)       0:00:12.105 ***********
2019-07-08 21:42:27,579 p=22109 u=root |  ok: [host1]
2019-07-08 21:42:27,596 p=22109 u=root |  ok: [host2]
2019-07-08 21:42:27,706 p=22109 u=root |  TASK [lms-support : Run support task on hosts] *********************************
2019-07-08 21:42:27,707 p=22109 u=root |  Monday 08 July 2019  21:42:27 +0000 (0:00:00.227)       0:00:12.333 ***********
2019-07-08 21:43:18,005 p=22109 u=root |  changed: [host2]
2019-07-08 21:43:47,972 p=22109 u=root |  changed: [host1]
2019-07-08 21:43:48,088 p=22109 u=root |  TASK [lms-support : Fetch output tar file] *************************************
2019-07-08 21:43:48,089 p=22109 u=root |  Monday 08 July 2019  21:43:48 +0000 (0:01:20.381)       0:01:32.715 ***********
2019-07-08 21:43:49,344 p=22109 u=root |  changed: [host2]
2019-07-08 21:43:54,815 p=22109 u=root |  ERROR! [task 29315] 21:43:54.814786 E mitogen: Maximum message size exceeded (got 218211706, max 134217728)

The task that triggered this was:

- name: Fetch output tar file
  fetch: dest="{{ support_root_dir }}/" src="{{ output_file }}" flat=yes
  ignore_errors: yes

Despite the ignore_errors: yes, Ansible exited/crashed here.

Interestingly, the size of the file that it was trying to transfer was 163658638 bytes (~157MiB), not 218211706 bytes.

host1 is the local host running Ansible, but it is connected over SSH.

Originally posted by @alexhexabeam in #279 (comment)

@dw (Owner) commented Aug 9, 2019

Hi @alexhexabeam,

Your problem is slightly different -- fetch is not implemented with streaming transfer. The max_message_size limit is a sanity check: the protocol should never be coping with messages anywhere near that large, as they trigger memory spikes on every machine that handles them.

This is missing functionality in the extension -- uploads to the target are already streaming, but downloads from the target are still message-based. Streaming downloads were never added simply because I forgot about them, and nobody complained... until now. :)
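For illustration, here is a rough sketch of the difference (my own illustration, not Mitogen's actual implementation; send and CHUNK_SIZE are hypothetical): a streaming transfer ships the file in bounded chunks, so no single message ever approaches the 134217728-byte (128 MiB) cap from the error above, whereas a message-based transfer serializes the whole file into one message.

CHUNK_SIZE = 128 * 1024  # bytes per message; an illustrative value

def stream_file(path, send):
    # Streaming: peak memory is bounded by CHUNK_SIZE, and every message
    # stays far below max_message_size regardless of file size.
    with open(path, 'rb') as fp:
        while True:
            chunk = fp.read(CHUNK_SIZE)
            if not chunk:
                break
            send(chunk)

def message_file(path, send):
    # Message-based: the whole file (plus serialization copies) is held in
    # memory and shipped as one message, which is what trips the sanity check.
    with open(path, 'rb') as fp:
        send(fp.read())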

Thanks for reporting this. It should be a very quick fix, so it will hopefully be included in 0.2.8.

@dw (Owner) commented Aug 9, 2019

The size expansion is almost certainly due to a horrible misfeature of Python 3, documented in #485. Are you using Python 3 on the controller or the target?

A fix for that is much more involved. If you have SSH compression enabled (which is the default with Mitogen), this size expansion should almost completely disappear in terms of bandwidth usage.

dw changed the title from "Ansible downloads do not use streaming file transfer" to "Ansible fetch from target do not use streaming file transfer" on Aug 9, 2019

@alexhexabeam commented Aug 9, 2019

@dw Thanks for the attention on this.

We are using Python 2.7.5 (the one that ships with CentOS 7) on both, so it's unlikely to be caused by a Python 3 issue.

Our ansible.cfg is as follows:

[defaults]
any_errors_fatal=True
ask_sudo_pass=False
callback_whitelist=profile_tasks
deprecation_warnings=False
forks=32
gather_timeout=60
hash_behavior=merge
host_key_checking=False
jinja2_extensions = jinja2.ext.do
log_path=../ansible.log
pipelining=True
retry_files_enabled=False
roles_path=roles/plt:roles/lms:roles/uba:roles/soar
timeout = 90
strategy_plugins = mitogen-master/ansible_mitogen/plugins/strategy
strategy = mitogen_linear

[ssh_connection]
# ControlPersist 5 hours
retries=5
ssh_args = -o ControlMaster=auto -o ControlPersist=18000s -o ForwardAgent=yes

[privilege_escalation]
become = True
become_method = sudo

[callback_profile_tasks]
task_output_limit = 10

@dw (Owner) commented Aug 9, 2019

Are both your controller and target machines on Python 2? The size expansion would be caused by the originator of the message -- in the case of a file download, that would be the target machine. If it is not in fact Python 3, then that is very interesting and requires further investigation.

@dw (Owner) commented Aug 9, 2019

The size expansion is almost exactly 33%, so it definitely looks like the same issue as #485.

@dw (Owner) commented Aug 9, 2019

Sorry, I see you wrote "on both". :) Apologies, I wasn't doubting your answer, just not reading it.

@alexhexabeam commented Aug 9, 2019

We are definitely using Python 2.7.5 on both. Red Hat (and in turn CentOS) backports all kinds of things, though, so maybe they grabbed some change that they shouldn't have.

We have had Mitogen disabled since we hit this, so we may have had a slightly older RPM at the time. Our current package is python-2.7.5-76.el7.x86_64.

dw added a commit that referenced this issue Aug 9, 2019

dw added a commit that referenced this issue Aug 9, 2019

dw added a commit that referenced this issue Aug 10, 2019

dw added a commit that referenced this issue Aug 10, 2019

issue #615: fix up FileService tests for new logic
Can't perform authorization test in the same process so easily any more
since it checks is_privileged

dw added a commit that referenced this issue Aug 10, 2019

Merge remote-tracking branch 'origin/dmw'
* origin/dmw:
  issue #615: fix up FileService tests for new logic
  issue #615: another Py3x fix.
  issue #615: Py3x fix.
  issue #615: update Changelog.
  issue #615: use FileService for target->controll file transfers

@dw (Owner) commented Aug 10, 2019

This is now on the master branch and will make it into the next release. To be notified when a new release is made, subscribe to https://networkgenomics.com/mail/mitogen-announce/

Thanks for reporting this!

@alexhexabeam commented Aug 17, 2019

@dw This does not seem to be fixed.

This is easily reproducible by creating a 512 MB file, then attempting to fetch it. I tested this with both compressible and incompressible data.

I created the files on the remote machine (host2 in this example):

dd if=/dev/urandom of=/opt/exabeam/data/urandomtest bs=1M count=512
dd if=/dev/zero of=/opt/exabeam/data/zerotest bs=1M count=512

Then ran this playbook:

- name: test
  any_errors_fatal: True
  hosts: host2
  gather_facts: no
  become: yes
  become_method: sudo
  tasks:
    - name: fetch urandomtest
      fetch:
        src: /opt/exabeam/data/urandomtest
        dest: /tmp/
        flat: yes
    - name: fetch zerotest
      fetch:
        src: /opt/exabeam/data/zerotest
        dest: /tmp/
        flat: yes

Note that it hung after the error until I CTRL-C'd it after several minutes.

(.env) [exabeam@dev-20190816-233324-1 ansible]$ ansible-playbook -i ../inventory playbooks/test.yml

PLAY [test] **************************************************************************************************************************************************************************************************************************************************************************************************************************************************

TASK [fetch urandomtest] *************************************************************************************************************************************************************************************************************************************************************************************************************************************************
Saturday 17 August 2019  00:04:21 +0000 (0:00:00.299)       0:00:00.299 *******
ERROR! [task 11477] 00:04:40.856360 E mitogen: Maximum message size exceeded (got 715828094, max 134217728)
^CProcess WorkerProcess-1:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/multiprocessing/process.py", line 258, in _bootstrap
 [ERROR]: User interrupted execution

    self.run()
  File "/opt/exabeam_installer/ansible/mitogen/ansible_mitogen/strategy.py", line 174, in wrap_worker__run
    lambda: worker__run(self)
  File "/opt/exabeam_installer/ansible/mitogen/mitogen/core.py", line 633, in _profile_hook
    return func(*args)
  File "/opt/exabeam_installer/ansible/mitogen/ansible_mitogen/strategy.py", line 174, in <lambda>
    lambda: worker__run(self)
  File "/opt/exabeam_installer/.env/lib/python2.7/site-packages/ansible/executor/process/worker.py", line 118, in run
    self._final_q
  File "/opt/exabeam_installer/.env/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 140, in run
    res = self._execute()
  File "/opt/exabeam_installer/.env/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 612, in _execute
    result = self._handler.run(task_vars=variables)
  File "/opt/exabeam_installer/ansible/mitogen/ansible_mitogen/mixins.py", line 116, in run
    return super(ActionModuleMixin, self).run(tmp, task_vars)
  File "/opt/exabeam_installer/.env/lib/python2.7/site-packages/ansible/plugins/action/fetch.py", line 95, in run
    slurpres = self._execute_module(module_name='slurp', module_args=dict(src=source), task_vars=task_vars)
  File "/opt/exabeam_installer/ansible/mitogen/ansible_mitogen/mixins.py", line 359, in _execute_module
    timeout_secs=self.get_task_timeout_secs(),
  File "/opt/exabeam_installer/ansible/mitogen/ansible_mitogen/planner.py", line 503, in invoke
    kwargs=planner.get_kwargs(),
  File "/opt/exabeam_installer/ansible/mitogen/ansible_mitogen/connection.py", line 445, in call
    return self._rethrow(recv)
  File "/opt/exabeam_installer/ansible/mitogen/ansible_mitogen/connection.py", line 431, in _rethrow
    return recv.get().unpickle()
  File "/opt/exabeam_installer/ansible/mitogen/mitogen/core.py", line 1177, in get
    msg = self._latch.get(timeout=timeout, block=block)
  File "/opt/exabeam_installer/ansible/mitogen/mitogen/core.py", line 2637, in get
    return self._get_sleep(poller, timeout, block, rsock, wsock, cookie)
  File "/opt/exabeam_installer/ansible/mitogen/mitogen/core.py", line 2654, in _get_sleep
    woken = list(poller.poll(timeout))
  File "/opt/exabeam_installer/ansible/mitogen/mitogen/parent.py", line 936, in _poll
    events, _ = mitogen.core.io_op(self._pollobj.poll, timeout)
  File "/opt/exabeam_installer/ansible/mitogen/mitogen/core.py", line 553, in io_op
    return func(*args), None
KeyboardInterrupt

The behavior is the same for the /dev/urandom data as for the /dev/zero data.

dw reopened this on Aug 17, 2019

@alexhexabeam commented Aug 17, 2019

Some more details about our environment:

We are using the system Python, but have a virtualenv with several modules installed.

(.env) [exabeam@dev-20190816-233324-1 ansible]$ rpm -qa | grep python
python-ipaddress-1.0.16-2.el7.noarch
audit-libs-python-2.8.4-4.el7.x86_64
python2-rsa-3.4.1-1.el7.noarch
python-google-compute-engine-2.8.14-1.el7.noarch
python-decorator-3.4.0-3.el7.noarch
python-urlgrabber-3.10-9.el7.noarch
dbus-python-1.1.1-9.el7.x86_64
libxml2-python-2.9.1-6.el7_2.3.x86_64
python-six-1.9.0-2.el7.noarch
python-chardet-2.2.1-1.el7_1.noarch
python-backports-1.0-8.el7.x86_64
python-setuptools-0.9.8-7.el7.noarch
python-urllib3-1.10.2-5.el7.noarch
python2-boto-2.45.0-3.el7.noarch
libsemanage-python-2.5-14.el7.x86_64
python-IPy-0.75-6.el7.noarch
policycoreutils-python-2.5-29.el7_6.1.x86_64
python-libs-2.7.5-77.el7_6.x86_64
python-slip-0.4.0-4.el7.noarch
python-linux-procfs-0.4.9-4.el7.noarch
python-schedutils-0.4-6.el7.x86_64
python-configobj-4.7.2-7.el7.noarch
python-gobject-base-3.22.0-1.el7_4.1.x86_64
python-pycurl-7.19.0-19.el7.x86_64
python-jsonpatch-1.2-4.el7.noarch
python-prettytable-0.7.2-3.el7.noarch
python-jinja2-2.7.2-3.el7_6.noarch
python-slip-dbus-0.4.0-4.el7.noarch
python-kitchen-1.1.1-5.el7.noarch
python-pyudev-0.15-9.el7.noarch
python-perf-3.10.0-957.12.1.el7.x86_64
rpm-python-4.11.3-35.el7.x86_64
python2-pyasn1-0.1.9-7.el7.noarch
python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch
python-requests-2.6.0-1.el7_1.noarch
python-2.7.5-77.el7_6.x86_64
newt-python-0.52.15-4.el7.x86_64
libselinux-python-2.5-14.1.el7.x86_64
python-iniparse-0.4-9.el7.noarch
python-markupsafe-0.11-10.el7.x86_64
python-jsonpointer-1.9-2.el7.noarch
python-babel-0.9.6-8.el7.noarch
python-firewall-0.5.3-5.el7.noarch
(.env) [exabeam@dev-20190816-233324-1 ansible]$ pip freeze
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
ansible==2.7.11
asn1crypto==0.24.0
backports.ssl-match-hostname==3.5.0.1
bcrypt==3.1.4
cffi==1.11.5
click==6.7
cryptography==2.3.1
docker-py==1.6.0
docopt==0.6.1
elasticsearch==6.2.0
enum34==1.1.6
Fabric==1.14.0
Flask==1.0.2
funcsigs==1.0.2
gevent==1.3.4
greenlet==0.4.13
idna==2.7
ipaddress==1.0.18
itsdangerous==0.24
Jinja2==2.10
MarkupSafe==0.23
marshmallow==3.0.0rc5
mock==2.0.0
parallel-ssh==1.6.3
paramiko==2.4.1
pbr==4.0.4
pem==18.1.0
pyasn1==0.4.3
pycparser==2.18
pyhocon==0.3.31
pymongo==3.6.0
PyNaCl==1.2.1
pyOpenSSL==18.0.0
pyparsing==2.2.0
python-dateutil==2.4.2
PyYAML==3.12
requests==2.9.1
six==1.10.0
ssh2-python==0.14.0
urllib3==1.22
websocket-client==0.35.0
Werkzeug==0.14.1

@alexhexabeam commented Aug 17, 2019

This was with mitogen 93c97a9

@dw (Owner) commented Aug 17, 2019

Ansible's fetch action plug-in internally resorts to using the slurp module when become is active, presumably to work around filesystem permission problems, since it can't sudo from an SFTP session.

This means your 512 MB test file is getting base64-encoded and held in memory on both the controller and the target. What a joke.

Let me keep this one open; I might just reimplement fetch, since it's a straightforward module.
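For reference, the slurp path boils down to something like the sketch below (a paraphrase for illustration, not the actual module source): the entire file is read into memory, base64-encoded, and the encoded string is embedded in the JSON module result that travels back to the controller as a single message.

import base64
import json

def slurp_like(src):
    # Read the entire file into memory...
    with open(src, 'rb') as fp:
        raw = fp.read()
    # ...then keep a second, ~33% larger copy as base64...
    encoded = base64.b64encode(raw)
    # ...then a third copy inside the JSON-serialized module result.
    return json.dumps({
        'source': src,
        'encoding': 'base64',
        'content': encoded.decode('ascii'),
    })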

@alexhexabeam commented Aug 17, 2019

@dw Thanks for the quick response. I've confirmed that we only hit this with become active.

@dw (Owner) commented Aug 17, 2019

This also explains the strange 33% size increase 'bug' we saw before: it's base64 encoding. For a 512 MiB file, Ansible will read the whole 512 MiB into a string and then base64-encode it. The base64-encoded string is then JSON-serialized to go in the task output.

Looking at slurp.py, both string references are still alive at exit, along with the JSON representation, so a 512 MiB copy will cause a roughly 1.83 GiB memory usage spike on both machines.
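The numbers line up with plain base64 arithmetic (my own back-of-the-envelope check, not something measured inside Mitogen):

# Base64 encodes every 3 input bytes as 4 output bytes.
original = 163658638                  # size of the file from the original report
encoded = 4 * ((original + 2) // 3)   # 218211520 -- within ~200 bytes of the
                                      # 218211706-byte message that was rejected;
                                      # the remainder is the JSON result wrapper.

# For the 512 MiB reproduction: raw string + base64 copy + JSON-serialized copy
# are all alive at once.
mib = 1024 * 1024
raw = 512 * mib
b64 = 4 * ((raw + 2) // 3)
spike = raw + 2 * b64                 # 1968526680 bytes
print(spike / float(1024 ** 3))       # ~1.83 GiB held at once on each end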

The library is supposed to send 'dead messages' to the intended receiver when a message is dropped for exceeding the size limit; either that was never implemented or there is some problem with the implementation. So that's a separate bug -- your run should not have hung.

It's possible to increase the message size limit, but that does not solve anything: it just kicks the can down the road and makes problems like this one harder to spot. It's tempting to bump the limit to 1 GiB or similar, but again that solution does not sit well with me.

As a more general solution to every situation where Ansible might pull something like this, we really want huge module outputs to be streamed just like file transfers. I might open a separate ticket for that.

dw added a commit that referenced this issue Aug 17, 2019

issue #615: ansible: import Ansible fetch.py action plug-in
From ansible/ansible#9773a1f2896a914d237cb9926e3b5cdc0f004d1a

dw added a commit that referenced this issue Aug 17, 2019

@dw (Owner) commented Aug 17, 2019

Your example playbook is now working on the 'dmw' branch. It may take a few days to reach master, but this will be in 0.2.8.

Runtime to copy both files localhost->localhost drops from 47 seconds to 6.7 seconds. I think we can call this a win.

Note that if you're copying huge files like this, you almost certainly want mitogen_ssh_compression: false in your group vars.

dw added a commit that referenced this issue Aug 17, 2019

dw added a commit that referenced this issue Aug 17, 2019

issue #615: ensure 4GB max_message_size is configured for task workers.
This 4GB limit was already set for MuxProcess and inherited by all
descendents including the context running on the target host, but it was
not applied to the WorkerProcess router.

That explains why the error from the ticket is being raised by the
router within the WorkerProcess rather than the router on the original
target.

dw added a commit that referenced this issue Aug 17, 2019

dw added a commit that referenced this issue Aug 17, 2019

dw added a commit that referenced this issue Aug 17, 2019

dw added a commit that referenced this issue Aug 17, 2019

issue #615: remove meaningless test
It has been dead code since at least 2015

dw added a commit that referenced this issue Aug 17, 2019

issue #615: remove meaningless test
It has been dead code since at least 2015

dw added a commit that referenced this issue Aug 17, 2019

Merge remote-tracking branch 'origin/dmw'
* origin/dmw:
  issue #615: ensure 4GB max_message_size is configured for task workers.
  issue #615: update Changelog.
  issue #615: route a dead message to recipients when no reply is expected
  issue #615: fetch_file() might be called with AnsibleUnicode.
  issue #615: redirect 'fetch' action to 'mitogen_fetch'.
  issue #615: extricate slurp brainwrong from mitogen_fetch
  issue #615: ansible: import Ansible fetch.py action plug-in
  issue #533: include object identity of Stream in repr()
  docs: lots more changelog
  issue #595: add buildah to docs and changelog.
  docs: a few more internals.rst additions

dw added a commit that referenced this issue Aug 17, 2019

Merge remote-tracking branch 'origin/dmw'
* origin/dmw:
  issue #533: update routing to account for DEL_ROUTE propagation race
  tests: use defer_sync() Rather than defer() + ancient sync_with_broker()
  tests: one case from doas_test was invoking su
  tests: hide memory-mapped files from lsof output
  issue #615: remove meaningless test
  issue #625: ignore SIGINT within MuxProcess
  issue #625: use exec() instead of subprocess in mitogen_ansible_playbook
  issue #615: regression test
  issue #615: update Changelog.

@dw (Owner) commented Aug 17, 2019

@alexhexabeam Please reopen if somehow this still doesn't work, but become+fetch has its own test now. All the best!

dw closed this on Aug 17, 2019

dw added a commit that referenced this issue Aug 18, 2019

Merge remote-tracking branch 'origin/v028' into stable
* origin/v028: (383 commits)
  Bump version for release.
  docs: update Changelog for 0.2.8.
  issue #627: add test and tweak Reaper behaviour.
  docs: lots more changelog concision
  docs: changelog concision
  docs: more changelog tweaks
  docs: reorder chapters
  docs: versionless <title>
  docs: update supported Ansible version, mention unsupported features
  docs: changelog fixes/tweaks
  issue #590: update Changelog.
  issue #621: send ADD_ROUTE earlier and add test for early logging.
  issue #590: whoops, import missing test modules
  issue #590: rework ParentEnumerationMethod to recursively handle bad modules
  issue #627: reduce the default pool size in a child to 2.
  tests: add a few extra service tests.
  docs: some more hyperlink joy
  docs: more hyperlinks
  docs: add domainrefs plugin to make link aliases everywhere \o/
  docs: link IS_DEAD in changelog
  docs: tweaks to better explain changelog race
  issue #533: update routing to account for DEL_ROUTE propagation race
  tests: use defer_sync() Rather than defer() + ancient sync_with_broker()
  tests: one case from doas_test was invoking su
  tests: hide memory-mapped files from lsof output
  issue #615: remove meaningless test
  issue #625: ignore SIGINT within MuxProcess
  issue #625: use exec() instead of subprocess in mitogen_ansible_playbook
  issue #615: regression test
  issue #615: update Changelog.
  issue #615: ensure 4GB max_message_size is configured for task workers.
  issue #615: update Changelog.
  issue #615: route a dead message to recipients when no reply is expected
  issue #615: fetch_file() might be called with AnsibleUnicode.
  issue #615: redirect 'fetch' action to 'mitogen_fetch'.
  issue #615: extricate slurp brainwrong from mitogen_fetch
  issue #615: ansible: import Ansible fetch.py action plug-in
  issue #533: include object identity of Stream in repr()
  docs: lots more changelog
  issue #595: add buildah to docs and changelog.
  docs: a few more internals.rst additions
  ci: update to Ansible 2.8.3
  tests: another random string changed in 2.8.3
  tests: fix sudo_flags_failure for Ansible 2.8.3
  ci: fix procps command line format warning
  Whoops, merge together lgtm.yml and .lgtm.yml
  issue #440: log Python version during bootstrap.
  docs: update changelog
  issue #558: disable test on OSX to cope with boundless mediocrity
  issue #558, #582: preserve remote tmpdir if caller did not supply one
  issue #613: must await 'exit' and 'disconnect' in wait=False test
  Import LGTM config to disable some stuff
  Fix up another handful of LGTM errors.
  tests: work around AnsibleModule.run_command() race.
  docs: mention another __main__ safeguard
  docs: tweaks
  formatting error
  docs: make Sphinx install soft fail on Python 2.
  issue #598: allow disabling preempt in terraform
  issue #598: update Changelog.
  issue #605: update Changelog.
  issue #605: ansible: share a sem_t instead of a pthread_mutex_t
  issue #613: add tests for all the weird shutdown methods
  Add mitogen.core.now() and use it everywhere; closes #614.
  docs: move decorator docs into core.py and use autodecorator
  preamble_size: make it work on Python 3.
  docs: upgrade Sphinx to 2.1.2, require Python 3 to build docs.
  docs: fix Sphinx warnings, add LogHandler, more docstrings
  docs: tidy up some Changelog text
  issue #615: fix up FileService tests for new logic
  issue #615: another Py3x fix.
  issue #615: Py3x fix.
  issue #615: update Changelog.
  issue #615: use FileService for target->controll file transfers
  issue #482: another Py3 fix
  ci: try removing exclude: to make Azure jobs work again
  compat: fix Py2.4 SyntaxError
  issue #482: remove 'ssh' from checked processes
  ci: Py3 fix
  issue #279: add one more test for max_message_size
  issue #482: ci: add stray process checks to all jobs
  tests: fix format string error
  core: MitogenProtocol.is_privileged was not set in children
  issue #482: tests: fail DockerMixin tests if stray processes exist
  docs: update Changelog.
  issue #586: update Changelog.
  docs: update Changelog.
  [security] core: undirectional routing wasn't respected in some cases
  docs: tidy up Select.all()
  issue #612: update Changelog.
  master: fix TypeError
  pkgutil: fix Python3 compatibility
  parent: use protocol for getting remote_id
  docs: merge signals.rst into internals.rst
  os_fork: do not attempt to cork the active thread.
  parent: fix get_log_level() for split out loggers.
  issue #547: fix service_test failures.
  issue #547: update Changelog.
  issue #547: core/service: race/deadlock-free service pool init
  docs: update Changelog.
  ...

@alexhexabeam commented Aug 20, 2019

I've confirmed the above playbook works great with Mitogen v0.2.8. Memory usage during the transfer stays stable regardless of the file size (tested with up to 20GB).

Thanks again for your great support, and for making this excellent tool.
