
Docker Engine Memory leak #178

Closed
andresnatanael opened this issue Aug 8, 2016 · 123 comments

Comments

@andresnatanael

Expected behavior

After killing all the Docker containers, the com.docker.hyperkit process should free the used memory and return to its initial state (around 50 MB).

Actual behavior

After killing all the Docker containers, the com.docker.hyperkit process is still using 3.49 GB.

Information

Diagnostic ID: EB6AFE2E-34AA-4617-B849-D79863CDC40C
Docker for Mac: 1.12.0 (Build 10871)
macOS: Version 10.11.6 (Build 15G31)
[OK] docker-cli
[OK] app
[OK] moby-syslog
[OK] disk
[OK] virtualization
[OK] system
[OK] menubar
[OK] osxfs
[OK] db
[OK] slirp
[OK] moby-console
[OK] logs
[OK] vmnetd
[OK] env
[OK] moby
[OK] driver.amd64-linux

Steps to reproduce

  1. docker-compose up -d with the following compose file:

version: '2'
services:
  student:
    image: docker:dind
    ports:
      - "8000-8010"
    privileged: true
    volumes:
      - /tmp/docker-training:/docker-training

  2. docker-compose scale student=10

  3. docker-compose down

  4. docker ps -> no containers are running, but com.docker.hyperkit is still consuming a lot of memory (3.49 GB)

Only restarting the Docker engine virtual machine (hyperkit) frees the memory.

@ijc
Contributor

ijc commented Aug 9, 2016

Thanks for your report. Unfortunately, once memory has been touched by the Linux kernel inside the VM, it becomes populated RAM in the hyperkit process (via the usual OS page-faulting mechanisms), and there is no way for the guest kernel to indicate back to the hypervisor, and therefore to the host, when that RAM is free again so that those regions can be turned back into unallocated holes.

However, since that RAM is unused in the guest, nothing in the VM (and therefore nothing in the hyperkit process) should touch it, so I would expect it to eventually get swapped out to disk in favour of keeping actually useful/active data for other processes in RAM, just as for any large but idle process.

It sounds like you are checking the virtual address size (vsz in ps output) of the hyperkit process rather than the resident set size (rss). The latter should shrink for an idle hyperkit process as other processes request memory and hyperkit gets swapped out, while the former basically only grows and does not necessarily represent use of actual RAM.

I'm afraid that compacting the vsz is basically a won't-fix issue here, and I'm therefore going to close on that basis. If, however, you are observing the rss not shrinking (and there are other processes creating memory pressure, i.e. the memory appears to be somehow locked into RAM and not swappable), then please do update this ticket with details of the rss memory patterns you are observing and we can reopen and investigate that angle further.
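For anyone unsure which of the two figures they are looking at, here is a minimal sketch of my own (it assumes the macOS pgrep and ps commands are available and that the hyperkit process is running) that prints both vsz and rss for com.docker.hyperkit:

import subprocess

def hyperkit_vsz_rss_mib():
    # pgrep -f matches against the full command line; take the first matching PID.
    pid = subprocess.check_output(
        ["pgrep", "-f", "com.docker.hyperkit"], text=True
    ).split()[0]
    # BSD/macOS ps reports vsz and rss in KiB; the trailing '=' suppresses headers.
    vsz_kib, rss_kib = map(int, subprocess.check_output(
        ["ps", "-o", "vsz=,rss=", "-p", pid], text=True
    ).split())
    return vsz_kib / 1024, rss_kib / 1024

if __name__ == "__main__":
    vsz, rss = hyperkit_vsz_rss_mib()
    print("vsz: %.0f MiB, rss: %.0f MiB" % (vsz, rss))

If rss stays flat under real memory pressure while other processes get swapped out, that is the case worth reporting; a growing vsz on its own is expected.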

@eXamadeus

eXamadeus commented Jun 29, 2018

Hey all. Sorry to revive a dead thread, but I would like this ticket to be reopened. After looking into it, the reserved memory for the hyperkit process sits around 1.6-1.7 GB (based on my Docker memory settings) even when I try to apply significant memory pressure. (This is, of course, without any running containers.)

Output of docker info:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 56
Server Version: 18.03.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.87-linuxkit-aufs
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 3
Total Memory: 5.818GiB
Name: linuxkit-025000000001
ID: ZMMF:UZTZ:IHNE:2EIZ:TA4G:6P4L:SU3N:4OSK:UC5K:UM4M:DIKT:6ZXJ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 22
 Goroutines: 41
 System Time: 2018-06-29T18:06:39.2425914Z
 EventsListeners: 2
HTTP Proxy: docker.for.mac.http.internal:3128
HTTPS Proxy: docker.for.mac.http.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

I wrote a simple Python script to fill memory with junk, then spam-opened as many Google Chrome tabs as I could (the best way to apply memory pressure I could think of). Here is the script I used:

# Runs under Python 3; launch via unittest so test_waste starts the threads.
import sys
import threading
from unittest import TestCase

# Shared list that every thread appends to, so memory usage only ever grows.
items = []

class WastefulThread(threading.Thread):

    def run(self):

        # Append integers indefinitely; every 100k iterations also copy a
        # growing slice of the list, which allocates even more memory.
        for i in range(0, 10000000000):
            items.append(i)
            if i % 100000 == 0:
                sys.stdout.write('\r' + str(i))
                blah = items[0:i]

        for item in items:
            sys.stdout.write('\r{}'.format(item))


class WasteMemory(TestCase):

    def test_waste(self):
        # Start eight wasteful threads to chew through RAM as fast as possible.
        for i in range(8):
            thread = WastefulThread(name='Thread-{}'.format(i))
            thread.start()

and here is a gif showing the memory usage of the top four processes sorted by reserved memory (RES):

[animated GIF]

Here is a higher quality gifv link.

The memory footprint of the com.docker.hyperkit process stays constant at >1600 MB. This is the RES footprint. Also, note that there is SIGNIFICANT memory pressure applied to the system, so much so that the currently running "WasteMemory" process ends up dumping its memory to virtual (VIRT) quite a few times throughout the gif.

Here is an image of the memory after the wasteful process got to run a little longer...

[screenshot]

A larger span of the memory and swap usage:

[screenshot]

TLDR: It appears that the com.docker.hyperkit process does not free up reserved memory, even if significant memory pressure is applied. I would like this issue to be reopened and investigated as this is a significant amount of memory being reserved for what should be an inactive process.

@eXamadeus

I have also crawled through the related issues and haven't found anything that explains the cause or points to a solution for this issue. Please let me know if I can provide any more information. I believe the GIF I provided doesn't show memory fully maxed out (roughly 16 GB); however, during other tests it was almost 100% maxed through similar means, and the RES portion of the com.docker.hyperkit process remained unchanged.

@mkohlmyr

mkohlmyr commented Aug 8, 2018

Having Docker idle in my toolbar lands it at 1.3 GB of RAM. I really don't understand how that is even possible. This is very shortly after booting, before using any containers.

@iddan

iddan commented Sep 4, 2018

Can this be reopened please?

@kyprifog

kyprifog commented Sep 8, 2018

I just restarted my computer with no containers running and found hyperkit using 9 GB of RAM. Then, after restarting Docker, it was still using 3 GB. This needs to be reopened.

@gvbkr

gvbkr commented Sep 15, 2018

Yes, please reopen. With no containers running, ~5 GB of RAM is being used. This is significantly impacting the available resources and slowing down other things we need to run. A possible bug introduced in the latest version?

@AnnaKarinaNava

Same situation here: just restarted my laptop, no containers, and com.docker.hyperkit is using ~2 GB of RAM.

@vfontes

vfontes commented Sep 20, 2018

Exact same situation here as well. Sitting at 2.61 GB without a single container running.

@ghost

ghost commented Sep 21, 2018

2.91 GB here, top of the chart for me :) Just doesn't seem right.

@Mibeon

Mibeon commented Sep 24, 2018

Something is going wrong.
I read all the comments and realized that in my case, too, the usage of about 1.4 GB (for nothing) is too much. I have asked myself many times why Docker needs so much RAM when running idle (even with no containers created).

Reopen this ticket, damn. The explanation at the top is not plausible enough.

@iMerica

iMerica commented Sep 26, 2018

I'm seeing the same thing on Docker version 18.06.1-ce, build e68fc7a.

PSA: If you're reporting the same issue, please attach the version of Docker you're running, using docker --version. It's just useless noise otherwise.

@penndsg

penndsg commented Sep 26, 2018

Same problem, Docker version 18.06.0-ce, build 0ffa825, 1.8 GB with no containers running.

@ghost

ghost commented Sep 27, 2018

I'm running at 5.05 GB now, on a brand new MacBook that has never even run a Docker image. This can't be right.

Docker version 18.06.1-ce, build e68fc7a

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.93-linuxkit-aufs
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 6
Total Memory: 11.71GiB
Name: linuxkit-025000000001
ID: ARNV:QEQI:MLN4:XLZ6:PFY7:44QK:XGEQ:GX6H:RZTY:6JAJ:CBBI:ZS6R
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 24
 Goroutines: 50
 System Time: 2018-09-27T06:46:47.768576607Z
 EventsListeners: 2
HTTP Proxy: gateway.docker.internal:3128
HTTPS Proxy: gateway.docker.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

@stroebs

stroebs commented Sep 27, 2018

Same issue here on a fresh installation of macOS Mojave. Docker CE 18.06.1-ce-mac73 (26764).
Noticed my Mac was lagging. Stopping Docker resolves the lag issue.

[screenshots]

@fadelardi

2 GB usage here. No containers running.
Docker for Mac ver.: Version 18.06.1-ce-mac73
MacOS: 10.12.6

@troyharvey

🤔 no containers running

[screenshot]

@daBayrus

daBayrus commented Sep 28, 2018

No containers running. OS: Mojave, Docker version 18.06.1-ce-mac73 (26764)

[screenshot]

@iMerica

iMerica commented Sep 28, 2018

This is my call graph while Docker is completely idle (no containers running). Version info: Docker version 18.06.1-ce, build e68fc7a.

Physical footprint:         13.2G
Physical footprint (peak):  13.2G
----

Call graph:
    7795 Thread_2043441   DispatchQueue_1: com.apple.main-thread  (serial)
    + 7795 start  (in libdyld.dylib) + 1    
    +   7795 main  (in com.docker.hyperkit) + 10565    
    +     7795 kevent  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043442
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 caml_thread_start  (in com.docker.hyperkit) + 104    
    +         7795 caml_start_program  (in com.docker.hyperkit) + 92    
    +           7795 camlThread__fun_1562  (in com.docker.hyperkit) + 137    
    +             7795 camlLwt_main__run_1327  (in com.docker.hyperkit) + 156    
    +               7795 camlLwt_engine__fun_2951  (in com.docker.hyperkit) + 442    
    +                 7795 camlLwt_engine__fun_3012  (in com.docker.hyperkit) + 35    
    +                   7795 unix_select  (in com.docker.hyperkit) + 661    
    +                     7795 __select  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043443
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7792 caml_thread_tick  (in com.docker.hyperkit) + 79    
    +       ! 7792 __select  (in libsystem_kernel.dylib) + 10    
    +       2 caml_thread_tick  (in com.docker.hyperkit) + 84    
    +       1 caml_thread_tick  (in com.docker.hyperkit) + 89    
    +         1 caml_record_signal  (in com.docker.hyperkit) + 0    
    7795 Thread_2043444
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 worker_loop  (in com.docker.hyperkit) + 219    
    +         7795 _pthread_cond_wait  (in libsystem_pthread.dylib) + 724    
    +           7795 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043445: callout
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7752 callout_thread_func  (in com.docker.hyperkit) + 169    
    +       ! 7739 _pthread_cond_wait  (in libsystem_pthread.dylib) + 775    
    +       ! : 7732 __psynch_cvwait  (in libsystem_kernel.dylib) + 10,12    
    +       ! : 7 cerror_nocancel  (in libsystem_kernel.dylib) + 20,6,...    
    +       ! 12 _pthread_cond_wait  (in libsystem_pthread.dylib) + 793,0,...    
    +       ! 1 DYLD-STUB$$__error  (in libsystem_pthread.dylib) + 0    
    +       32 callout_thread_func  (in com.docker.hyperkit) + 217    
    +       ! 32 vlapic_callout_handler  (in com.docker.hyperkit) + 92    
    +       !   30 vcpu_notify_event  (in com.docker.hyperkit) + 104    
    +       !   : 29 pthread_cond_signal  (in libsystem_pthread.dylib) + 488    
    +       !   : | 29 __psynch_cvsignal  (in libsystem_kernel.dylib) + 10    
    +       !   : 1 pthread_cond_signal  (in libsystem_pthread.dylib) + 515    
    +       !   :   1 _pthread_cond_updateval  (in libsystem_pthread.dylib) + 1    
    +       !   2 vcpu_notify_event  (in com.docker.hyperkit) + 112    
    +       !     2 pthread_mutex_unlock  (in libsystem_pthread.dylib) + 0,89    
    +       5 callout_thread_func  (in com.docker.hyperkit) + 70,155,...    
    +       4 callout_thread_func  (in com.docker.hyperkit) + 89    
    +       ! 4 mach_absolute_time  (in libsystem_kernel.dylib) + 18,28    
    +       2 callout_thread_func  (in com.docker.hyperkit) + 225    
    +         2 pthread_mutex_lock  (in libsystem_pthread.dylib) + 0,99    
    7795 Thread_2043446: net:ipc:tx
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 pci_vtnet_tx_thread.897  (in com.docker.hyperkit) + 600    
    +         7795 _pthread_cond_wait  (in libsystem_pthread.dylib) + 724    
    +           7795 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043447: net:ipc:rx
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 pci_vtnet_tap_select_func.898  (in com.docker.hyperkit) + 310    
    +         7795 __select  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043448: blk:2:0
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 blockif_thr  (in com.docker.hyperkit) + 149    
    +         7795 _pthread_cond_wait  (in libsystem_pthread.dylib) + 724    
    +           7795 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043449: vsock:tx
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 pci_vtsock_tx_thread  (in com.docker.hyperkit) + 150    
    +         7795 xselect  (in com.docker.hyperkit) + 32    
    +           7795 __select  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043450: vsock:rx
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 pci_vtsock_rx_thread  (in com.docker.hyperkit) + 146    
    +         7795 xselect  (in com.docker.hyperkit) + 32    
    +           7795 __select  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043451: blk:4:0
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 blockif_thr  (in com.docker.hyperkit) + 149    
    +         7795 _pthread_cond_wait  (in libsystem_pthread.dylib) + 724    
    +           7795 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043452: blk:5:0
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 blockif_thr  (in com.docker.hyperkit) + 149    
    +         7795 _pthread_cond_wait  (in libsystem_pthread.dylib) + 724    
    +           7795 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043453
    + 7795 start_wqthread  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_wqthread  (in libsystem_pthread.dylib) + 670    
    +     7795 __workq_kernreturn  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043454: vcpu:0
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 vcpu_thread  (in com.docker.hyperkit) + 1213    
    +         7737 xh_vm_run  (in com.docker.hyperkit) + 1335    
    +         ! 7736 _pthread_cond_wait  (in libsystem_pthread.dylib) + 775    
    +         ! : 7736 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    +         ! 1 _pthread_cond_wait  (in libsystem_pthread.dylib) + 860    
    +         45 xh_vm_run  (in com.docker.hyperkit) + 343    
    +         ! 44 vmx_run  (in com.docker.hyperkit) + 1718    
    +         ! : 44 hv_vcpu_run  (in Hypervisor) + 13    
    +         ! 1 vmx_run  (in com.docker.hyperkit) + 1790    
    +         !   1 vcpu_read_vmcs_id  (in Hypervisor) + 0    
    +         8 xh_vm_run  (in com.docker.hyperkit) + 3704    
    +         ! 4 vmm_emulate_instruction  (in com.docker.hyperkit) + 3228    
    +         ! : 3 vlapic_write  (in com.docker.hyperkit) + 379    
    +         ! : | 3 vlapic_icrtmr_write_handler  (in com.docker.hyperkit) + 285    
    +         ! : |   3 pthread_cond_signal  (in libsystem_pthread.dylib) + 488    
    +         ! : |     3 __psynch_cvsignal  (in libsystem_kernel.dylib) + 10    
    +         ! : 1 vlapic_icrlo_write_handler  (in com.docker.hyperkit) + 586    
    +         ! :   1 vcpu_notify_event  (in com.docker.hyperkit) + 104    
    +         ! :     1 pthread_cond_signal  (in libsystem_pthread.dylib) + 488    
    +         ! :       1 __psynch_cvsignal  (in libsystem_kernel.dylib) + 10    
    +         ! 2 vmm_emulate_instruction  (in com.docker.hyperkit) + 3076    
    +         ! : 1 vhpet_mmio_read  (in com.docker.hyperkit) + 77    
    +         ! : | 1 pthread_mutex_unlock  (in libsystem_pthread.dylib) + 89    
    +         ! : 1 vhpet_mmio_read  (in com.docker.hyperkit) + 169    
    +         ! :   1 vhpet_counter  (in com.docker.hyperkit) + 77    
    +         ! :     1 mach_absolute_time  (in libsystem_kernel.dylib) + 28    
    +         ! 1 vmm_emulate_instruction  (in com.docker.hyperkit) + 2554    
    +         ! : 1 vie_update_register  (in com.docker.hyperkit) + 134    
    +         ! :   1 vm_set_register  (in com.docker.hyperkit) + 51    
    +         ! :     1 vmx_setreg  (in com.docker.hyperkit) + 272    
    +         ! 1 vmm_emulate_instruction  (in com.docker.hyperkit) + 3126    
    +         2 xh_vm_run  (in com.docker.hyperkit) + 362    
    +         ! 1 vcpu_require_state  (in com.docker.hyperkit) + 15    
    +         ! : 1 vcpu_set_state  (in com.docker.hyperkit) + 50    
    +         ! :   1 pthread_mutex_lock  (in libsystem_pthread.dylib) + 7    
    +         ! 1 vcpu_require_state  (in com.docker.hyperkit) + 0    
    +         2 xh_vm_run  (in com.docker.hyperkit) + 346,3149    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 314    
    +           1 vcpu_require_state  (in com.docker.hyperkit) + 15    
    +             1 vcpu_set_state  (in com.docker.hyperkit) + 0    
    7795 Thread_2043473: vcpu:1
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 vcpu_thread  (in com.docker.hyperkit) + 1213    
    +         7760 xh_vm_run  (in com.docker.hyperkit) + 1335    
    +         ! 7760 _pthread_cond_wait  (in libsystem_pthread.dylib) + 775    
    +         !   7760 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    +         33 xh_vm_run  (in com.docker.hyperkit) + 343    
    +         ! 32 vmx_run  (in com.docker.hyperkit) + 1718    
    +         ! : 32 hv_vcpu_run  (in Hypervisor) + 13    
    +         ! 1 vmx_run  (in com.docker.hyperkit) + 1721    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 3704    
    +         ! 1 vmm_emulate_instruction  (in com.docker.hyperkit) + 3228    
    +         !   1 vlapic_icrlo_write_handler  (in com.docker.hyperkit) + 586    
    +         !     1 vcpu_notify_event  (in com.docker.hyperkit) + 104    
    +         !       1 pthread_cond_signal  (in libsystem_pthread.dylib) + 488    
    +         !         1 __psynch_cvsignal  (in libsystem_kernel.dylib) + 10    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 2682    
    7795 Thread_2043474: vcpu:2
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 vcpu_thread  (in com.docker.hyperkit) + 1213    
    +         7774 xh_vm_run  (in com.docker.hyperkit) + 1335    
    +         ! 7773 _pthread_cond_wait  (in libsystem_pthread.dylib) + 775    
    +         ! : 7773 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    +         ! 1 _pthread_cond_wait  (in libsystem_pthread.dylib) + 860    
    +         19 xh_vm_run  (in com.docker.hyperkit) + 343    
    +         ! 19 vmx_run  (in com.docker.hyperkit) + 1718    
    +         !   19 hv_vcpu_run  (in Hypervisor) + 13    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 1250    
    +         ! 1 vlapic_pending_intr  (in com.docker.hyperkit) + 18    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 3704    
    +           1 vmm_emulate_instruction  (in com.docker.hyperkit) + 3228    
    +             1 vlapic_write  (in com.docker.hyperkit) + 379    
    +               1 vlapic_icrtmr_write_handler  (in com.docker.hyperkit) + 285    
    +                 1 pthread_cond_signal  (in libsystem_pthread.dylib) + 488    
    +                   1 __psynch_cvsignal  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043475: vcpu:3
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 vcpu_thread  (in com.docker.hyperkit) + 1213    
    +         7759 xh_vm_run  (in com.docker.hyperkit) + 1335    
    +         ! 7758 _pthread_cond_wait  (in libsystem_pthread.dylib) + 775    
    +         ! : 7758 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    +         ! 1 _pthread_cond_wait  (in libsystem_pthread.dylib) + 860    
    +         30 xh_vm_run  (in com.docker.hyperkit) + 343    
    +         ! 29 vmx_run  (in com.docker.hyperkit) + 1718    
    +         ! : 29 hv_vcpu_run  (in Hypervisor) + 13    
    +         ! 1 vmx_run  (in com.docker.hyperkit) + 131    
    +         !   1 hv_vmx_vcpu_write_vmcs  (in Hypervisor) + 0    
    +         3 xh_vm_run  (in com.docker.hyperkit) + 3704    
    +         ! 3 vmm_emulate_instruction  (in com.docker.hyperkit) + 3228    
    +         !   2 vlapic_write  (in com.docker.hyperkit) + 379    
    +         !   : 2 vlapic_icrtmr_write_handler  (in com.docker.hyperkit) + 285    
    +         !   :   2 pthread_cond_signal  (in libsystem_pthread.dylib) + 488    
    +         !   :     2 __psynch_cvsignal  (in libsystem_kernel.dylib) + 10    
    +         !   1 vlapic_icrlo_write_handler  (in com.docker.hyperkit) + 586    
    +         !     1 vcpu_notify_event  (in com.docker.hyperkit) + 46    
    +         !       1 pthread_mutex_lock  (in libsystem_pthread.dylib) + 99    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 314    
    +         ! 1 vcpu_require_state  (in com.docker.hyperkit) + 15    
    +         !   1 vcpu_set_state  (in com.docker.hyperkit) + 50    
    +         !     1 pthread_mutex_lock  (in libsystem_pthread.dylib) + 7    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 1629    
    +         ! 1 vm_copy_setup  (in com.docker.hyperkit) + 131    
    +         !   1 vm_gla2gpa  (in com.docker.hyperkit) + 609    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 318    
    7795 Thread_2043731: 9p:port
      7795 thread_start  (in libsystem_pthread.dylib) + 13    
        7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
          7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
            7795 pci_vt9p_thread  (in com.docker.hyperkit) + 816    
              7795 read  (in libsystem_kernel.dylib) + 10    

Total number in stack (recursive counted multiple, when >=5):
        16       _pthread_body  (in libsystem_pthread.dylib) + 126    
        16       _pthread_start  (in libsystem_pthread.dylib) + 70    
        16       thread_start  (in libsystem_pthread.dylib) + 13    
        10       __psynch_cvwait  (in libsystem_kernel.dylib) + 0    
        6       __psynch_cvsignal  (in libsystem_kernel.dylib) + 0    
        6       pthread_cond_signal  (in libsystem_pthread.dylib) + 488    
        5       __select  (in libsystem_kernel.dylib) + 0    
        5       _pthread_cond_wait  (in libsystem_pthread.dylib) + 724    
        5       _pthread_cond_wait  (in libsystem_pthread.dylib) + 775    

Sort by top of stack, same collapsed (when >= 5):
        __psynch_cvwait  (in libsystem_kernel.dylib)        77734
        __select  (in libsystem_kernel.dylib)        38972
        __workq_kernreturn  (in libsystem_kernel.dylib)        7795
        kevent  (in libsystem_kernel.dylib)        7795
        read  (in libsystem_kernel.dylib)        7795
        hv_vcpu_run  (in Hypervisor)        124
        __psynch_cvsignal  (in libsystem_kernel.dylib)        37
        _pthread_cond_wait  (in libsystem_pthread.dylib)        15
        cerror_nocancel  (in libsystem_kernel.dylib)        7
        callout_thread_func  (in com.docker.hyperkit)        5
        mach_absolute_time  (in libsystem_kernel.dylib)        5
        pthread_mutex_lock  (in libsystem_pthread.dylib)        5
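The call graph above looks like output from the macOS sample tool. To capture an equivalent trace yourself, here is a minimal sketch of my own (it assumes sample is available and hyperkit is running):

import subprocess

# Sample the hyperkit process for 10 seconds and write the call graph to a file.
subprocess.run(
    ["sample", "com.docker.hyperkit", "10", "-file", "hyperkit-sample.txt"],
    check=True,
)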

@EduardRakov

CPU load is far too high, as well as RAM, and my MacBook is lagging :( This problem appeared after updating from High Sierra to Mojave. It's really annoying. Folks, do you have any thoughts/estimates on when the problem might be fixed?

@madskonradsen

@kyprifog Because, as stated by ijc in the first comment, it is working as intended. I agree that it's pretty bad. The issue stems from hyperkit, and Macs aren't usually used as production servers, so they can't be bothered to make it any better than it is right now.

@ajpinedam

+1

1 similar comment
@nicolapalavecino

+1

@jacoscaz

jacoscaz commented Jan 7, 2020

Personally, I've migrated to maintaining dedicated start/stop scripts for most of the projects I work on. The majority of the containers I used to deal with were reasonably easy to replace with brew-based dependency management and pm2-based process management. pm2's save/resurrect commands are particularly helpful in this regard. Maintaining these scripts takes time but, particularly for local development environments, not having to deal with Docker's memory issues multiple times a day makes it worth it.

Has anyone looked into alternatives to Docker? A few colleagues of mine have migrated to Linux-based work machines; others have switched to running Docker inside a Linux VM (which is also what Docker for Mac does internally). Is there a different container runtime that runs on macOS natively and is compatible with Docker containers and the Docker registry?

@kubal5003

It seems that this issue shows the attitude of the Docker team towards one of the most critical issues ever. Instead of working on a solution - even if that involves hyperkit, and possibly even Apple itself if the issue goes deeper, to the OS level - what does the Docker team do? Absolutely nothing. The problem has been unresolved for four years now.

@bicepjai

bicepjai commented Mar 4, 2020

Please reopen this issue.

@vikaschoudhary16

Please reopen the issue.

@vicky-holcomb

+1, also seeing this issue. After a fresh restart, not running any containers, Docker is using 4.36 GB of memory.

@sqanet

sqanet commented Apr 11, 2020

I installed the latest Docker Desktop on macOS Catalina. I always quit Docker if I am not using it; otherwise about 3 GB of RAM is consumed even with no containers.

@Roxiun

Roxiun commented Apr 17, 2020

+1 Please Reopen the issue

@karloskalcium

karloskalcium commented Apr 21, 2020

+1, would love it if the team would focus on a fix. I'm seeing this on OS X Mojave: 3.5 GB memory and 5.6% CPU, and I'm just running a single tiny container that is mostly sleeping.

Docker version 19.03.8, build afacb8b

@gaisho

gaisho commented Apr 24, 2020

Same. Please reopen this issue. I'm running at 3.7 GB of memory on a simple WordPress application.

@kaueburiti

+1 Please Reopen the issue, it's a lot of memory ):

@ghost

ghost commented Apr 30, 2020

+1 please solve this

@seanturner026

No containers running and 2.43 GB of memory consumed.

macOS 10.14.6
Docker Desktop 2.2.0.5

@astrostl

astrostl commented May 9, 2020

TLDR: Docker Desktop for Mac initially consumes so much memory because it is running a VM in the background [1], and it appears to hold everything it ever consumes (but actually doesn't) because of an apparent bug in memory reporting in Mac OS X itself [2]

[1] Docker Desktop for Mac runs a Linux VM via https://github.com/moby/hyperkit , as documented at https://docs.docker.com/docker-for-mac/docker-toolbox/ . This is how you can run Linux containers on Mac, it is passing them off to the daemon running there. That VM alone consumes 2GB of memory by default, although it can be pulled back to 1GB in preferences.

[2] https://docs.google.com/document/d/17ZiQC1Tp9iH320K-uqVLyiJmk4DHJ3c4zgQetJiKYQM/edit has extensive documentation and testing. Critically: "The system is able to recover memory from Docker Desktop and give it to other processes if the system is under memory pressure. This is not reflected in the headline memory figure in the Activity Monitor, but can be seen by looking at the Real Memory." and "Since the qemu and hyperkit processes show the same double-accounting problem in Activity Monitor despite them being completely separate codebases, we conclude the bug must be in macOS itself, probably the Hypervisor.framework. We have reported the bug to Apple as bug number 48714544."
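One way to check the "Real Memory" claim for yourself is to watch the hyperkit resident set over time while something else applies memory pressure. Here is a small sketch of my own (it assumes macOS pgrep and ps; the one-second interval is arbitrary):

import subprocess
import time

def hyperkit_rss_mib():
    pid = subprocess.check_output(
        ["pgrep", "-f", "com.docker.hyperkit"], text=True
    ).split()[0]
    # rss is reported in KiB by BSD/macOS ps; the trailing '=' suppresses the header.
    rss_kib = int(subprocess.check_output(
        ["ps", "-o", "rss=", "-p", pid], text=True
    ).strip())
    return rss_kib / 1024

if __name__ == "__main__":
    # Run a memory-hungry workload elsewhere and watch whether this figure shrinks.
    while True:
        print("%s  hyperkit rss: %.0f MiB" % (time.strftime("%H:%M:%S"), hyperkit_rss_mib()))
        time.sleep(1)

If this resident figure drops under pressure while Activity Monitor's headline number does not, that matches the double-accounting explanation above.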

@benspaul

please fix help

@eXamadeus

eXamadeus commented May 17, 2020

> TLDR: Docker Desktop for Mac initially consumes so much memory because it is running a VM in the background [1], and it appears to hold everything it ever consumes (but actually doesn't) because of an apparent bug in memory reporting in Mac OS X itself [2]
>
> [1] Docker Desktop for Mac runs a Linux VM via https://github.com/moby/hyperkit , as documented at https://docs.docker.com/docker-for-mac/docker-toolbox/ . This is how you can run Linux containers on Mac, it is passing them off to the daemon running there. That VM alone consumes 2GB of memory by default, although it can be pulled back to 1GB in preferences.
>
> [2] https://docs.google.com/document/d/17ZiQC1Tp9iH320K-uqVLyiJmk4DHJ3c4zgQetJiKYQM/edit has extensive documentation and testing. Critically: "The system is able to recover memory from Docker Desktop and give it to other processes if the system is under memory pressure. This is not reflected in the headline memory figure in the Activity Monitor, but can be seen by looking at the Real Memory." and "Since the qemu and hyperkit processes show the same double-accounting problem in Activity Monitor despite them being completely separate codebases, we conclude the bug must be in macOS itself, probably the Hypervisor.framework. We have reported the bug to Apple as bug number 48714544."

/sigh

This "conclusion" again fails to address the concerns I posted here: #178 (comment).

THE RESERVED MEMORY IS STILL FAR TOO HIGH FOR AN IDLE PROCESS. It never dips below 1.6 GB in my testing and this has NOTHING to do with the double reporting bug. I'm aware of the distinctions between Virtual Memory and Reserved Memory, so a bug in VIRT reporting is great to know about, but I couldn't care less as it isn't relevant. In my examples, RES memory is right there for show and it doesn't go down AT ALL from first boot. Watch the GIF again.

This "virtual memory is double reported" point is a distraction. It is NOT RELATED to the issue I'm reporting. We all know virtual memory doesn't matter here. But the reserved memory not being freed up? That's a problem.

Honestly, I need to open a new issue with a link to this thread, since the issue I'm seeing isn't a memory leak; it's a failure to free memory when pressure is applied...

@astrostl

astrostl commented May 17, 2020

Are you considering the fact that a Linux VM with a minimum of 1GB of memory (and a current default of 2GB) hard-allocated to it is running? That's the [1] part of the message.

Even if the VM isn't consuming its max, it's normal behavior for hypervisors in general to claim and not release the max for guest VMs. They don't have the OS-level integration with the guest that would be required to "know" how much was actually needed or consumed by default, and even if they did it could be dangerous for stability to dynamically allocate because a surge could send the host into swap or worse depending on how close to its capacity it already was.

When you are on Linux, the Docker daemon is "a process" because containers will reuse the Linux kernel you're already running. On Mac and Windows that isn't possible, so it's more than a process - it's a Linux VM with a Docker process running.
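The hard allocation described above is also visible from the CLI: on Docker Desktop, docker info reports the VM's total memory (the value configured in Preferences), not the host's physical RAM, as the earlier docker info dumps in this thread show. A small sketch of my own, assuming the docker CLI is installed and the daemon is reachable:

import subprocess

# MemTotal is reported in bytes; on Docker Desktop it reflects the Linux VM's
# memory allocation rather than the Mac's physical RAM.
mem_total = int(subprocess.check_output(
    ["docker", "info", "--format", "{{.MemTotal}}"], text=True
).strip())

print("VM memory allocation: %.2f GiB" % (mem_total / 2**30))

Lowering the memory slider in Preferences should move this ceiling, and with it hyperkit's maximum footprint, down accordingly.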

@zafercavdar

No containers are running and 11 GB of RAM is allocated by Docker.
[screenshot: Screen Shot 2020-06-18 at 15 21 38]

@mohitt008

I'm facing the same issue.

@fredsherburne-splunk

I am facing the same issue. 2 GB memory usage right after starting up my machine.

@deboracosilveira

I'm also facing this issue, almost 3 GB here!

@georgezim85

georgezim85 commented Jul 1, 2020

Try to use just specific project directories (for volumes) in Resources -> File Sharing.
I noticed that if I share my entire home directory there, it takes much more memory.

My com.docker.hyperkit process is using 1.86 GB of memory.

@shahrukh-alizai

OS: macOS Catalina 10.15.5
Docker: 2.3.0.3 (45519)
Channel: Stable
Engine: 19.03.8

No containers running. Swap size 512 MB.

Memory usage: 8.33 GB

Any solution?

[screenshot]

@yogeshmurugesan

I also face the same issue. No containers are running, yet it still consumes more than 3 GB of memory (Docker version 19.03.8, build afacb8b).

@eXamadeus

> Are you considering the fact that a Linux VM with a minimum of 1GB of memory (and a current default of 2GB) hard-allocated to it is running? That's the [1] part of the message.

Actually, it seems I missed that part when I was originally replying to your comment. That would explain the constant reserved memory readings I have seen. Uggh, that's a lot of overhead to have running constantly. It would be nice if the Docker daemon were smart enough to spin down this VM when there are no containers running. Sure, it would result in slower startups, but it would be a nice configuration option: something like a "memory-conscious mode" you can opt into, which results in a slightly longer spin-up for the first container you launch because it needs to boot the hyperkit VM. It could even have a configurable decay time before the hyperkit VM gets spun back down (once the last running container is stopped).

Anyway just a thought. It certainly would make leaving the docker daemon on in the background a lot easier for me.
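For what it's worth, something like the spin-down described above can be approximated from the outside today. The sketch below is a hypothetical workaround of my own, not a Docker feature: it polls docker ps and quits Docker Desktop via AppleScript once no containers have been running for a while (it assumes macOS osascript and the docker CLI; the names and timings are arbitrary):

import subprocess
import time

IDLE_SECONDS = 600  # how long to wait after the last container stops

def running_containers():
    try:
        out = subprocess.check_output(["docker", "ps", "-q"], text=True)
    except subprocess.CalledProcessError:
        return []  # daemon unreachable; treat as nothing running
    return [line for line in out.splitlines() if line.strip()]

def main():
    idle_since = None
    while True:
        if running_containers():
            idle_since = None
        elif idle_since is None:
            idle_since = time.time()
        elif time.time() - idle_since > IDLE_SECONDS:
            # Quitting the app also shuts down the hyperkit VM and frees its memory.
            subprocess.run(["osascript", "-e", 'quit app "Docker"'], check=True)
            break
        time.sleep(30)

if __name__ == "__main__":
    main()

Starting Docker Desktop again (and waiting for the VM to boot) is then the cost of the next container you launch.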

@docker-robott
Collaborator

Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle locked

@docker locked and limited conversation to collaborators Aug 12, 2020