Docker Engine Memory leak #178

Closed
andresnatanael opened this Issue Aug 8, 2016 · 57 comments


andresnatanael commented Aug 8, 2016

Expected behavior

After killing all the Docker containers, the com.docker.hyperkit process should free the used memory and return to its initial state (~50 MB).

Actual behavior

After killing all the Docker containers, the com.docker.hyperkit process is still using 3.49 GB.

Information

Diagnostic ID: EB6AFE2E-34AA-4617-B849-D79863CDC40C
Docker for Mac: 1.12.0 (Build 10871)
macOS: Version 10.11.6 (Build 15G31)
[OK] docker-cli
[OK] app
[OK] moby-syslog
[OK] disk
[OK] virtualization
[OK] system
[OK] menubar
[OK] osxfs
[OK] db
[OK] slirp
[OK] moby-console
[OK] logs
[OK] vmnetd
[OK] env
[OK] moby
[OK] driver.amd64-linux

Steps to reproduce

1. Start the stack:

       docker-compose up -d

   with this docker-compose.yml:

       version: '2'
       services:
         student:
           image: docker:dind
           ports:
             - "8000-8010"
           privileged: true
           volumes:
             - /tmp/docker-training:/docker-training

2. Scale the service:

       docker-compose scale student=10

3. Tear it down:

       docker-compose down

4. docker ps shows no containers running, but com.docker.hyperkit is still consuming a lot of memory (3.49 GB).

Only restarting the Docker engine VM (hyperkit) frees the memory.

ijc (Contributor) commented Aug 9, 2016

Thanks for your report. Unfortunately, once memory has been touched by the Linux kernel inside the VM, it becomes populated RAM in the hyperkit process (via the usual OS page-faulting mechanisms), and there is no way for the guest kernel to signal back to the hypervisor, and hence the host, when that RAM is free again so that those memory regions could be turned back into unallocated holes.

However, since that RAM is unused in the guest, it should not be touched by anything in the VM, and therefore not by the hyperkit process either. I would therefore expect it to eventually be swapped out to disk in favour of keeping actually useful/active data for other processes in RAM, just as for any large but idle process.

It sounds like you are checking the virtual address size (vsz in ps output) of the hyperkit process rather than the resident set size (rss). The latter should shrink for an idle hyperkit process as other processes request memory and hyperkit gets swapped out, while the former basically only grows and does not necessarily represent use of actual RAM.

I'm afraid that compacting the vsz is basically a won't-fix here, and I'm therefore going to close on that basis. If, however, you are observing the rss not shrinking (while other processes create memory pressure, i.e. the memory appears to be somehow locked into RAM and not swappable), then please do update this ticket with details of the rss memory patterns you are observing, and we can reopen and investigate that angle further.
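For anyone checking their own numbers, the two figures ijc distinguishes can be compared directly with ps (a sketch; the pgrep pattern is an assumption and may need adjusting for your setup):

```shell
# Print virtual size (VSZ) and resident set size (RSS), both in KB,
# for the hyperkit process. VSZ basically only grows; RSS is actual RAM.
pid=$(pgrep -f com.docker.hyperkit | head -n 1)
ps -o pid=,vsz=,rss=,comm= -p "$pid"
```

If it is the RSS figure (not just VSZ) that stays high under memory pressure, that is the case worth reporting back here.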

eXamadeus commented Jun 29, 2018

Hey all. Sorry to revive a dead thread, but I would like this ticket to be reopened. After looking into it, the reserved memory for the hyperkit process sits around 1.6-1.7 GB (based on my Docker memory settings) even when I try to apply significant memory pressure. (This is, of course, without any running containers.)

Output of docker info:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 56
Server Version: 18.03.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.87-linuxkit-aufs
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 3
Total Memory: 5.818GiB
Name: linuxkit-025000000001
ID: ZMMF:UZTZ:IHNE:2EIZ:TA4G:6P4L:SU3N:4OSK:UC5K:UM4M:DIKT:6ZXJ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 22
 Goroutines: 41
 System Time: 2018-06-29T18:06:39.2425914Z
 EventsListeners: 2
HTTP Proxy: docker.for.mac.http.internal:3128
HTTPS Proxy: docker.for.mac.http.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

I wrote a simple Python script to store crap in memory, then spam-opened as many Google Chrome tabs as I could (the best way to apply memory pressure I could think of). Here is the script I used:

import sys
import threading
from unittest import TestCase

items = []  # shared list that every thread appends to


class WastefulThread(threading.Thread):
    """Grows the shared list and copies slices of it to burn through RAM."""

    def run(self):
        for i in range(10000000000):
            items.append(i)
            if i % 100000 == 0:
                sys.stdout.write('\r' + str(i))
                blah = items[0:i]  # copy an ever-growing slice to waste more memory

        for item in items:
            sys.stdout.write('\r{}'.format(item))


class WasteMemory(TestCase):

    def test_waste(self):
        # Launch eight wasteful threads to apply memory pressure.
        for i in range(8):
            thread = WastefulThread(name='Thread-{}'.format(i))
            thread.start()

and here is a gif showing the memory usage of the top four processes sorted by reserved memory (RES):

[gif: memory usage of the top four processes]

Here is a higher quality gifv link.

The memory footprint of the com.docker.hyperkit process stays constant at >1600 MB. This is the RES footprint. Also note that SIGNIFICANT memory pressure is applied to the system, so much so that the running "WasteMemory" process ends up dumping its memory to virtual (VIRT) quite a few times throughout the gif.

Here is an image of the memory after the wasteful process got to run a little longer...

[screenshot: memory after the wasteful process ran a little longer]

A larger span of the memory and swap usage:

[screenshot: memory and swap usage]

TLDR: It appears that the com.docker.hyperkit process does not free up reserved memory, even if significant memory pressure is applied. I would like this issue to be reopened and investigated as this is a significant amount of memory being reserved for what should be an inactive process.
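One way to repeat this observation is to sample the RES figure at intervals while applying pressure, e.g. with a small loop like this (a sketch; the pgrep pattern and the 5-second interval are arbitrary choices):

```shell
# Print hyperkit's resident set size (in KB) every 5 seconds; stop with Ctrl-C.
# A value that stays flat while other processes get swapped out matches the
# behaviour described above.
while sleep 5; do
  ps -o rss= -p "$(pgrep -f com.docker.hyperkit | head -n 1)"
done
```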

eXamadeus commented Jun 29, 2018

I have also crawled around the related issues and haven't found anything that explains the reasons or points to a solution for this issue. Please let me know if I can provide any more information. I believe the GIF I provided doesn't show full memory reservation (roughly 16 GB); however, during other tests I ran it was almost 100% maxed through similar means and the RES portion of the com.docker.hyperkit process remained unchanged.

mkohlmyr commented Aug 8, 2018

Having Docker idle in my toolbar lands it at 1.3 GB of RAM; I really don't understand how that is even possible. This is very shortly after booting, before using any containers.

iddan commented Sep 4, 2018

Can this be reopened please?

kyprifog commented Sep 8, 2018

I just restarted my computer with no containers running and found hyperkit using 9 GB of RAM. Then, after restarting Docker, it was still using 3 GB. This needs to be reopened.

gvbkr commented Sep 15, 2018

Yes, please reopen. With no containers running, it's using ~5 GB of RAM. This significantly impacts the available resources and slows down other things we need to run. Possibly a bug introduced in the latest version?

AnnaKarinaNava commented Sep 18, 2018

Same situation here: just restarted my laptop, no containers, and com.docker.hyperkit is at ~2 GB of RAM.

vfontes commented Sep 20, 2018

Exact same situation here as well. Sitting at 2.61 GB without a single container running.

ICT22 commented Sep 21, 2018

2.91 GB here. top of the chart for me :) Just doesn't seem right.

Mibeon commented Sep 24, 2018

Something is going wrong.
I read all the comments and realized that in my case, too, the usage of about 1.4 GB (for nothing) is too much. I've asked myself many times why Docker needs so much RAM while sitting idle (even with no containers created).

Reopen this ticket, damn. The explanation at the top is not plausible enough.

iMerica commented Sep 26, 2018

I'm seeing the same thing on Docker version 18.06.1-ce, build e68fc7a.

PSA: If you're reporting the same issue, please attach the version of Docker you're running using docker --version. It's just useless noise otherwise.
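Following up on that PSA, here is a minimal snippet for gathering the details worth attaching to a report (the docker info template fields are an assumption based on its Go-template support and may differ across versions):

```shell
# Collect the client version plus basic engine state for a bug report.
docker --version
docker info --format 'Server: {{.ServerVersion}}  Containers: {{.Containers}}  MemTotal: {{.MemTotal}}'
```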

penndsg commented Sep 26, 2018

Same problem, Docker version 18.06.0-ce, build 0ffa825, 1.8 GB with no containers running.

ICT22 commented Sep 27, 2018

I'm running at 5.05 GB now, on a brand-new MacBook that has never even run a Docker image. This can't be right.

Docker version 18.06.1-ce, build e68fc7a

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.93-linuxkit-aufs
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 6
Total Memory: 11.71GiB
Name: linuxkit-025000000001
ID: ARNV:QEQI:MLN4:XLZ6:PFY7:44QK:XGEQ:GX6H:RZTY:6JAJ:CBBI:ZS6R
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 24
 Goroutines: 50
 System Time: 2018-09-27T06:46:47.768576607Z
 EventsListeners: 2
HTTP Proxy: gateway.docker.internal:3128
HTTPS Proxy: gateway.docker.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

stroebs commented Sep 27, 2018

Same issue here on a fresh installation of macOS Mojave, Docker CE 18.06.1-ce-mac73 (26764).
Noticed my Mac was lagging; stopping Docker resolves the lag.

[screenshots: memory usage]

fadelardiSA commented Sep 27, 2018

2 GB usage here. No containers running.
Docker for Mac ver.: Version 18.06.1-ce-mac73
MacOS: 10.12.6

troyharvey commented Sep 27, 2018

🤔 no containers running

[screenshot]

daBayrus commented Sep 28, 2018

No containers running. OS: Mojave. Docker: Version 18.06.1-ce-mac73 (26764)

[screenshot]

iMerica commented Sep 28, 2018

This is my call graph while Docker is completely idle (no containers running). Version info: Docker version 18.06.1-ce, build e68fc7a.

Physical footprint:         13.2G
Physical footprint (peak):  13.2G
----

Call graph:
    7795 Thread_2043441   DispatchQueue_1: com.apple.main-thread  (serial)
    + 7795 start  (in libdyld.dylib) + 1    
    +   7795 main  (in com.docker.hyperkit) + 10565    
    +     7795 kevent  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043442
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 caml_thread_start  (in com.docker.hyperkit) + 104    
    +         7795 caml_start_program  (in com.docker.hyperkit) + 92    
    +           7795 camlThread__fun_1562  (in com.docker.hyperkit) + 137    
    +             7795 camlLwt_main__run_1327  (in com.docker.hyperkit) + 156    
    +               7795 camlLwt_engine__fun_2951  (in com.docker.hyperkit) + 442    
    +                 7795 camlLwt_engine__fun_3012  (in com.docker.hyperkit) + 35    
    +                   7795 unix_select  (in com.docker.hyperkit) + 661    
    +                     7795 __select  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043443
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7792 caml_thread_tick  (in com.docker.hyperkit) + 79    
    +       ! 7792 __select  (in libsystem_kernel.dylib) + 10    
    +       2 caml_thread_tick  (in com.docker.hyperkit) + 84    
    +       1 caml_thread_tick  (in com.docker.hyperkit) + 89    
    +         1 caml_record_signal  (in com.docker.hyperkit) + 0    
    7795 Thread_2043444
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 worker_loop  (in com.docker.hyperkit) + 219    
    +         7795 _pthread_cond_wait  (in libsystem_pthread.dylib) + 724    
    +           7795 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043445: callout
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7752 callout_thread_func  (in com.docker.hyperkit) + 169    
    +       ! 7739 _pthread_cond_wait  (in libsystem_pthread.dylib) + 775    
    +       ! : 7732 __psynch_cvwait  (in libsystem_kernel.dylib) + 10,12    
    +       ! : 7 cerror_nocancel  (in libsystem_kernel.dylib) + 20,6,...    
    +       ! 12 _pthread_cond_wait  (in libsystem_pthread.dylib) + 793,0,...    
    +       ! 1 DYLD-STUB$$__error  (in libsystem_pthread.dylib) + 0    
    +       32 callout_thread_func  (in com.docker.hyperkit) + 217    
    +       ! 32 vlapic_callout_handler  (in com.docker.hyperkit) + 92    
    +       !   30 vcpu_notify_event  (in com.docker.hyperkit) + 104    
    +       !   : 29 pthread_cond_signal  (in libsystem_pthread.dylib) + 488    
    +       !   : | 29 __psynch_cvsignal  (in libsystem_kernel.dylib) + 10    
    +       !   : 1 pthread_cond_signal  (in libsystem_pthread.dylib) + 515    
    +       !   :   1 _pthread_cond_updateval  (in libsystem_pthread.dylib) + 1    
    +       !   2 vcpu_notify_event  (in com.docker.hyperkit) + 112    
    +       !     2 pthread_mutex_unlock  (in libsystem_pthread.dylib) + 0,89    
    +       5 callout_thread_func  (in com.docker.hyperkit) + 70,155,...    
    +       4 callout_thread_func  (in com.docker.hyperkit) + 89    
    +       ! 4 mach_absolute_time  (in libsystem_kernel.dylib) + 18,28    
    +       2 callout_thread_func  (in com.docker.hyperkit) + 225    
    +         2 pthread_mutex_lock  (in libsystem_pthread.dylib) + 0,99    
    7795 Thread_2043446: net:ipc:tx
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 pci_vtnet_tx_thread.897  (in com.docker.hyperkit) + 600    
    +         7795 _pthread_cond_wait  (in libsystem_pthread.dylib) + 724    
    +           7795 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043447: net:ipc:rx
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 pci_vtnet_tap_select_func.898  (in com.docker.hyperkit) + 310    
    +         7795 __select  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043448: blk:2:0
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 blockif_thr  (in com.docker.hyperkit) + 149    
    +         7795 _pthread_cond_wait  (in libsystem_pthread.dylib) + 724    
    +           7795 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043449: vsock:tx
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 pci_vtsock_tx_thread  (in com.docker.hyperkit) + 150    
    +         7795 xselect  (in com.docker.hyperkit) + 32    
    +           7795 __select  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043450: vsock:rx
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 pci_vtsock_rx_thread  (in com.docker.hyperkit) + 146    
    +         7795 xselect  (in com.docker.hyperkit) + 32    
    +           7795 __select  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043451: blk:4:0
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 blockif_thr  (in com.docker.hyperkit) + 149    
    +         7795 _pthread_cond_wait  (in libsystem_pthread.dylib) + 724    
    +           7795 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043452: blk:5:0
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 blockif_thr  (in com.docker.hyperkit) + 149    
    +         7795 _pthread_cond_wait  (in libsystem_pthread.dylib) + 724    
    +           7795 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043453
    + 7795 start_wqthread  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_wqthread  (in libsystem_pthread.dylib) + 670    
    +     7795 __workq_kernreturn  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043454: vcpu:0
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 vcpu_thread  (in com.docker.hyperkit) + 1213    
    +         7737 xh_vm_run  (in com.docker.hyperkit) + 1335    
    +         ! 7736 _pthread_cond_wait  (in libsystem_pthread.dylib) + 775    
    +         ! : 7736 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    +         ! 1 _pthread_cond_wait  (in libsystem_pthread.dylib) + 860    
    +         45 xh_vm_run  (in com.docker.hyperkit) + 343    
    +         ! 44 vmx_run  (in com.docker.hyperkit) + 1718    
    +         ! : 44 hv_vcpu_run  (in Hypervisor) + 13    
    +         ! 1 vmx_run  (in com.docker.hyperkit) + 1790    
    +         !   1 vcpu_read_vmcs_id  (in Hypervisor) + 0    
    +         8 xh_vm_run  (in com.docker.hyperkit) + 3704    
    +         ! 4 vmm_emulate_instruction  (in com.docker.hyperkit) + 3228    
    +         ! : 3 vlapic_write  (in com.docker.hyperkit) + 379    
    +         ! : | 3 vlapic_icrtmr_write_handler  (in com.docker.hyperkit) + 285    
    +         ! : |   3 pthread_cond_signal  (in libsystem_pthread.dylib) + 488    
    +         ! : |     3 __psynch_cvsignal  (in libsystem_kernel.dylib) + 10    
    +         ! : 1 vlapic_icrlo_write_handler  (in com.docker.hyperkit) + 586    
    +         ! :   1 vcpu_notify_event  (in com.docker.hyperkit) + 104    
    +         ! :     1 pthread_cond_signal  (in libsystem_pthread.dylib) + 488    
    +         ! :       1 __psynch_cvsignal  (in libsystem_kernel.dylib) + 10    
    +         ! 2 vmm_emulate_instruction  (in com.docker.hyperkit) + 3076    
    +         ! : 1 vhpet_mmio_read  (in com.docker.hyperkit) + 77    
    +         ! : | 1 pthread_mutex_unlock  (in libsystem_pthread.dylib) + 89    
    +         ! : 1 vhpet_mmio_read  (in com.docker.hyperkit) + 169    
    +         ! :   1 vhpet_counter  (in com.docker.hyperkit) + 77    
    +         ! :     1 mach_absolute_time  (in libsystem_kernel.dylib) + 28    
    +         ! 1 vmm_emulate_instruction  (in com.docker.hyperkit) + 2554    
    +         ! : 1 vie_update_register  (in com.docker.hyperkit) + 134    
    +         ! :   1 vm_set_register  (in com.docker.hyperkit) + 51    
    +         ! :     1 vmx_setreg  (in com.docker.hyperkit) + 272    
    +         ! 1 vmm_emulate_instruction  (in com.docker.hyperkit) + 3126    
    +         2 xh_vm_run  (in com.docker.hyperkit) + 362    
    +         ! 1 vcpu_require_state  (in com.docker.hyperkit) + 15    
    +         ! : 1 vcpu_set_state  (in com.docker.hyperkit) + 50    
    +         ! :   1 pthread_mutex_lock  (in libsystem_pthread.dylib) + 7    
    +         ! 1 vcpu_require_state  (in com.docker.hyperkit) + 0    
    +         2 xh_vm_run  (in com.docker.hyperkit) + 346,3149    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 314    
    +           1 vcpu_require_state  (in com.docker.hyperkit) + 15    
    +             1 vcpu_set_state  (in com.docker.hyperkit) + 0    
    7795 Thread_2043473: vcpu:1
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 vcpu_thread  (in com.docker.hyperkit) + 1213    
    +         7760 xh_vm_run  (in com.docker.hyperkit) + 1335    
    +         ! 7760 _pthread_cond_wait  (in libsystem_pthread.dylib) + 775    
    +         !   7760 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    +         33 xh_vm_run  (in com.docker.hyperkit) + 343    
    +         ! 32 vmx_run  (in com.docker.hyperkit) + 1718    
    +         ! : 32 hv_vcpu_run  (in Hypervisor) + 13    
    +         ! 1 vmx_run  (in com.docker.hyperkit) + 1721    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 3704    
    +         ! 1 vmm_emulate_instruction  (in com.docker.hyperkit) + 3228    
    +         !   1 vlapic_icrlo_write_handler  (in com.docker.hyperkit) + 586    
    +         !     1 vcpu_notify_event  (in com.docker.hyperkit) + 104    
    +         !       1 pthread_cond_signal  (in libsystem_pthread.dylib) + 488    
    +         !         1 __psynch_cvsignal  (in libsystem_kernel.dylib) + 10    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 2682    
    7795 Thread_2043474: vcpu:2
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 vcpu_thread  (in com.docker.hyperkit) + 1213    
    +         7774 xh_vm_run  (in com.docker.hyperkit) + 1335    
    +         ! 7773 _pthread_cond_wait  (in libsystem_pthread.dylib) + 775    
    +         ! : 7773 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    +         ! 1 _pthread_cond_wait  (in libsystem_pthread.dylib) + 860    
    +         19 xh_vm_run  (in com.docker.hyperkit) + 343    
    +         ! 19 vmx_run  (in com.docker.hyperkit) + 1718    
    +         !   19 hv_vcpu_run  (in Hypervisor) + 13    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 1250    
    +         ! 1 vlapic_pending_intr  (in com.docker.hyperkit) + 18    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 3704    
    +           1 vmm_emulate_instruction  (in com.docker.hyperkit) + 3228    
    +             1 vlapic_write  (in com.docker.hyperkit) + 379    
    +               1 vlapic_icrtmr_write_handler  (in com.docker.hyperkit) + 285    
    +                 1 pthread_cond_signal  (in libsystem_pthread.dylib) + 488    
    +                   1 __psynch_cvsignal  (in libsystem_kernel.dylib) + 10    
    7795 Thread_2043475: vcpu:3
    + 7795 thread_start  (in libsystem_pthread.dylib) + 13    
    +   7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
    +     7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
    +       7795 vcpu_thread  (in com.docker.hyperkit) + 1213    
    +         7759 xh_vm_run  (in com.docker.hyperkit) + 1335    
    +         ! 7758 _pthread_cond_wait  (in libsystem_pthread.dylib) + 775    
    +         ! : 7758 __psynch_cvwait  (in libsystem_kernel.dylib) + 10    
    +         ! 1 _pthread_cond_wait  (in libsystem_pthread.dylib) + 860    
    +         30 xh_vm_run  (in com.docker.hyperkit) + 343    
    +         ! 29 vmx_run  (in com.docker.hyperkit) + 1718    
    +         ! : 29 hv_vcpu_run  (in Hypervisor) + 13    
    +         ! 1 vmx_run  (in com.docker.hyperkit) + 131    
    +         !   1 hv_vmx_vcpu_write_vmcs  (in Hypervisor) + 0    
    +         3 xh_vm_run  (in com.docker.hyperkit) + 3704    
    +         ! 3 vmm_emulate_instruction  (in com.docker.hyperkit) + 3228    
    +         !   2 vlapic_write  (in com.docker.hyperkit) + 379    
    +         !   : 2 vlapic_icrtmr_write_handler  (in com.docker.hyperkit) + 285    
    +         !   :   2 pthread_cond_signal  (in libsystem_pthread.dylib) + 488    
    +         !   :     2 __psynch_cvsignal  (in libsystem_kernel.dylib) + 10    
    +         !   1 vlapic_icrlo_write_handler  (in com.docker.hyperkit) + 586    
    +         !     1 vcpu_notify_event  (in com.docker.hyperkit) + 46    
    +         !       1 pthread_mutex_lock  (in libsystem_pthread.dylib) + 99    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 314    
    +         ! 1 vcpu_require_state  (in com.docker.hyperkit) + 15    
    +         !   1 vcpu_set_state  (in com.docker.hyperkit) + 50    
    +         !     1 pthread_mutex_lock  (in libsystem_pthread.dylib) + 7    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 1629    
    +         ! 1 vm_copy_setup  (in com.docker.hyperkit) + 131    
    +         !   1 vm_gla2gpa  (in com.docker.hyperkit) + 609    
    +         1 xh_vm_run  (in com.docker.hyperkit) + 318    
    7795 Thread_2043731: 9p:port
      7795 thread_start  (in libsystem_pthread.dylib) + 13    
        7795 _pthread_start  (in libsystem_pthread.dylib) + 70    
          7795 _pthread_body  (in libsystem_pthread.dylib) + 126    
            7795 pci_vt9p_thread  (in com.docker.hyperkit) + 816    
              7795 read  (in libsystem_kernel.dylib) + 10    

Total number in stack (recursive counted multiple, when >=5):
        16       _pthread_body  (in libsystem_pthread.dylib) + 126    
        16       _pthread_start  (in libsystem_pthread.dylib) + 70    
        16       thread_start  (in libsystem_pthread.dylib) + 13    
        10       __psynch_cvwait  (in libsystem_kernel.dylib) + 0    
        6       __psynch_cvsignal  (in libsystem_kernel.dylib) + 0    
        6       pthread_cond_signal  (in libsystem_pthread.dylib) + 488    
        5       __select  (in libsystem_kernel.dylib) + 0    
        5       _pthread_cond_wait  (in libsystem_pthread.dylib) + 724    
        5       _pthread_cond_wait  (in libsystem_pthread.dylib) + 775    

Sort by top of stack, same collapsed (when >= 5):
        __psynch_cvwait  (in libsystem_kernel.dylib)        77734
        __select  (in libsystem_kernel.dylib)        38972
        __workq_kernreturn  (in libsystem_kernel.dylib)        7795
        kevent  (in libsystem_kernel.dylib)        7795
        read  (in libsystem_kernel.dylib)        7795
        hv_vcpu_run  (in Hypervisor)        124
        __psynch_cvsignal  (in libsystem_kernel.dylib)        37
        _pthread_cond_wait  (in libsystem_pthread.dylib)        15
        cerror_nocancel  (in libsystem_kernel.dylib)        7
        callout_thread_func  (in com.docker.hyperkit)        5
        mach_absolute_time  (in libsystem_kernel.dylib)        5
        pthread_mutex_lock  (in libsystem_pthread.dylib)        5

EduardRakov commented Sep 28, 2018

CPU is heavily loaded, as well as RAM, and my MacBook is lagging :( This problem appeared after updating from High Sierra to Mojave. It's really annoying. Folks, do you have any thoughts/estimates on when the problem can be fixed?

misha21742 commented Nov 2, 2018

macOS Mojave version 10.14
[screenshot: memory usage]

mnorkin commented Nov 2, 2018

MacOS Mojave, running 0 containers

[screenshot]

naviat commented Nov 6, 2018

MacOS Mojave, 0 container
[screenshot: memory usage]

mcgitty commented Nov 6, 2018

We can stop complaining. The first reply in this thread ("ijc commented Aug 9, 2016") explained that hogging the memory is hyperkit VM behavior:

https://docs.docker.com/docker-for-mac/docker-toolbox/#the-docker-for-mac-environment
https://github.com/moby/hyperkit
https://developer.apple.com/documentation/hypervisor

On Linux there is no such problem, because the host kernel supports cgroups. Not all Docker engines are created equal.

pmingram commented Nov 11, 2018

We can stop complaining. The first reply to this thread "ijc commented on Aug 9, 2016" stated that hogging the memory is a hyperkit VM behavior:

Without people reporting an obvious major flaw in a system, we can never progress and improve.

pabloleone commented Nov 13, 2018

I moved to Docker because it takes fewer resources... haha.

It's taking 4 GB of RAM to run with no containers started. I understand the reply above, but it's still an issue. Any solutions?

iMerica commented Nov 13, 2018

@mcgitty

We can stop complaining.

A GitHub issue is not a "complaint".

https://en.wikipedia.org/wiki/Software_bug

envygeeks commented Nov 14, 2018

On Linux, there is no such problem, because the host kernel supports cgroup.

So because there is no bug on Linux, there must be no bug on macOS? This is a very real problem, and the original explanation of why this is happening is complete hogwash. I've only booted up Alpine Linux; you're telling me that Alpine Linux running on Docker for macOS used 3 GB of RAM, and now suddenly Docker can't reclaim it through hyperkit?

ahayes99 commented Nov 15, 2018

[screenshot: memory usage]

Up to 10 GB now... keep up the good work, Docker.

gvbkr commented Nov 15, 2018

At this point it's not practical to run any Docker containers on my 2018 Mac, as a basic Django app running inside a Docker container hogs up to 28 GB of memory on the Mac. This is a real problem, and it forced me to just use virtualenv on the Mac directly instead of the container.

Does anyone who works on this repo use a Mac for development? I assume many engineers at real companies are facing this problem, so we need a solution.

@robotdan

This comment has been minimized.

robotdan commented Nov 15, 2018

Same here, I fire up Docker so I can build and publish images, and then I have to keep it turned off otherwise it chews through all of my RAM.

The offending process is always com.docker.hyperkit

MacBook Pro (15-inch 2016)
16 GB RAM
macOS Mojave version 10.14.1
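As the original report notes, the only thing that releases the memory is restarting the Docker VM. A sketch of automating that workaround, assuming macOS and an app bundle named "Docker" (true for Docker for Mac of this era); the function name is made up here:

```shell
# Workaround sketch: restart Docker for Mac so hyperkit starts with a fresh VM.
restart_docker_for_mac() {
  osascript -e 'quit app "Docker"'              # ask the app to quit cleanly
  while pgrep -f com.docker.hyperkit >/dev/null; do
    sleep 1                                     # wait for the VM to shut down
  done
  open -a Docker                                # relaunch with a fresh VM
}

if command -v osascript >/dev/null 2>&1; then
  restart_docker_for_mac
else
  echo "osascript not found; run this on macOS"
fi
```

This loses running containers, so it is only a stopgap until the leak itself is fixed.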

@iMerica

This comment has been minimized.

iMerica commented Nov 15, 2018

It would be nice if someone from Docker would comment on this issue to let us know either A) They don't have the bandwidth to fix this bug or B) They are aware of the issue and someone is already working on it.

The lack of communication is more frustrating than the bug itself.

@KatSick

This comment has been minimized.

KatSick commented Nov 20, 2018

Same for me. 1 container running

@robotdan

This comment has been minimized.

robotdan commented Nov 20, 2018

Thanks @KatSick same here.

@ijc Since this issue is closed, can you advise on how we can best move forward here?

I think we can all agree that an idle process should not consume 4+ GB of RAM without ever releasing it. There certainly does seem to be some sort of leak going on here.

Could we collect additional debug information to assist in resolving this issue, or based upon your knowledge of the system, do you have any guesses on where this issue may reside so we can open bugs elsewhere?

From the end user perspective, com.docker.hyperkit consumes a lot of RAM when it is idle, so this does seem to be a reasonable place to open issues, but if you can please provide some insight into where we should focus our attention, that would be really helpful.

Thanks, I'm happy to help in any way that I can.
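For anyone gathering numbers to attach to a report, a minimal sketch using standard macOS tools (`pgrep`/`ps`); the process name `com.docker.hyperkit` is the one discussed throughout this thread:

```shell
# Capture the hyperkit process's memory footprint for a bug report.
PID=$(pgrep -f com.docker.hyperkit | head -n 1)
if [ -n "$PID" ]; then
  # RSS (resident set, in KB) is roughly what Activity Monitor shows as "Memory".
  ps -o pid,rss,vsz,command -p "$PID"
else
  echo "com.docker.hyperkit is not running"
fi
```

Recording RSS before starting containers, after `docker-compose down`, and again after a VM restart would show the growth pattern concretely.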

@eddiemonge

This comment has been minimized.

eddiemonge commented Nov 20, 2018

Another issue is open for the same thing #3232

@iMerica

This comment has been minimized.

iMerica commented Nov 22, 2018

This issue persists in the newest version just released (2.0.0.0-mac78 28905).

Diagnostic ID CFA59942-CFDD-4EDB-8985-5B335DA897E3/20181122053327

In case anyone is interested in what the latest version of Docker For Mac includes:

Release Notes:
* Upgrades
  - [Docker 18.09.0](https://github.com/docker/docker-ce-packaging/releases/tag/v18.09.0)
  - [Docker compose 1.23.1](https://github.com/docker/compose/releases/tag/1.23.1)
  - [Docker Machine 0.16.0](https://github.com/docker/machine/releases/tag/v0.16.0)
  - [Kitematic 0.17.5](https://github.com/docker/kitematic/releases/tag/v0.17.5)
  - Linux Kernel 4.9.125

* New
  - New version scheme

* Deprecation
  - Removed support of AUFS
  - Removed support of OSX 10.11

* Bug fixes and minor changes
  - Fix appearance in dark mode for OSX 10.14 (Mojave)
  - VPNKit: Improved scalability of port forwarding. Related to [docker/for-mac#2841](https://github.com/docker/for-mac/issues/2841)
  - VPNKit: Limit the size of the UDP NAT table. This ensures port forwarding and regular TCP traffic continue even when running very chatty UDP protocols.
  - Ensure Kubernetes can be installed when using a non-default internal IP subnet.
  - Fix panic in diagnose
@adampatterson

This comment has been minimized.

adampatterson commented Nov 22, 2018

My Docker daemon was using just over 4 GB of memory without anything running.

Containers: 76
Running: 0
Paused: 0
Stopped: 76
Images: 446
Server Version: 18.09.0
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.125-linuxkit
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.952GiB
Name: linuxkit-025000000001
ID: UDXG:4JIP:BRDP:D4G6:PPX7:TVA7:IHRY:5WWZ:S77K:L6M3:MV6I:5L3G
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 24
Goroutines: 50
System Time: 2018-11-22T07:39:47.055602708Z
EventsListeners: 2
HTTP Proxy: gateway.docker.internal:3128
HTTPS Proxy: gateway.docker.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine


ref moby/hyperkit/issues/231
ref /issues/3304
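The 76 stopped containers and 446 images in the output above also occupy space inside the VM's disk image. Pruning them won't shrink the hyperkit process itself, but it reduces pressure inside the VM; a sketch using standard Docker CLI commands (guarded in case the daemon is stopped):

```shell
# Reclaim space inside the Docker VM: stopped containers and unused images.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker container prune -f    # delete all stopped containers
  docker image prune -a -f     # delete images not referenced by any container
  docker system df             # report remaining disk usage
else
  echo "docker daemon not reachable; skipping prune"
fi
```

Note that `docker image prune -a` removes every image without a container using it, so expect re-pulls afterwards.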

@eexbee

This comment has been minimized.

eexbee commented Nov 26, 2018

Same issue on Mojave

@azamuddin

This comment has been minimized.

azamuddin commented Nov 27, 2018

Same issue on macOS Mojave, eating 4 GB of my RAM with no containers running.

@andrewalf

This comment has been minimized.

andrewalf commented Nov 27, 2018

Please fix memory and CPU usage; an idle Docker eats RAM and battery. macOS Mojave here too :(

@luisfdeandrade

This comment has been minimized.

luisfdeandrade commented Nov 28, 2018

Same issue here!

@mcgitty

This comment has been minimized.

mcgitty commented Nov 28, 2018

We can stop complaining. The first reply to this thread (ijc, Aug 9, 2016) stated that hogging the memory is expected hyperkit VM behavior.

I meant we may be asking the wrong team to fix this. Probably this one: https://github.com/moby/hyperkit

That said, the Docker for Mac team doesn't seem to care enough to reply here and promise to work with the hyperkit team.

@robotdan

This comment has been minimized.

robotdan commented Nov 28, 2018

This issue looks promising on the hyperkit project.
moby/hyperkit#231

@KyeRussell

This comment has been minimized.

KyeRussell commented Nov 30, 2018

@mcgitty I'm sure they'll give you your money back. 🙄

Please keep things professional.

@mcgitty

This comment has been minimized.

mcgitty commented Nov 30, 2018

@KyeRussell You mean they'll give me my "memory" back. I certainly hope so, soon.

I just want to remind everyone, this ticket is "Closed", so unlikely to get any attention from the team. We should follow the moby/hyperkit#231 issue pointed out by robotdan.

@shmink shmink referenced this issue Dec 11, 2018

Open

Containers running that don't exist locally. #3402
