High memory usage for docker daemon process #8502

Closed
cnf opened this issue Oct 10, 2014 · 17 comments · Fixed by #10347

@cnf (Contributor) commented Oct 10, 2014

The Docker process was using almost all of my memory (4 GB total):

$ top
PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND 
838 root      20   0 6698.0m 3.185g   1.7m S  0.3 82.5  10:34.55 docker

It sorted itself out (killing all my containers in the process), but not before bringing the entire system to its knees.

Any hints on how to debug/prevent this in the future?
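
In the meantime, a rough way to at least catch a recurrence (a minimal sketch: the process name comes from the top output above, and the threshold and log path are placeholders to adjust) is to sample the daemon's resident memory periodically:

#!/bin/bash
# Sketch: sample the Docker daemon's resident memory once a minute and record
# it, warning via syslog when it crosses a threshold. The process name
# "docker" matches the top output above; threshold and log path are
# placeholders.
THRESHOLD_KB=$((2 * 1024 * 1024))   # warn above roughly 2 GB of RSS
LOGFILE=/var/log/docker-rss.log

while true; do
    pid=$(pgrep -xo docker)
    if [ -n "$pid" ] && [ -r "/proc/$pid/status" ]; then
        rss_kb=$(awk '/^VmRSS:/ {print $2}' "/proc/$pid/status")
        echo "$(date) pid=$pid rss_kb=$rss_kb" >> "$LOGFILE"
        if [ "${rss_kb:-0}" -gt "$THRESHOLD_KB" ]; then
            logger -t docker-rss "docker daemon RSS is ${rss_kb} kB (pid $pid)"
        fi
    fi
    sleep 60
done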

Docker Info

# docker version
Client version: 1.2.0
Client API version: 1.14
Go version (client): go1.3.1
Git commit (client): fa7b24f
OS/Arch (client): linux/amd64
Server version: 1.2.0
Server API version: 1.14
Go version (server): go1.3.1
Git commit (server): fa7b24f
# docker info
Containers: 14
Images: 202
Storage Driver: btrfs
Execution Driver: native-0.2
Kernel Version: 3.13.0-36-generic
Operating System: Ubuntu 14.04.1 LTS
Username: ...
Registry: [https://index.docker.io/v1/]
@cnf (Contributor, Author) commented Oct 10, 2014

On closer inspection, it seems to have restarted itself, which would explain why all my containers were killed.

Same question still stands, though.

@unclejack (Contributor)

@cnf What was running on the Docker daemon? Was it running some containers which were printing a lot to stdout or stderr? Did you run a very large number of containers? Did you push/pull a lot of images?

@cnf (Contributor, Author) commented Oct 10, 2014

@unclejack As docker info shows: Containers: 14, Images: 202. Nine of those containers are running, and five are data containers.

There is some logging, but not a lot:

# find . -name '*-json.log' -exec ls -lh {} \;|awk '{print $5}'|grep -v '^0'
120K
29K
105M
24M
6.3M
56K
88K

@unclejack (Contributor)

@cnf Did you run docker logs on those containers a bunch of times? The only way this would happen is through the logging.

The memory you've seen wasn't memory allocated and still used by Docker itself. It was memory which was used at one point and wasn't released back to the OS by the Go runtime.

I suspect this is indeed caused by logging, because you have a log which is 105 MB on disk and you don't seem to have done much else that could have caused this (like pulling/pushing images, building a lot, and so on). Logging has been a known problem for a while (this includes writing a lot to stdout or stderr, and using docker logs).

Docker 1.3 won't have this problem anymore for logging, pushing, and pulling. I think those were the last areas of the code in 1.2 where there were problems with memory allocations.
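
As a stop-gap on 1.2, one option (a sketch, not something confirmed in this thread; it assumes the default /var/lib/docker/containers layout for the json-file logs) is to rotate the per-container logs with logrotate so no single log grows to hundreds of megabytes:

# Sketch: rotate the per-container json logs with logrotate. Paths assume the
# default /var/lib/docker/containers layout; copytruncate is used because the
# daemon keeps the log file open.
sudo tee /etc/logrotate.d/docker-container-logs >/dev/null <<'EOF'
/var/lib/docker/containers/*/*-json.log {
  daily
  rotate 7
  compress
  missingok
  copytruncate
}
EOF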

@cnf (Contributor, Author) commented Oct 10, 2014

I can't replicate it by running docker logs on the container with 105 MB worth of logs... and even those logs were accrued over a few weeks, nothing sudden.

@paultag commented Jan 6, 2015

I see this on my machines too. Both aufs and overlayfs.

From top, sorted by memory:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND     
  814 root      20   0 2437452 656112   8228 S   0.0 32.0  33:21.92 docker      
10901 root      20   0  157160  32012   6104 S   0.0  1.6   0:02.05 uwsgi       
24142 root      20   0  147696  30696   3336 S   0.0  1.5   0:03.76 moxie-serve 
32415 ntp       20   0  145044  20520   7412 S   0.0  1.0   0:08.32 irssi       
31987 root      20   0  198688  11200   9752 S   0.0  0.5   0:00.02 docker      

The problem is much worse with aufs. Overlayfs delays it, and only the daemon itself ends up with high memory usage.

@efuquen commented Jan 12, 2015

I'm using Docker 1.3.2 and I still have problems with the Docker daemon using large amounts of memory (I was having the same problems on 1.2). I'm running a cluster of 3 boxes which have the same containers running on all of them. The environments are identical: same AWS instance types, same OS image. And whenever we deploy, we run the exact same commands on all of them, resulting in the same number of docker pulls, the same containers running, the same number of images on each, etc.

But no matter what, after some time the Docker daemon on at least one of the boxes will start using a large amount of memory (4+ GB), while the others will stay normal (100-200 MB). The fact that the environments are very similar and yet the problem only manifests itself sporadically leads me to think whatever causes the high memory usage has some randomness to it, and is not directly correlated with the number of images/containers/logs/pulls/etc. Currently we end up just rebooting the box when this occurs, since we run an HA setup across the 3 boxes, but this is obviously less than ideal. Below is a graph of usable memory, showing the leaking Docker daemon box.

[graph: docker_high_memory_usage]

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root      1401  4.7 65.7 11550016 5038500 ?    Ssl   2014 2100:57 /usr/bin/docker --daemon --storage-driver=btrfs --host=fd:// --insecure-registry="0.0.0.0/0"
root      7752  0.0  1.0 807648 79648 ?        Sl    2014   8:58 node server.js
root     28247  0.0  1.0 997476 77532 ?        Sl    2014  14:41 node index.js
root     21993  0.6  0.9 789340 76524 ?        Sl    2014 122:29 gulp

@LK4D4 (Contributor) commented Jan 12, 2015

@efuquen Thanks for the report. Just to note: how do you run your containers? With some init system, in the foreground?

@efuquen commented Jan 12, 2015

@LK4D4 they're running as systemd services.

@LK4D4 (Contributor) commented Jan 12, 2015

@efuquen Thanks! Then this is probably because of the high volume of logs passing between the daemon and the client. We'll try to reduce the memory usage of that code path. We have some ideas already.

@efuquen commented Jan 12, 2015

@LK4D4 Great to hear. Just to be clear, when you say 'logs between daemon and client', do you mean output that would show up in docker logs, or some other logging? Are there any recommendations to mitigate the issue in the meantime? Finally, should I keep track of this ticket for status updates, or of any PR that gets opened to address it?

@LK4D4 (Contributor) commented Jan 12, 2015

@efuquen I'm saying that when you do docker run without -d, all of the container's output is streamed over the unix socket from the daemon to the client. You can try to reduce the logging output of your applications, or have them log to syslog through /dev/log. I know this is a lousy workaround, but right now I can't think of other fixes for high-output applications :(
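
To make that concrete, a sketch of both suggestions (the image name myapp is hypothetical, and it assumes the host runs a syslog daemon listening on /dev/log):

# Sketch: run detached (-d) so no client stays attached to the container's
# output, and bind-mount the host's syslog socket so the application can log
# through /dev/log instead of printing everything to stdout/stderr.
# "myapp" is a hypothetical image name; substitute your own.
docker run -d \
  --name myapp \
  -v /dev/log:/dev/log \
  myapp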

@jbiel (Contributor) commented Jan 24, 2015

+1. This one is affecting us.

[graph: memory usage]

Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8

Containers: 6
Images: 21
Storage Driver: aufs
 Root Dir: /mnt/var/lib/docker/aufs
 Dirs: 33
Execution Driver: native-0.2
Kernel Version: 3.13.0-43-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 2
Total Memory: 3.676 GiB
Name: ip-10-136-123-121
ID: ZEEF:OEBM:XTKF:DFDB:BYDE:3K7G:TNQ6:XL64:QU25:T5FH:HYA6:LHBG
WARNING: No swap limit support

@unclejack (Contributor)

This problem is the exact same one as #9139.

#10347 should help fix this problem for long running containers.

@jessfraz (Contributor)

Closing as a duplicate of #9139.

@priscofarina

Sorry, but I didn't get the point!
My problem is that I have several pods running on a K8s cluster, and I can see that all the pods together are using something like 4 GB of RAM, while the "dockerd" daemon is using 7 GB of RAM.
Is this normal?

@thaJeztah (Member)

@darkprisco This issue was created over three years ago, which means the code that was running at the time has likely been completely rewritten (or is not even in this repository anymore).

If you think there's a bug, and have information to reproduce/investigate, it's best to open a new issue with the information that's requested in the issue template.
