High memory usage for docker daemon process #8502
On closer inspection, it seems to have restarted itself, which would explain why all my containers were killed. The same question still stands, though.
@cnf What was running on the Docker daemon? Was it running some containers which were printing a lot to stdout or stderr? Did you run a very large number of containers? Did you push/pull a lot of images? |
@unclejack As `docker info` shows: Containers: 14, Images: 202. Nine of those containers are running, and five are data containers. There is some logging, but not a lot:
@cnf The memory you've seen wasn't memory allocated and still in use by Docker itself. It was memory which was used at one point and wasn't released back to the OS by the Go runtime. I suspect this is indeed caused by logging, because you have a log which is 105 MB on disk and you don't seem to have done much else that could have caused this (like pulling/pushing images, building a lot, and so on). Logging has been a known problem for a while, including containers writing a lot to stdout and stderr. Docker 1.3 won't have this problem any more for logging, pushing, and pulling; I think those were the last areas of the code in 1.2 with memory-allocation problems.
I can't replicate it by running
I see this on my machines too. Both aufs and overlayfs. From top, sorted by memory:
The problem is much worse with aufs. Overlayfs delays it, and only the daemon itself goes into high memory usage.
I'm using Docker 1.3.2 and I still have problems with the Docker daemon using large amounts of memory (I was having the same problems on 1.2). I'm running a cluster of 3 boxes which have the same containers running on all of them. The environments are identical: same AWS instance types, same OS image. Whenever we deploy, we run the exact same commands on all of them, resulting in the same number of docker pulls, the same containers running, the same number of images on each, etc. But no matter what, after some time at least one of the boxes' Docker daemons will start using a large amount of memory (4+ GB), while the others stay normal (100-200 MB). The fact that the environments are very similar and yet the problem only manifests sporadically leads me to think that whatever causes the high memory usage has some randomness to it, and is not directly correlated to the number of images/containers/logs/pulls/etc. Currently we just reboot the box when this occurs, since we run an HA setup across the 3 boxes, but this is obviously less than ideal. Below is a graph of usable memory, showing the leaking Docker daemon box.
@efuquen Thanks for the report. Just to note: how do you run your containers? With some init system, in the foreground?
@LK4D4 They're running as systemd services.
@efuquen Thanks! Then this is probably because of the large amount of log data passing between the daemon and the client. We'll try to reduce the memory usage of that code path; we have some ideas already.
@LK4D4 Great to hear. Just to be clear, when you say 'logs between daemon and client' do you mean output that would show up in |
@efuquen I'm saying that when you do
+1. This one is affecting us.
Closing as a duplicate of #9139.
Sorry guys, but I didn't get the point!
@darkprisco This issue was created over three years ago, which means the code that was running at the time has likely been completely rewritten (or may not even be in this repository anymore). If you think there's a bug, and have information to reproduce or investigate it, it's best to open a new issue with the information requested in the issue template.
The Docker daemon process was using almost all of my memory (4 GB total).
It sorted itself out (killing all my containers in the process), but not before bringing the entire system to its knees.
Any hints on how to debug/prevent this in the future?
Docker Info