memory leak in buffer grow #9139

Closed
e-max opened this Issue Nov 13, 2014 · 48 comments

e-max commented Nov 13, 2014

Hi!
I've experienced a problem with a memory leak which looks pretty similar to #8084 .

docker 1.3.0

go tool pprof on heap looks like:

Total: 8659.9 MB
     0.0   0.0%   0.0%   8658.9 100.0% runtime.gosched0
     0.0   0.0%  99.8%   8638.3  99.7% github.com/docker/docker/daemon.func·006
     4.5   0.1%   0.1%   8643.9  99.8% io.Copy
     0.0   0.0%  99.8%   8637.8  99.7% github.com/docker/docker/engine.(*Output).Write
     0.0   0.0%   0.1%   8639.9  99.8% bytes.(*Buffer).Write
     0.0   0.0%   0.1%   8639.9  99.8% bytes.(*Buffer).grow
  8639.9  99.8%  99.8%   8639.9  99.8% bytes.makeSlice

I don't know how to reproduce it with a simple example, but if you need any additional information I can gather it from our production project.

Contributor

anandkumarpatel commented Nov 13, 2014

+1 I am seeing goroutine leaks as well. I have seen my goroutine count climb to 10,000 within a week.
@e-max do you have instructions on how you got the data you did?
nm, found it: go tool pprof http://localhost:4343/debug/pprof/profile
when docker is run in debug mode
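
(Background for anyone gathering the same data: the /debug/pprof endpoints come from Go's standard net/http/pprof package, which a Go daemon exposes by importing it for its side effects. A minimal sketch, with an illustrative address rather than Docker's actual wiring:

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // side effect: registers /debug/pprof/* on the default mux
)

func main() {
    // Illustrative only; docker's debug mode presumably wires up something equivalent.
    log.Fatal(http.ListenAndServe("localhost:4343", nil))
}

With that running, go tool pprof http://localhost:4343/debug/pprof/heap fetches a heap profile, and .../debug/pprof/profile collects a 30-second CPU profile.)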

Contributor

LK4D4 commented Nov 13, 2014

Also, you can see line numbers with --lines or something like that.

e-max commented Nov 14, 2014

Sure. I've got something like this:

     0.0   0.0%   0.0%  26719.6 100.0% runtime.gosched0 /usr/lib/go/src/pkg/runtime/proc.c:1436
     0.0   0.0% 100.0%  26712.3 100.0% github.com/docker/docker/daemon.func·006 /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.3.0/work/docker-1.3.0/.gopath/src/github.com/docker/docker/daemon/attach.go:225
     0.0   0.0% 100.0%  26715.1 100.0% io.Copy /usr/lib/go/src/pkg/io/io.go:355
     0.0   0.0% 100.0%  26711.8 100.0% github.com/docker/docker/engine.(*Output).Write /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.3.0/work/docker-1.3.0/.gopath/src/github.com/docker/docker/engine/streams.go:90
     0.0   0.0%   0.0%  26715.6 100.0% bytes.(*Buffer).Write /usr/lib/go/src/pkg/bytes/buffer.go:127
     0.0   0.0%   0.0%  26715.6 100.0% bytes.(*Buffer).grow /usr/lib/go/src/pkg/bytes/buffer.go:99
 26715.6 100.0% 100.0%  26715.6 100.0% bytes.makeSlice /usr/lib/go/src/pkg/bytes/buffer.go:191
Contributor

LK4D4 commented Nov 18, 2014

I think I've found the leak: it's in pkg/ioutils/readers.go:bufReader. Under heavy output the client reads much more slowly than the output arrives (I used yes <long string> to reproduce this), and bufReader keeps growing its internal buffer, which causes the memory leak.
I have no idea what to do now; removing the bufReader layer can reduce the leak, but not fix it entirely.
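
To make the failure mode concrete, here is a minimal, self-contained sketch (my own illustration, not Docker's code) of the imbalance described above: each round the producer writes far more than the consumer drains, so the bytes.Buffer grows without bound, producing exactly the bytes.(*Buffer).grow / bytes.makeSlice growth the heap profiles point at.

package main

import (
    "bytes"
    "fmt"
)

func main() {
    var buf bytes.Buffer
    chunk := bytes.Repeat([]byte("="), 32*1024) // fast producer: 32 KiB per round
    out := make([]byte, 1024)                   // slow consumer: drains 1 KiB per round

    for i := 0; i < 1000; i++ {
        buf.Write(chunk) // triggers bytes.(*Buffer).grow -> bytes.makeSlice
        buf.Read(out)    // never catches up
    }
    fmt.Printf("still buffered after 1000 rounds: %d bytes\n", buf.Len())
}

Nothing here is leaked in the garbage-collector sense; the buffer is simply unbounded, so whenever the producer outpaces the consumer the heap keeps growing.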

adamhadani commented Dec 10, 2014

Hey,
we're running into a potential Docker memory leak here, and this is the most recent open, potentially relevant issue I found. Is there any way to check whether my situation matches what you're describing, e.g. based on pprof heap output?

Contributor

LK4D4 commented Dec 10, 2014

@adamhadani the best way to check is to run your containers in the background (-d) and see if the leak is gone.

adamhadani commented Dec 11, 2014

@LK4D4 thanks. Since I'm running all my containers by way of an upstart script I don't typically use -d. Is there a good way to combine -d with having the container run via upstart? I guess I could run 'docker run -d' and then use upstart's "expect fork"/"expect daemon" stanza?

Contributor

LK4D4 commented Dec 11, 2014

@adamhadani The main problem is that all the forking happens in the daemon, not in the client. I know nothing about upstart, but with openrc, for example, it is possible to write an init script based on the --cidfile of the docker container. Also, docker wait can be useful, but I don't know how to handle stop with it.

jszwedko commented Dec 11, 2014

We are seeing the same issue with log-heavy applications. We also use upstart to manage the running container in the foreground.

Contributor

LK4D4 commented Dec 11, 2014

@jszwedko Yeah, I know, this is a pretty bad thing. For now I can suggest redirecting your logs to syslog and trying to keep stdout/stderr clean. We'll try to reduce the leak in the next release.

jszwedko commented Dec 11, 2014

Thanks @LK4D4. We were able to mitigate it for now by reducing the log volume.

We could log directly to syslog, but we tend to prefer having the application log to stdout/stderr and letting the supervising process decide what to do with the output (e.g. log to file, pipe to logger, etc.).

ernstnaezer commented Dec 17, 2014

I might have a simple reproduction scenario.

When I docker run -d a container based on the following Dockerfile:

FROM busybox
CMD while true; do echo -n =; done;

it doesn't take very long before the daemon crashes with an out-of-memory error.

Running a version that prints a newline on each row seems to drastically reduce the memory footprint. I suspect this has something to do with stdout being line-buffered on a terminal.

Docker is started via upstart and was thus recovered; alas, the running containers had died.

fatal error: runtime: out of memory

goroutine 61 [running]:
runtime.throw(0x127cc77)
    /usr/local/go/src/pkg/runtime/panic.c:520 +0x69 fp=0x7faf0c2679a8 sp=0x7faf0c267990
runtime.SysMap(0xc239980000, 0x6060000, 0x2f00, 0x12ad4d8)
    /usr/local/go/src/pkg/runtime/mem_linux.c:147 +0x93 fp=0x7faf0c2679d8 sp=0x7faf0c2679a8
runtime.MHeap_SysAlloc(0x12b94c0, 0x6060000)
    /usr/local/go/src/pkg/runtime/malloc.goc:616 +0x15b fp=0x7faf0c267a30 sp=0x7faf0c2679d8
MHeap_Grow(0x12b94c0, 0x3030)
    /usr/local/go/src/pkg/runtime/mheap.c:319 +0x5d fp=0x7faf0c267a70 sp=0x7faf0c267a30
MHeap_AllocLocked(0x12b94c0, 0x302e, 0x7faf00000000)
    /usr/local/go/src/pkg/runtime/mheap.c:222 +0x379 fp=0x7faf0c267ab0 sp=0x7faf0c267a70
runtime.MHeap_Alloc(0x12b94c0, 0x302e, 0x100000000)
    /usr/local/go/src/pkg/runtime/mheap.c:178 +0x7b fp=0x7faf0c267ad8 sp=0x7faf0c267ab0
largealloc(0x9, 0x7faf0c267b88)
    /usr/local/go/src/pkg/runtime/malloc.goc:224 +0xa2 fp=0x7faf0c267b20 sp=0x7faf0c267ad8
runtime.mallocgc(0x605c000, 0x0, 0x9)
    /usr/local/go/src/pkg/runtime/malloc.goc:169 +0xb6 fp=0x7faf0c267b88 sp=0x7faf0c267b20
runtime.stringtoslicebyte(0xc22d928000, 0x605b2c3, 0x0, 0x0, 0x0)
    /usr/local/go/src/pkg/runtime/string.goc:320 +0x65 fp=0x7faf0c267bb0 sp=0x7faf0c267b88
github.com/docker/docker/pkg/broadcastwriter.(*BroadcastWriter).Write(0xc208207c40, 0xc20821a000, 0x8000, 0x8000, 0x8000, 0x0, 0x0)
    /go/src/github.com/docker/docker/pkg/broadcastwriter/broadcastwriter.go:54 +0x315 fp=0x7faf0c267e40 sp=0x7faf0c267bb0
io.Copy(0x7faf0e518f08, 0xc208207c40, 0x7faf0e50c3a0, 0xc20803d5e0, 0x60532c3, 0x0, 0x0)
    /usr/local/go/src/pkg/io/io.go:355 +0x27b fp=0x7faf0c267f08 sp=0x7faf0c267e40
os/exec.func·003(0x0, 0x0)
    /usr/local/go/src/pkg/os/exec/exec.go:214 +0x7e fp=0x7faf0c267f68 sp=0x7faf0c267f08
os/exec.func·004(0xc20816dbc0)
    /usr/local/go/src/pkg/os/exec/exec.go:321 +0x2c fp=0x7faf0c267fa0 sp=0x7faf0c267f68
runtime.goexit()
    /usr/local/go/src/pkg/runtime/proc.c:1445 fp=0x7faf0c267fa8 sp=0x7faf0c267fa0
created by os/exec.(*Cmd).Start
    /usr/local/go/src/pkg/os/exec/exec.go:322 +0x931

system and version information

Docker version  : 1.4.0, build 4595d4f
Distributor ID  : Ubuntu
Description     : Ubuntu 14.04.1 LTS
Release         : 14.04
Codename        : trusty
Memory          : 1Gb

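The newline observation above fits the writer in that stack trace: if log framing splits on '\n' and keeps everything after the last newline buffered until one arrives, a stream that never emits a newline accumulates forever. A tiny sketch of that pattern (illustrative only, not the actual broadcastwriter code):

package main

import (
    "bytes"
    "fmt"
)

// lineWriter flushes complete lines and keeps the trailing partial
// line buffered until a '\n' shows up.
type lineWriter struct {
    buf bytes.Buffer
}

func (w *lineWriter) Write(p []byte) (int, error) {
    w.buf.Write(p)
    for {
        i := bytes.IndexByte(w.buf.Bytes(), '\n')
        if i < 0 {
            break // no newline yet: the partial line stays buffered
        }
        w.buf.Next(i + 1) // consume one complete line (discarded in this sketch)
    }
    return len(p), nil
}

func main() {
    w := &lineWriter{}
    for i := 0; i < 100000; i++ {
        w.Write([]byte("=")) // like `while true; do echo -n =; done`: never a '\n'
    }
    fmt.Println("bytes stuck in the buffer:", w.buf.Len())
}
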
amasad commented Mar 5, 2015

I start/have a large number of containers that I talk to via stdin/stdout, and it seems like I may be hitting this bug. Is there any fix in the works for this? Or should I just rewrite the protocol to talk over sockets?

Contributor

unclejack commented Mar 5, 2015

@amasad #10347 has been proposed as a fix to this problem. Before that gets merged, you might want to come up with a workaround.

Contributor

LK4D4 commented Mar 5, 2015

@amasad It would be cool if you could try #10347 in your environment.

@tfoote tfoote referenced this issue in ros-infrastructure/ros_buildfarm Mar 18, 2015

Closed

slaves running out of memory #45

@spf13 spf13 added kind/bug exp/expert and removed exp/expert bug labels Mar 21, 2015

balboah commented Mar 23, 2015

I believe this is the problem when my service is logging too rapidly as well.
After some time, if I try to start a new container, I get

FORWARD -i docker0 -o docker0 -p tcp -s 172.17.0.8 --dport 1234 -d 172.17.0.3 -j ACCEPT:  (fork/exec /usr/sbin/iptables: cannot allocate memory)"

and I have to restart the docker daemon itself to get things working again.

I have experienced this on 1.4.1 as well as 1.5.
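
A note on why a memory leak surfaces as an iptables error: the daemon shells out to /usr/sbin/iptables via fork/exec to set up forwarding rules, and once the daemon's own address space has ballooned, that fork can fail with ENOMEM even though the system still has free RAM. The call shape is roughly this (illustrative, not libnetwork's exact code):

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // When the parent process is huge, fork/exec can return
    // "cannot allocate memory" regardless of actual free RAM.
    out, err := exec.Command("/usr/sbin/iptables", "-L", "-n").CombinedOutput()
    if err != nil {
        fmt.Println("iptables failed:", err)
        return
    }
    fmt.Printf("%s", out)
}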

andrewmichaelsmith commented Mar 23, 2015

Our services can be quite STDOUT-heavy; we're finding we have to restart the docker daemon in our test environment about once a week because of this bug. Same symptom as @balboah (iptables cannot allocate memory).

We will probably work around it by just writing our logs straight to disk.

Member

thaJeztah commented Mar 23, 2015

@unclejack @LK4D4 Will the new logging drivers in 1.6 (#10568 and the soon-to-be-added #11458 and #11485) also help in this case (i.e. containers producing heavy log output)?

Contributor

LK4D4 commented Mar 23, 2015

@thaJeztah If you rely on syslog instead of upstart/systemd capturing the foreground output, then yes.

balboah commented Mar 23, 2015

In my case, I use logstash to collect the docker logs of all running containers, which is nice as the service itself doesn't have to know about any external logging services.
But this doesn't play nicely with this bug when there are chatty logs :)

@jessfraz jessfraz closed this in #10347 Mar 23, 2015

maccman commented Mar 24, 2015

We've just been hit by this bug.

Contributor

jessfraz commented Mar 24, 2015

Have you tried it with 1.6.0 RC1? #11635 (comment)

tfoote commented Mar 27, 2015

I have tested 1.6.0-rc2 and this problem is not fixed.

# docker run -ti ubuntu:trusty bash
FATA[0028] Error response from daemon: Cannot start container 02533d9e9afcc81818d3d71aa7498e0f41e7904483fd041fa45a0638423a5d3a: [8] System error: fork/exec /usr/bin/docker: cannot allocate memory 

The memory usage has blown up.

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                   
 4374 root      20   0 11.618g 9.680g   8484 S  98.1 65.9 566:26.74 docker     

It had reached 6 GB of RAM after running CI-type jobs for 10 hours. I quickly used the example from @enix above to push it up to 9 GB of RAM, which gets me to the error state on my system.

Here's my docker info and version.

root@ip-172-31-4-7:/tmp/testdocker# docker info
Containers: 612
Images: 8181
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 9489
Execution Driver: native-0.2
Kernel Version: 3.13.0-44-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 4
Total Memory: 14.69 GiB
Name: ip-172-31-4-7
ID: 6KW6:XMAA:BLMZ:F5U2:QMGY:JNZR:FCZ5:FVDU:GJUS:KSHQ:FSR3:AO62
WARNING: No swap limit support
root@ip-172-31-4-7:/tmp/testdocker# docker version
Client version: 1.6.0-rc2
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): c5ee149
OS/Arch (client): linux/amd64
Server version: 1.6.0-rc2
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): c5ee149
OS/Arch (server): linux/amd64

@tfoote tfoote referenced this issue Mar 27, 2015

Merged

Bump v1.6.0 #11635

Mulkave commented Mar 28, 2015

Same issue here:

$ sudo docker --debug info
Containers: 6
Images: 230
Storage Driver: devicemapper
 Pool Name: docker-202:32-131075-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file:
 Metadata file:
 Data Space Used: 5.428 GB
 Data Space Total: 107.4 GB
 Metadata Space Used: 13.46 MB
 Metadata Space Total: 2.147 GB
 Udev Sync Supported: true
 Library Version: 1.02.89-RHEL6 (2014-09-01)
Execution Driver: native-0.2
Kernel Version: 3.14.33-26.47.amzn1.x86_64
Operating System: Amazon Linux AMI 2014.09
CPUs: 1
Total Memory: 3.68 GiB
Name: ip-172-31-56-219
ID: EPDP:KOMN:DHEL:6O3N:BJAT:3OE7:DWV5:UPBU:KI4X:U2OQ:DEKO:6RIU
Debug mode (server): false
Debug mode (client): true
Fds: 79
Goroutines: 71
EventsListeners: 0
Init SHA1: 4408d8ae1311042262432cfb5757dabc38e0e074
Init Path: /usr/libexec/docker/dockerinit
Docker Root Dir: /data/docker
Contributor

unclejack commented Mar 28, 2015

The merged fix has lowered the rate at which memory usage increases, even for the specific case mentioned by @enix.

One particular use case which is now seeing lower memory usage is interactive containers running something (such as irssi) which isn't writing a lot to stdout and/or stderr, but which was still seeing continuous unbounded buffer growth when running uninterrupted for a month or more.

Here are some recommendations on how to improve this right away:

  • for containers running only in the foreground or interactively, use --log-driver none
  • for containers running only in the background (-d) which require logs, use --log-driver syslog and don't attach to the container

This buffering needs to be enabled by default so that the applications running in containers don't get blocked on stdout or stderr. Some applications don't handle this properly and they crash. In the future, we might have options to disable this buffering in order to avoid this problem in environments running a lot of containers, but that will introduce some necessary restrictions.

I'll be sending some more PRs to help improve this. One of those PRs should be in 1.6.0. I'm very sorry for any frustration and disappointment caused by this problem.
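
For readers wondering what disabling this buffering would trade away: without an elastic buffer, a writer must either block the container's stdout/stderr or drop data when the reader lags. A purely illustrative sketch of the dropping alternative, a capped buffer that discards the oldest bytes (a real implementation would likely use a ring buffer; this is not anything Docker ships):

package main

import "fmt"

// boundedBuf keeps at most max bytes, discarding the oldest data on
// overflow, so writers never block and memory stays bounded, at the
// cost of losing log data.
type boundedBuf struct {
    data []byte
    max  int
}

func (b *boundedBuf) Write(p []byte) (int, error) {
    b.data = append(b.data, p...)
    if over := len(b.data) - b.max; over > 0 {
        b.data = b.data[over:] // drop the oldest bytes
    }
    return len(p), nil
}

func main() {
    b := &boundedBuf{max: 16}
    for i := 0; i < 10; i++ {
        fmt.Fprintf(b, "line %d\n", i)
    }
    fmt.Printf("kept %d bytes: %q\n", len(b.data), b.data) // only the tail survives
}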

miracle2k commented Apr 25, 2015

I've had trouble with docker's memory usage growing (until I can't start new containers anymore) for ages, and I suspect that my container logging is on the verbose side. What is puzzling to me is that I am running an old installation of docker 0.7.2, with one container that is truly writing a lot to stdout, and I've never had any problems with that machine whatsoever. It's not breaking a sweat.


Mulkave commented Apr 26, 2015

@miracle2k In the case of lots of logging (disk space usage instead of RAM) you might want to look into logrotate. Partially related to #7333

What you can do is add something like this to /etc/logrotate.d/docker (adjust it to whatever makes sense for your case):

/var/lib/docker/containers/*/*.log {
  missingok
  notifempty
  copytruncate
  rotate 1
  size 10M
}
Contributor

unclejack commented May 26, 2015

Docker 1.7 will have all the fixes for these memory usage issues. Docker 1.6 was released a while ago, and the minor (1.6.x) releases will not incorporate such changes.

Should you be eager to try out these fixes, binaries can be found at https://master.dockerproject.com/.

tfoote commented Jun 17, 2015

I just tried Docker 1.7.0-rc5 and this is not completely resolved. The memory drops when I upgrade to 1.7 and restart my CI jobs. Focusing the load on one machine, you can see the memory usage growing significantly, breaking 4 GB of resident memory. In the next several hours I expect it to fail again once docker passes 6 GB of memory or so.
[attached graph: docker 1.7 memory inflation over time]

root@ip-172-31-22-40:~# docker info
Containers: 781
Images: 5822
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 7400
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-48-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 4
Total Memory: 14.69 GiB
Name: ip-172-31-22-40
ID: 2LNF:6XTL:GR55:VROE:PSKN:JCBS:7ITX:JUCE:BQ7N:RNYX:JAS5:YM53
WARNING: No swap limit support
root@ip-172-31-22-40:~# docker version
Client version: 1.7.0-rc5
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): f417602
OS/Arch (client): linux/amd64
Server version: 1.7.0-rc5
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): f417602
OS/Arch (server): linux/amd64
unclejack commented Jun 17, 2015

@tfoote Please provide the following bits of information:

  1. How many containers did you run over those two hours when memory usage rose sharply (9 PM to 11 PM)?
  2. What logging driver are you using in Docker for those containers?
  3. How much is every container logging? Less than 1 MB? More than 5 MB?
  4. What other operations did you run on the Docker daemon? docker pull? docker push? docker rmi?
  5. Do you have a script you can use to reproduce the memory usage growth you've seen?

The syslog logging driver is known to have memory usage problems. This is going to be fixed as soon as possible.

After you provide the requested information, I'll try to reproduce and fix the problem. The problems reported on this issue have been fixed.

adamkdean commented Jun 25, 2015

I'm currently having this issue. I'm running the latest Rancher on three Ubuntu 14.04.2 boxes, and through Rancher I'm running some standard Ubuntu 14.04.2 containers. If I leave this for three or four days, some of the servers consume all their memory and become unresponsive until they are rebooted.

I'd provide more information but I'm unable to do any investigation as the machines have no memory remaining. All I could get before it died was this:

adam@devplatform06:~$ free -m
             total       used       free     shared    buffers     cached
Mem:          2001       1929         71          0          2         24
-/+ buffers/cache:       1903         98
Swap:         1019        754        265

Contributor

unclejack commented Jun 25, 2015

@adamkdean Can you tell me whether those containers log a lot, or anything at all about what they're doing, please?
Knowing the docker version and the logging driver you're using would help as well. Docker 1.6.2 is most certainly going to run into this problem. Docker 1.7.0 shouldn't have as many problems with memory usage.

The syslog logging driver is still having problems, so that can be a factor as well.

adamkdean commented Jun 25, 2015

Actually, now that you mention it, rancher/agent:v0.7.9 is logging an awful lot. And these are running on 1.6.2. I'll try again with 1.7.0 and see how they do after being left for a few days.

If there's anything else I can provide, let me know.

Contributor

unclejack commented Jun 25, 2015

@adamkdean Can you tell me what logging driver is being used, please? I haven't used Rancher so far and I'm not sure what logging driver it uses. How many containers were you running?

Docker 1.6.2 has memory usage problems related to logging, and it'll have these problems forever.

miracle2k commented Jun 25, 2015

Note that a couple of comments above it was recommended that the syslog driver be used instead of the json log. Has this recommendation changed with 1.7?

Contributor

unclejack commented Jun 25, 2015

@miracle2k Some bugs were discovered in the syslog package we use. I'll investigate that as well.

adamkdean commented Jun 25, 2015

@unclejack The logs look to be saved to /var/lib/docker/containers/CID/CID-json.log, which makes me think it's using the JSON log driver. Is that right?

FYI I've updated these servers to 1.7.0 now, so I'll give that a go. They weren't doing much, they were just for R&D, but I'll leave them over the weekend and check on Monday to see how they're doing.

Contributor

jbiel commented Jun 25, 2015

@adamkdean - yes, I believe that's correct. FWIW, we use the default logging driver (JSON) and we have an app that spits a lot of info to standard out. Prior to 1.7.0 we had to restart docker nightly in order to keep memory usage under control. We've had 1.7.0 running for ~7 days now and so far the memory leak issue has vastly improved, maybe even been resolved.

adamkdean commented Jun 25, 2015

@jbiel That's positive news then, thanks for the information. It was definitely worrying to come back to a cluster and find the hosts unresponsive when all they were running was the clustering software itself!

Contributor

chenchun commented Jul 17, 2015

@enix, I think we could use ReadLine instead of ReadBytes to fix your case; see #14702
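
For context on why ReadLine helps here: bufio.(*Reader).ReadBytes keeps allocating until it finds the delimiter, so a stream that never emits a newline grows without bound, whereas ReadLine returns at most one internal buffer's worth per call and signals truncation via isPrefix. A small sketch of the difference, my own illustration rather than the #14702 patch itself:

package main

import (
    "bufio"
    "fmt"
    "strings"
)

func main() {
    // A "line" with no newline, like the endless `echo -n =` repro above.
    input := strings.Repeat("=", 1<<20) // 1 MiB, no '\n'

    // ReadBytes accumulates until it sees the delimiter (or EOF):
    // unbounded allocation on a pathological stream.
    b, _ := bufio.NewReaderSize(strings.NewReader(input), 4096).ReadBytes('\n')
    fmt.Println("ReadBytes returned", len(b), "bytes in one slice")

    // ReadLine hands back at most one buffer per call and reports the
    // truncation, so the caller can process bounded chunks instead.
    r := bufio.NewReaderSize(strings.NewReader(input), 4096)
    line, isPrefix, _ := r.ReadLine()
    fmt.Println("ReadLine returned", len(line), "bytes, isPrefix =", isPrefix)
}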

Contributor

jbiel commented Jul 21, 2015

This is 100% fixed for us; no more leaks. 👍

Member

thaJeztah commented Jul 21, 2015

thanks for reporting back @jbiel! good to hear

adamkdean commented Jul 21, 2015

Not a problem for us either. 100% fixed now.


Trane9991 commented Dec 23, 2015

This issue is still present with:

Containers: 39
Images: 118
Server Version: 1.9.1
Storage Driver: devicemapper
 Pool Name: docker-202:1-263804-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: 
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 8.579 GB
 Data Space Total: 107.4 GB
 Data Space Available: 10.41 GB
 Metadata Space Used: 10.53 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.137 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.10-17.31.amzn1.x86_64
Operating System: Amazon Linux AMI 2015.09
CPUs: 2
Total Memory: 3.862 GiB
Name: 
ID: 7UQX:KJKK:NQPV:A7CO:HZ2G:BOMI:XQ2D:UKXF:JHDI:K7EB:R3OW:L25N

and

Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5/1.9.1
 Built:        
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5/1.9.1
 Built:        
 OS/Arch:      linux/amd64

Running an endless C loop in the container leads to a docker crash:

#include <stdio.h>

int main() {
    while (1) {
        printf("hello");
    }
}

With the default logging driver it happens in about 30 seconds; with --log-driver=none docker crashes after about 2 minutes:

fatal error: runtime: out of memory

runtime stack:
runtime.SysMap(0xc2e6590000, 0x2a00000, 0xc208cd0200, 0x1cbb098)
    /usr/lib/golang/src/runtime/mem_linux.c:149 +0x98
runtime.MHeap_SysAlloc(0x1cc0780, 0x2a00000, 0x4cac62)
    /usr/lib/golang/src/runtime/malloc.c:284 +0x124
runtime.MHeap_Alloc(0x1cc0780, 0x14f9, 0x100000000, 0xc2089466c0)
    /usr/lib/golang/src/runtime/mheap.c:240 +0x66

goroutine 3063 [running]:
runtime.switchtoM()
    /usr/lib/golang/src/runtime/asm_amd64.s:198 fp=0xc208106a70 sp=0xc208106a68
runtime.mallocgc(0x29f2000, 0x0, 0x3, 0x0)
    /usr/lib/golang/src/runtime/malloc.go:199 +0x9f3 fp=0xc208106b20 sp=0xc208106a70
runtime.rawmem(0x29f2000, 0x29f2000)
    /usr/lib/golang/src/runtime/malloc.go:371 +0x39 fp=0xc208106b48 sp=0xc208106b20
runtime.growslice(0xf2fc60, 0xc208ba6800, 0x400, 0x400, 0x29f1800, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/runtime/slice.go:83 +0x237 fp=0xc208106ba8 sp=0xc208106b48
github.com/docker/docker/pkg/tailfile.TailFile(0x7fc3956b40e0, 0xc20dc661f0, 0x100, 0x0, 0x0, 0x0, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/tailfile/tailfile.go:53 +0x738 fp=0xc208106cd8 sp=0xc208106ba8
github.com/docker/docker/daemon/logger/jsonfilelog.tailFile(0x7fc3956b40e0, 0xc20dc661f0, 0xc20dde7840, 0x100, 0x0, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/logger/jsonfilelog/jsonfilelog.go:311 +0x9e fp=0xc208106e18 sp=0xc208106cd8
github.com/docker/docker/daemon/logger/jsonfilelog.(*JSONFileLogger).readLogs(0xc20c41afc0, 0xc20dde7840, 0x0, 0x0, 0x0, 0x100, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/logger/jsonfilelog/jsonfilelog.go:283 +0x512 fp=0xc208106fa8 sp=0xc208106e18
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:2232 +0x1 fp=0xc208106fb0 sp=0xc208106fa8
created by github.com/docker/docker/daemon/logger/jsonfilelog.(*JSONFileLogger).ReadLogs
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/logger/jsonfilelog/jsonfilelog.go:250 +0x140

goroutine 1 [chan receive, 7 minutes]:
main.(*DaemonCli).CmdDaemon(0xc2084fd2f0, 0xc20800a020, 0x2, 0x2, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/docker/daemon.go:308 +0x20a7
reflect.callMethod(0xc2084dff50, 0xc208dedce8)
    /usr/lib/golang/src/reflect/value.go:605 +0x179
reflect.methodValueCall(0xc20800a020, 0x2, 0x2, 0x1, 0xc2084dff50, 0x0, 0x0, 0xc2084dff50, 0x0, 0x4e169f, ...)
    /usr/lib/golang/src/reflect/asm_amd64.s:29 +0x36
github.com/docker/docker/cli.(*Cli).Run(0xc2084dfe90, 0xc20800a010, 0x3, 0x3, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/cli/cli.go:89 +0x38e
main.main()
    /builddir/build/BUILD/docker-1.9.1/docker/docker.go:65 +0x418

goroutine 5 [syscall, 8 minutes]:
os/signal.loop()
    /usr/lib/golang/src/os/signal/signal_unix.go:21 +0x1f
created by os/signal.init·1
    /usr/lib/golang/src/os/signal/signal_unix.go:27 +0x35

goroutine 17 [syscall, 8 minutes, locked to thread]:
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:2232 +0x1

goroutine 13 [IO wait, 1 minutes]:
net.(*pollDesc).Wait(0xc208080bc0, 0x72, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc208080bc0, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).accept(0xc208080b60, 0x0, 0x7fc398b4c580, 0xc208cb2038)
    /usr/lib/golang/src/net/fd_unix.go:419 +0x40b
net.(*UnixListener).AcceptUnix(0xc20808ea40, 0xc208cbe120, 0x0, 0x0)
    /usr/lib/golang/src/net/unixsock_posix.go:282 +0x56
net.(*UnixListener).Accept(0xc20808ea40, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/net/unixsock_posix.go:293 +0x4c
github.com/docker/docker/pkg/listenbuffer.(*defaultListener).Accept(0xc20808ea60, 0x0, 0x0, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/listenbuffer/buffer.go:71 +0x67
net/http.(*Server).Serve(0xc20800c900, 0x7fc395696490, 0xc20808ea60, 0x0, 0x0)
    /usr/lib/golang/src/net/http/server.go:1728 +0x92
github.com/docker/docker/api/server.(*HTTPServer).Serve(0xc20808eca0, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/api/server/server.go:113 +0x4d
github.com/docker/docker/api/server.func·006(0xc20808eca0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/api/server/server.go:86 +0x163
created by github.com/docker/docker/api/server.(*Server).ServeAPI
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/api/server/server.go:90 +0x144

goroutine 10 [chan receive, 8 minutes]:
github.com/docker/docker/api/server.(*Server).ServeAPI(0xc20800b900, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/api/server/server.go:94 +0x1b6
main.func·007()
    /builddir/build/BUILD/docker-1.9.1/docker/daemon.go:255 +0x3b
created by main.(*DaemonCli).CmdDaemon
    /builddir/build/BUILD/docker-1.9.1/docker/daemon.go:261 +0x1571

goroutine 11 [chan receive, 8 minutes]:
github.com/docker/docker/daemon.func·030()
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/debugtrap_unix.go:17 +0x5c
created by github.com/docker/docker/daemon.setupDumpStackTrap
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/debugtrap_unix.go:20 +0x18e

goroutine 16 [select, 4 minutes]:
github.com/docker/libnetwork.(*controller).watchLoop(0xc2080a8000)
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/docker/libnetwork/store.go:398 +0x13f
created by github.com/docker/libnetwork.(*controller).startWatch
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/docker/libnetwork/store.go:412 +0xf0

goroutine 19 [IO wait, 7 minutes]:
net.(*pollDesc).Wait(0xc2088992c0, 0x72, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc2088992c0, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).readMsg(0xc208899260, 0xc208877330, 0x10, 0x10, 0xc208871220, 0x1000, 0x1000, 0xffffffffffffffff, 0x0, 0x0, ...)
    /usr/lib/golang/src/net/fd_unix.go:296 +0x54e
net.(*UnixConn).ReadMsgUnix(0xc2080364c8, 0xc208877330, 0x10, 0x10, 0xc208871220, 0x1000, 0x1000, 0x51, 0xc2088770f4, 0x4, ...)
    /usr/lib/golang/src/net/unixsock_posix.go:147 +0x167
github.com/godbus/dbus.(*oobReader).Read(0xc208871200, 0xc208877330, 0x10, 0x10, 0xc208871200, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/godbus/dbus/transport_unix.go:21 +0xc5
io.ReadAtLeast(0x7fc3956a5698, 0xc208871200, 0xc208877330, 0x10, 0x10, 0x10, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/io/io.go:298 +0xf1
io.ReadFull(0x7fc3956a5698, 0xc208871200, 0xc208877330, 0x10, 0x10, 0x51, 0x0, 0x0)
    /usr/lib/golang/src/io/io.go:316 +0x6d
github.com/godbus/dbus.(*unixTransport).ReadMessage(0xc2088ab5c0, 0xc2085104b0, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/godbus/dbus/transport_unix.go:85 +0x1bf
github.com/godbus/dbus.(*Conn).inWorker(0xc208518480)
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/godbus/dbus/conn.go:248 +0x58
created by github.com/godbus/dbus.(*Conn).Auth
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/godbus/dbus/auth.go:118 +0xe84

goroutine 20 [chan receive, 7 minutes]:
github.com/godbus/dbus.(*Conn).outWorker(0xc208518480)
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/godbus/dbus/conn.go:370 +0x58
created by github.com/godbus/dbus.(*Conn).Auth
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/godbus/dbus/auth.go:119 +0xea1

goroutine 21 [chan receive, 7 minutes]:
github.com/docker/libnetwork/iptables.signalHandler()
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/docker/libnetwork/iptables/firewalld.go:92 +0x57
created by github.com/docker/libnetwork/iptables.FirewalldInit
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/docker/libnetwork/iptables/firewalld.go:48 +0x185

goroutine 32 [IO wait, 7 minutes]:
net.(*pollDesc).Wait(0xc2087a1b80, 0x72, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc2087a1b80, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).accept(0xc2087a1b20, 0x0, 0x7fc398b4c580, 0xc20883f4e0)
    /usr/lib/golang/src/net/fd_unix.go:419 +0x40b
net.(*UnixListener).AcceptUnix(0xc2087f5b20, 0x10, 0x0, 0x0)
    /usr/lib/golang/src/net/unixsock_posix.go:282 +0x56
net.(*UnixListener).Accept(0xc2087f5b20, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/net/unixsock_posix.go:293 +0x4c
github.com/docker/libnetwork.(*controller).acceptClientConnections(0xc2080a8000, 0xc2087a1ab0, 0x63, 0x7fc395695498, 0xc2087f5b20)
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/docker/libnetwork/sandbox_externalkey.go:138 +0x87
created by github.com/docker/libnetwork.(*controller).startExternalKeyListener
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/docker/libnetwork/sandbox_externalkey.go:132 +0x295

goroutine 45 [chan receive, 7 minutes]:
database/sql.(*DB).connectionOpener(0xc20881c960)
    /usr/lib/golang/src/database/sql/sql.go:589 +0x4c
created by database/sql.Open
    /usr/lib/golang/src/database/sql/sql.go:452 +0x31c

goroutine 46 [chan receive]:
github.com/docker/docker/daemon.(*statsCollector).run(0xc20884e2a0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/stats_collector_unix.go:93 +0xb2
created by github.com/docker/docker/daemon.newStatsCollector
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/stats_collector_unix.go:33 +0x116

goroutine 47 [chan receive, 3 minutes]:
github.com/docker/docker/daemon.(*Daemon).execCommandGC(0xc2080432c0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/exec.go:300 +0x8c
created by github.com/docker/docker/daemon.NewDaemon
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/daemon.go:827 +0x2dbc

goroutine 3082 [chan receive, 4 minutes]:
github.com/docker/docker/daemon.func·036()
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/monitor.go:250 +0x49
created by github.com/docker/docker/daemon.(*containerMonitor).callback
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/monitor.go:254 +0x100

goroutine 3081 [syscall, 4 minutes]:
syscall.Syscall(0x0, 0xe, 0xc20894ef50, 0x8, 0x0, 0x0, 0x50c421)
    /usr/lib/golang/src/syscall/asm_linux_amd64.s:21 +0x5
syscall.read(0xe, 0xc20894ef50, 0x8, 0x8, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/syscall/zsyscall_linux_amd64.go:867 +0x6e
syscall.Read(0xe, 0xc20894ef50, 0x8, 0x8, 0xc200000001, 0x0, 0x0)
    /usr/lib/golang/src/syscall/syscall_unix.go:136 +0x58
os.(*File).read(0xc20dc66078, 0xc20894ef50, 0x8, 0x8, 0x1caad00, 0x0, 0x0)
    /usr/lib/golang/src/os/file_unix.go:191 +0x5e
os.(*File).Read(0xc20dc66078, 0xc20894ef50, 0x8, 0x8, 0x9731cd, 0x0, 0x0)
    /usr/lib/golang/src/os/file.go:95 +0x91
github.com/opencontainers/runc/libcontainer.func·009()
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/opencontainers/runc/libcontainer/notify_linux.go:51 +0x18c
created by github.com/opencontainers/runc/libcontainer.notifyOnOOM
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/opencontainers/runc/libcontainer/notify_linux.go:61 +0x887

goroutine 3080 [select]:
github.com/docker/libnetwork/osl.removeUnusedPaths()
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/docker/libnetwork/osl/namespace_linux.go:73 +0x48b
created by github.com/docker/libnetwork/osl.createBasePath
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/docker/libnetwork/osl/namespace_linux.go:58 +0xb1

goroutine 3079 [semacquire, 1 minutes]:
sync.(*Cond).Wait(0xc20850edb0)
    /usr/lib/golang/src/sync/cond.go:62 +0x9e
io.(*pipe).read(0xc20850ed80, 0xc208500000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/io/pipe.go:52 +0x303
io.(*PipeReader).Read(0xc20dc660e8, 0xc208500000, 0x8000, 0x8000, 0x1, 0x0, 0x0)
    /usr/lib/golang/src/io/pipe.go:134 +0x5b
bufio.(*Reader).fill(0xc208bbd1a0)
    /usr/lib/golang/src/bufio/bufio.go:97 +0x1ce
bufio.(*Reader).WriteTo(0xc208bbd1a0, 0x7fc3942f9d58, 0xc208cbf040, 0x263, 0x0, 0x0)
    /usr/lib/golang/src/bufio/bufio.go:449 +0x27e
io.Copy(0x7fc3942f9d58, 0xc208cbf040, 0x7fc3956a6498, 0xc208bbd1a0, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/io/io.go:354 +0xb2
github.com/docker/docker/pkg/pools.Copy(0x7fc3942f9d58, 0xc208cbf040, 0x7fc3956b3a10, 0xc20dc660e8, 0xc20dc660e8, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/pools/pools.go:64 +0xa4
github.com/docker/docker/daemon/execdriver/native.func·005()
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/execdriver/native/driver.go:430 +0xba
created by github.com/docker/docker/daemon/execdriver/native.(*TtyConsole).AttachPipes
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/execdriver/native/driver.go:433 +0x171

goroutine 3078 [runnable]:
sync.(*Cond).Wait(0xc20850e128)
    /usr/lib/golang/src/sync/cond.go:62 +0x9e
io.(*pipe).write(0xc20850e0c0, 0xc208cb4000, 0xfff, 0x8000, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/io/pipe.go:94 +0x244
io.(*PipeWriter).Write(0xc20dc660d0, 0xc208cb4000, 0xfff, 0x8000, 0xc208cb4000, 0x0, 0x0)
    /usr/lib/golang/src/io/pipe.go:161 +0x5b
github.com/docker/docker/pkg/broadcaster.(*Unbuffered).Write(0xc208ce97c0, 0xc208cb4000, 0xfff, 0x8000, 0xfff, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/broadcaster/unbuffered.go:27 +0x135
bufio.(*Reader).writeBuf(0xc208bbd140, 0x7fc3956b3ac0, 0xc208ce97c0, 0xfff, 0x0, 0x0)
    /usr/lib/golang/src/bufio/bufio.go:463 +0xa5
bufio.(*Reader).WriteTo(0xc208bbd140, 0x7fc3956b3ac0, 0xc208ce97c0, 0x79304a2, 0x0, 0x0)
    /usr/lib/golang/src/bufio/bufio.go:444 +0x229
io.Copy(0x7fc3956b3ac0, 0xc208ce97c0, 0x7fc3956a6498, 0xc208bbd140, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/io/io.go:354 +0xb2
github.com/docker/docker/pkg/pools.Copy(0x7fc3956b3ac0, 0xc208ce97c0, 0x7fc3942f9d08, 0xc208cbf040, 0xc208cbf040, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/pools/pools.go:64 +0xa4
github.com/docker/docker/daemon/execdriver/native.func·004()
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/execdriver/native/driver.go:425 +0xf2
created by github.com/docker/docker/daemon/execdriver/native.(*TtyConsole).AttachPipes
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/execdriver/native/driver.go:426 +0xec

goroutine 3069 [semacquire, 1 minutes]:
sync.(*Cond).Wait(0xc20850ef30)
    /usr/lib/golang/src/sync/cond.go:62 +0x9e
io.(*pipe).read(0xc20850ef00, 0xc2089e8000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/io/pipe.go:52 +0x303
io.(*PipeReader).Read(0xc20dc66090, 0xc2089e8000, 0x8000, 0x8000, 0x1, 0x0, 0x0)
    /usr/lib/golang/src/io/pipe.go:134 +0x5b
github.com/docker/docker/daemon.copyEscapable(0x7fc3956b4068, 0xc20dc660f0, 0x7fc3942f8f58, 0xc20dc66090, 0x263, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/container.go:1047 +0xb9
github.com/docker/docker/daemon.func·015()
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/container.go:988 +0x260
created by github.com/docker/docker/daemon.attach
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/container.go:1001 +0x457

goroutine 3067 [semacquire]:
sync.(*Cond).Wait(0xc20850e0f0)
    /usr/lib/golang/src/sync/cond.go:62 +0x9e
io.(*pipe).read(0xc20850e0c0, 0xc208b8a000, 0x400, 0x400, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/io/pipe.go:52 +0x303
io.(*PipeReader).Read(0xc20dc660c8, 0xc208b8a000, 0x400, 0x400, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/io/pipe.go:134 +0x5b
github.com/docker/docker/pkg/ioutils.(*bufReader).drain(0xc208c9a510)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/ioutils/readers.go:90 +0x74
created by github.com/docker/docker/pkg/ioutils.NewBufReader
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/ioutils/readers.go:69 +0x351

goroutine 3071 [semacquire, 4 minutes]:
sync.(*Cond).Wait(0xc208c9a5d8)
    /usr/lib/golang/src/sync/cond.go:62 +0x9e
github.com/docker/docker/pkg/ioutils.(*bufReader).Read(0xc208c9a5a0, 0xc208bea000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/ioutils/readers.go:121 +0x18c
io.Copy(0x7fc3956af350, 0xc20dc66000, 0x7fc3956b3a98, 0xc208c9a5a0, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/io/io.go:362 +0x1f6
github.com/docker/docker/daemon.func·017(0x136bad0, 0x6, 0x7fc3956af350, 0xc20dc66000, 0x7fc3956b3a68, 0xc208c9a5a0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/container.go:1018 +0x245
created by github.com/docker/docker/daemon.attach
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/container.go:1029 +0x597

goroutine 3065 [chan receive, 4 minutes]:
github.com/docker/docker/daemon.(*Container).attachWithLogs(0xc208bc3400, 0x7fc3942f92d8, 0xc20dc66000, 0x7fc3956af350, 0xc20dc66000, 0x7fc3956af350, 0xc20dc66000, 0x100, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/container.go:931 +0x40d
github.com/docker/docker/daemon.(*Daemon).ContainerAttachWithLogs(0xc2080432c0, 0xc208c9a017, 0x40, 0xc20ddf8570, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/attach.go:46 +0x497
github.com/docker/docker/api/server/router/local.(*router).postContainersAttach(0xc208a0a920, 0x7fc3956af4a8, 0xc20ddf83f0, 0x7fc3956af470, 0xc20ddca000, 0xc208ce65b0, 0xc20ddf8270, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/api/server/router/local/container.go:476 +0x715
github.com/docker/docker/api/server/router/local.*router.(github.com/docker/docker/api/server/router/local.postContainersAttach)·fm(0x7fc3956af4a8, 0xc20ddf83f0, 0x7fc3956af470, 0xc20ddca000, 0xc208ce65b0, 0xc20ddf8270, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/api/server/router/local/local.go:138 +0x7b
github.com/docker/docker/api/server.func·004(0x7fc3956af4a8, 0xc20ddf83f0, 0x7fc3956af470, 0xc20ddca000, 0xc208ce65b0, 0xc20ddf8270, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/api/server/middleware.go:87 +0x7c7
github.com/docker/docker/api/server.func·003(0x7fc398b4cad0, 0xc20802af28, 0x7fc3956af470, 0xc20ddca000, 0xc208ce65b0, 0xc20ddf8270, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/api/server/middleware.go:66 +0x10e
github.com/docker/docker/api/server.func·002(0x7fc398b4cad0, 0xc20802af28, 0x7fc3956af470, 0xc20ddca000, 0xc208ce65b0, 0xc20ddf8270, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/api/server/middleware.go:49 +0x47c
github.com/docker/docker/api/server.func·001(0x7fc398b4cad0, 0xc20802af28, 0x7fc3956af470, 0xc20ddca000, 0xc208ce65b0, 0xc20ddf8270, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/api/server/middleware.go:27 +0x1dc
github.com/docker/docker/api/server.func·007(0x7fc3956af470, 0xc20ddca000, 0xc208ce65b0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/api/server/server.go:156 +0x265
net/http.HandlerFunc.ServeHTTP(0xc208b24580, 0x7fc3956af470, 0xc20ddca000, 0xc208ce65b0)
    /usr/lib/golang/src/net/http/server.go:1265 +0x41
github.com/gorilla/mux.(*Router).ServeHTTP(0xc208a031d0, 0x7fc3956af470, 0xc20ddca000, 0xc208ce65b0)
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/gorilla/mux/mux.go:98 +0x297
net/http.serverHandler.ServeHTTP(0xc20800c900, 0x7fc3956af470, 0xc20ddca000, 0xc208ce65b0)
    /usr/lib/golang/src/net/http/server.go:1703 +0x19a
net/http.(*conn).serve(0xc20ddca3c0)
    /usr/lib/golang/src/net/http/server.go:1204 +0xb57
created by net/http.(*Server).Serve
    /usr/lib/golang/src/net/http/server.go:1751 +0x35e

goroutine 3066 [IO wait, 1 minutes]:
net.(*pollDesc).Wait(0xc20dddac30, 0x72, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc20dddac30, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc20dddabd0, 0xc208b40000, 0x8000, 0x8000, 0x0, 0x7fc398b4c580, 0xc208ce4060)
    /usr/lib/golang/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc20dc66000, 0xc208b40000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/net/net.go:121 +0xdc
io.Copy(0x7fc3956b4068, 0xc20dc66098, 0x7fc3956af378, 0xc20dc66000, 0x263, 0x0, 0x0)
    /usr/lib/golang/src/io/io.go:362 +0x1f6
github.com/docker/docker/daemon.func·013()
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/container.go:927 +0x10f
created by github.com/docker/docker/daemon.(*Container).attachWithLogs
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/container.go:928 +0x368

goroutine 3075 [semacquire, 4 minutes]:
sync.(*Cond).Wait(0xc20850ecf0)
    /usr/lib/golang/src/sync/cond.go:62 +0x9e
io.(*pipe).read(0xc20850ecc0, 0xc208ba6c00, 0x400, 0x400, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/io/pipe.go:52 +0x303
io.(*PipeReader).Read(0xc20dc662a0, 0xc208ba6c00, 0x400, 0x400, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/io/pipe.go:134 +0x5b
github.com/docker/docker/pkg/ioutils.(*bufReader).drain(0xc208c69680)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/ioutils/readers.go:90 +0x74
created by github.com/docker/docker/pkg/ioutils.NewBufReader
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/ioutils/readers.go:69 +0x351

goroutine 3077 [semacquire, 4 minutes]:
sync.(*Cond).Wait(0xc208c696b8)
    /usr/lib/golang/src/sync/cond.go:62 +0x9e
github.com/docker/docker/pkg/ioutils.(*bufReader).Read(0xc208c69680, 0xc2089e2000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/ioutils/readers.go:121 +0x18c
bufio.(*Reader).fill(0xc208bbd0e0)
    /usr/lib/golang/src/bufio/bufio.go:97 +0x1ce
bufio.(*Reader).ReadSlice(0xc208bbd0e0, 0x4abc0a, 0x0, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/bufio/bufio.go:295 +0x257
bufio.(*Reader).ReadBytes(0xc208bbd0e0, 0xc208c6960a, 0x0, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/bufio/bufio.go:374 +0xd2
github.com/docker/docker/daemon/logger.(*Copier).copySrc(0xc208ca6800, 0x136bad0, 0x6, 0x7fc3956b3a98, 0xc208c69680)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/logger/copier.go:47 +0x96
created by github.com/docker/docker/daemon/logger.(*Copier).Run
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/logger/copier.go:38 +0x11c

goroutine 3076 [semacquire]:
sync.(*Cond).Wait(0xc208c69628)
    /usr/lib/golang/src/sync/cond.go:62 +0x9e
github.com/docker/docker/pkg/ioutils.(*bufReader).Read(0xc208c695f0, 0xc208a0dacf, 0x531, 0x531, 0x0, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/ioutils/readers.go:121 +0x18c
bufio.(*Reader).fill(0xc208bbd080)
    /usr/lib/golang/src/bufio/bufio.go:97 +0x1ce
bufio.(*Reader).ReadSlice(0xc208bbd080, 0xc29d84f00a, 0x0, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/bufio/bufio.go:295 +0x257
bufio.(*Reader).ReadBytes(0xc208bbd080, 0xc208d1200a, 0x0, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/bufio/bufio.go:374 +0xd2
github.com/docker/docker/daemon/logger.(*Copier).copySrc(0xc208ca6800, 0x136bb10, 0x6, 0x7fc3956b3a98, 0xc208c695f0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/logger/copier.go:47 +0x96
created by github.com/docker/docker/daemon/logger.(*Copier).Run
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/logger/copier.go:38 +0x11c

goroutine 3072 [semacquire, 4 minutes]:
sync.(*WaitGroup).Wait(0xc20dde64e0)
    /usr/lib/golang/src/sync/waitgroup.go:132 +0x169
github.com/docker/docker/daemon.func·018(0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/container.go:1032 +0x42
github.com/docker/docker/pkg/promise.func·001()
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/promise/promise.go:8 +0x2f
created by github.com/docker/docker/pkg/promise.Go
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/promise/promise.go:9 +0xfb

goroutine 3073 [syscall, 4 minutes]:
syscall.Syscall6(0x3d, 0xb8c, 0xc208def3ec, 0x0, 0xc208c681b0, 0x0, 0x0, 0x4d143c, 0x4d18c2, 0x1115620)
    /usr/lib/golang/src/syscall/asm_linux_amd64.s:46 +0x5
syscall.wait4(0xb8c, 0xc208def3ec, 0x0, 0xc208c681b0, 0x90, 0x0, 0x0)
    /usr/lib/golang/src/syscall/zsyscall_linux_amd64.go:124 +0x79
syscall.Wait4(0xb8c, 0xc208def434, 0x0, 0xc208c681b0, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/syscall/syscall_linux.go:224 +0x60
os.(*Process).wait(0xc208cbf400, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/os/exec_unix.go:22 +0x103
os.(*Process).Wait(0xc208cbf400, 0xc20dc66118, 0x0, 0x0)
    /usr/lib/golang/src/os/doc.go:45 +0x3a
os/exec.(*Cmd).Wait(0xc208b94640, 0x0, 0x0)
    /usr/lib/golang/src/os/exec/exec.go:364 +0x23c
github.com/opencontainers/runc/libcontainer.(*initProcess).wait(0xc208d12e10, 0x0, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/opencontainers/runc/libcontainer/process_linux.go:237 +0x3d
github.com/opencontainers/runc/libcontainer.Process.Wait(0xc208ce5870, 0x1, 0x1, 0xc208ca66c0, 0x3, 0x4, 0x1cb6c20, 0x0, 0x1cb6c20, 0x0, ...)
    /builddir/build/BUILD/docker-1.9.1/vendor/src/github.com/opencontainers/runc/libcontainer/process.go:60 +0x11d
github.com/opencontainers/runc/libcontainer.Process.Wait·fm(0xc208def9e8, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/execdriver/native/driver.go:179 +0x58
github.com/docker/docker/daemon/execdriver/native.(*Driver).Run(0xc2088a14a0, 0xc208ba6400, 0xc208caf500, 0xc20dc662b8, 0x1, 0x1, 0xc208ce5120, 0x0, 0x0, 0x0, ...)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/execdriver/native/driver.go:185 +0xa2c
github.com/docker/docker/daemon.(*Daemon).run(0xc2080432c0, 0xc208bc3400, 0xc208caf500, 0xc208ce5120, 0x0, 0x0, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/daemon.go:950 +0x246
github.com/docker/docker/daemon.(*containerMonitor).Start(0xc208c1ed20, 0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/monitor.go:145 +0x484
github.com/docker/docker/daemon.*containerMonitor.Start·fm(0x0, 0x0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/container.go:784 +0x39
github.com/docker/docker/pkg/promise.func·001()
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/promise/promise.go:8 +0x2f
created by github.com/docker/docker/pkg/promise.Go
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/promise/promise.go:9 +0xfb

goroutine 3074 [semacquire]:
sync.(*Cond).Wait(0xc20850ec30)
    /usr/lib/golang/src/sync/cond.go:62 +0x9e
io.(*pipe).read(0xc20850ec00, 0xc208ba7400, 0x400, 0x400, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/io/pipe.go:52 +0x303
io.(*PipeReader).Read(0xc20dc66290, 0xc208ba7400, 0x400, 0x400, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/io/pipe.go:134 +0x5b
github.com/docker/docker/pkg/ioutils.(*bufReader).drain(0xc208c695f0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/ioutils/readers.go:90 +0x74
created by github.com/docker/docker/pkg/ioutils.NewBufReader
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/ioutils/readers.go:69 +0x351

goroutine 3064 [IO wait, 4 minutes]:
net.(*pollDesc).Wait(0xc20dddb8e0, 0x72, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc20dddb8e0, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc20dddb880, 0xc208821000, 0x1000, 0x1000, 0x0, 0x7fc398b4c580, 0xc208ce4650)
    /usr/lib/golang/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc20dc66008, 0xc208821000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/net/net.go:121 +0xdc
net/http.(*liveSwitchReader).Read(0xc20ddca228, 0xc208821000, 0x1000, 0x1000, 0x2, 0x0, 0x0)
    /usr/lib/golang/src/net/http/server.go:214 +0xab
io.(*LimitedReader).Read(0xc20dde71e0, 0xc208821000, 0x1000, 0x1000, 0x2, 0x0, 0x0)
    /usr/lib/golang/src/io/io.go:408 +0xce
bufio.(*Reader).fill(0xc20ddd9320)
    /usr/lib/golang/src/bufio/bufio.go:97 +0x1ce
bufio.(*Reader).ReadSlice(0xc20ddd9320, 0xc208debb0a, 0x0, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/bufio/bufio.go:295 +0x257
bufio.(*Reader).ReadLine(0xc20ddd9320, 0x0, 0x0, 0x0, 0xc208bc6c00, 0x0, 0x0)
    /usr/lib/golang/src/bufio/bufio.go:324 +0x62
net/textproto.(*Reader).readLineSlice(0xc208cae0f0, 0x0, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/net/textproto/reader.go:55 +0x9e
net/textproto.(*Reader).ReadLine(0xc208cae0f0, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/net/textproto/reader.go:36 +0x4f
net/http.ReadRequest(0xc20ddd9320, 0xc208ce6ea0, 0x0, 0x0)
    /usr/lib/golang/src/net/http/request.go:598 +0xcb
net/http.(*conn).readRequest(0xc20ddca1e0, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/net/http/server.go:586 +0x26f
net/http.(*conn).serve(0xc20ddca1e0)
    /usr/lib/golang/src/net/http/server.go:1162 +0x69e
created by net/http.(*Server).Serve
    /usr/lib/golang/src/net/http/server.go:1751 +0x35e

goroutine 3068 [semacquire, 4 minutes]:
sync.(*Cond).Wait(0xc20850e270)
    /usr/lib/golang/src/sync/cond.go:62 +0x9e
io.(*pipe).read(0xc20850e240, 0xc208b8a400, 0x400, 0x400, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/io/pipe.go:52 +0x303
io.(*PipeReader).Read(0xc20dc660d8, 0xc208b8a400, 0x400, 0x400, 0xc20b91d080, 0x0, 0x0)
    /usr/lib/golang/src/io/pipe.go:134 +0x5b
github.com/docker/docker/pkg/ioutils.(*bufReader).drain(0xc208c9a5a0)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/ioutils/readers.go:90 +0x74
created by github.com/docker/docker/pkg/ioutils.NewBufReader
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/ioutils/readers.go:69 +0x351

goroutine 3070 [IO wait]:
net.(*pollDesc).Wait(0xc20dddac30, 0x77, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitWrite(0xc20dddac30, 0x0, 0x0)
    /usr/lib/golang/src/net/fd_poll_runtime.go:93 +0x43
net.(*netFD).Write(0xc20dddabd0, 0xc208936000, 0x8000, 0x8000, 0x0, 0x7fc398b4c580, 0xc208ce4060)
    /usr/lib/golang/src/net/fd_unix.go:335 +0x5ee
net.(*conn).Write(0xc20dc66000, 0xc208936000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
    /usr/lib/golang/src/net/net.go:129 +0xdc
io.Copy(0x7fc3956af350, 0xc20dc66000, 0x7fc3956b3a98, 0xc208c9a510, 0x27d59cb, 0x0, 0x0)
    /usr/lib/golang/src/io/io.go:364 +0x278
github.com/docker/docker/daemon.func·017(0x136bb10, 0x6, 0x7fc3956af350, 0xc20dc66000, 0x7fc3956b3a68, 0xc208c9a510)
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/container.go:1018 +0x245
created by github.com/docker/docker/daemon.attach
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/daemon/container.go:1028 +0x545

goroutine 83 [chan receive, 7 minutes]:
github.com/docker/docker/pkg/signal.func·002()
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/signal/trap.go:29 +0x8f
created by github.com/docker/docker/pkg/signal.Trap
    /builddir/build/BUILD/docker-1.9.1/_build/src/github.com/docker/docker/pkg/signal/trap.go:55 +0x250
Wed Dec 23 08:41:50 UTC 2015
nohup: ignoring input
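
For anyone wanting to capture the same kind of dump: as far as I know the daemon traps SIGUSR1 and writes all goroutine stacks to its log (that's the setupDumpStackTrap goroutine in the trace above), so something like this should work; where the log ends up depends on how your daemon is started.

$ sudo kill -USR1 $(pidof docker)
$ sudo journalctl -u docker | tail -n 50    # or check /var/log/docker, syslog, etc.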

Also, I noticed that the Docker daemon does not fail if you don't run any docker commands against it. For example:

I start the same C endless loop, and if I then try to kill the container about 10 seconds in, the Docker daemon will fail. If I don't, it keeps running.
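
In case it helps, a minimal sketch of how I reproduce this (the image name, container name, and mount path are just what I use; any base image that can run a static binary should do):

$ gcc -static -o hello hello.c              # hello.c is the endless printf loop above
$ docker run -d --name hello-loop -v $(pwd)/hello:/hello debian /hello
$ docker kill hello-loop                    # killing it ~10 seconds in is what brings the daemon down for me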

@unclejack

Contributor

unclejack commented Dec 23, 2015

This issue has been fixed on master after the release of Docker 1.9: #17877 fixed it, and the fix will be included in the next Docker release.

Please build Docker from the master branch and try to reproduce the problem.

edit: please also take a look at #18057, as it's a related issue.
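
For anyone who hasn't built Docker from source before, the rough steps at the time of 1.9/master were something like the following (this assumes a working Docker install, since the build runs in a container; the exact path under bundles/ depends on the version string, so check the build output):

$ git clone https://github.com/docker/docker.git
$ cd docker
$ make binary                               # produces a docker binary under bundles/
$ sudo service docker stop
$ sudo cp bundles/latest/binary/docker /usr/bin/docker
$ sudo service docker start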

@aknuds1

aknuds1 commented Jan 27, 2016

I don't know if this is related, but I am seeing humongous swap usage by Docker: currently ~780 MB out of 2 GB total. I would love to be able to resolve this.

$ docker --debug info
Containers: 44
Images: 189
Server Version: 1.9.1-cs2
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 277
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-22-generic
Operating System: Ubuntu 15.04
CPUs: 1
Total Memory: 993.1 MiB
Name: muzhack.com2
ID: IC6S:RE62:77W2:424Q:7OKZ:JDT4:RETI:WQ4A:D23X:B7O5:EWVX:XAQY
Username: aknudsen
Registry: https://index.docker.io/v1/
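
If it helps narrow this down: per-process swap usage can be read from /proc (the VmSwap field has been reported there since kernel 2.6.34). The pidof call assumes the daemon binary is named docker, as in 1.9, and may also match a running client, so pick the daemon's PID:

$ grep VmSwap /proc/$(pidof docker)/status
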
@sundarseswaran

sundarseswaran commented Feb 17, 2016

Ran into a similar issue, though in my case it happened while running a container.

> docker-compose up -d
Recreating test_apiserver_1...
Cannot start container 154e5f72a9b61c1d83a19849d7b3351ae3dff4f4c7ffc646114e88167b1b1a56: [8] System error: fork/exec /usr/bin/docker: cannot allocate memory
> docker --debug info
Containers: 12
Images: 172
Storage Driver: devicemapper
 Pool Name: docker-202:1-275034-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 8.095 GB
 Data Space Total: 107.4 GB
 Data Space Available: 63.69 GB
 Metadata Space Used: 9.552 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.138 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.77 (2012-10-15)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-48-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 4
Total Memory: 29.96 GiB
Name: straxco
ID: MHLQ:MNHU:276A:B666:6O2H:UGKC:JMBK:LSR4:JWPW:HWBR:LLWJ:3Q3R
Debug mode (server): false
Debug mode (client): true
File Descriptors: 66
Goroutines: 102
System Time: 2016-02-17T18:15:15.708693282-05:00
EventsListeners: 0
Init SHA1: 32127ad3d10373671bcbdc6706733e7089d9d142
Init Path: /usr/lib/docker.io/dockerinit
Docker Root Dir: /var/lib/docker
WARNING: No swap limit support

PS: the same container was already running. I got the error below while building an image (ApplyLayer) and was then unable to run the container. It looks like it takes a while to free up some memory:

ApplyLayer fork/exec /usr/bin/docker: cannot allocate memory stdout: stderr:
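
A couple of generic diagnostics worth running when fork/exec fails with "cannot allocate memory": the fork momentarily needs to commit another copy of the daemon's address space, so the overcommit policy, free swap, and the daemon's own size are the first things to check (this is a checklist, not a fix):

$ free -m                                   # available memory and swap
$ cat /proc/sys/vm/overcommit_memory        # 0 = heuristic, 1 = always allow, 2 = strict accounting
$ grep -E 'VmSize|VmSwap' /proc/$(pidof docker)/status    # how big the daemon has grown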
