
log without '\n' cause the docker-daemon's memory overload #18057

Closed
cloudfly opened this issue Nov 18, 2015 · 21 comments

Comments

@cloudfly

Description of problem:

The relevant code is here: https://github.com/docker/docker/blob/master/daemon/logger/copier.go#L47

If a container keeps printing log output without a \n, Docker buffers the output until it finds one.

This makes the docker daemon use up all of its memory.

I think the flush condition should be: a \n is found, or the buffer has reached 64k (or smaller).
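
For illustration, a minimal sketch of that idea (this is not the actual copier.go code; the function name, callback, and the 64k limit are just the suggestion above):

package logcopy

import (
	"bufio"
	"io"
)

const maxLineBytes = 64 * 1024 // illustrative 64 KiB cap

// copyWithLimit emits a chunk either when a newline arrives or when the
// buffer reaches maxLineBytes, so memory stays bounded no matter what the
// container writes. emit must copy the slice if it keeps it.
func copyWithLimit(src io.Reader, emit func(chunk []byte)) error {
	reader := bufio.NewReader(src)
	buf := make([]byte, 0, maxLineBytes)
	for {
		b, err := reader.ReadByte()
		if err != nil {
			if len(buf) > 0 {
				emit(buf)
			}
			if err == io.EOF {
				return nil
			}
			return err
		}
		buf = append(buf, b)
		if b == '\n' || len(buf) == maxLineBytes {
			emit(buf)
			buf = buf[:0]
		}
	}
}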

How reproducible:

docker run -d --log-driver=syslog centos /bin/bash -c 'while true;do echo -n hello; done;'

Steps to Reproduce:

  1. run the command above
  2. use the top command to check the docker daemon's memory usage (wait a minute)
  3. run docker stop xxxx to stop the container; memory usage increases very quickly during this step
  4. the docker daemon crashes (if you waited enough minutes at step 2)

The debug log is:

Nov 18 07:20:13 node1 docker: time="2015-11-18T07:20:13.524208676Z" level=info msg="POST /v1.21/containers/elegant_morse/stop?t=10"
Nov 18 07:20:13 node1 docker: time="2015-11-18T07:20:13.524588079Z" level=debug msg="Sending 15 to 6e1b83ce397d5c35171c5e30a89d08376b7bf6a764a3661eab69aa0ae630e354"
Nov 18 07:20:23 node1 docker: time="2015-11-18T07:20:23.526866115Z" level=info msg="Container 6e1b83ce397d5c35171c5e30a89d08376b7bf6a764a3661eab69aa0ae630e354 failed to exit within 10 seconds of SIGTERM - using the force"
Nov 18 07:20:23 node1 docker: time="2015-11-18T07:20:23.526898079Z" level=debug msg="Sending 9 to 6e1b83ce397d5c35171c5e30a89d08376b7bf6a764a3661eab69aa0ae630e354"
Nov 18 07:21:00 node1 docker: fatal error: runtime: out of memory
Nov 18 07:21:01 node1 docker: runtime stack:
Nov 18 07:21:01 node1 docker: runtime.throw(0x1cb4af7)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/panic.go:491 +0xad fp=0x7fff632be650 sp=0x7fff632be620
Nov 18 07:21:01 node1 docker: runtime.SysMap(0xc2a38b0000, 0x12290000, 0x4dcb00, 0x1ce2b18)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/mem_linux.c:149 +0x98 fp=0x7fff632be688 sp=0x7fff632be650
Nov 18 07:21:01 node1 docker: runtime.MHeap_SysAlloc(0x1ce8200, 0x12290000, 0x4cbf42)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/malloc.c:284 +0x124 fp=0x7fff632be6e0 sp=0x7fff632be688
Nov 18 07:21:01 node1 docker: MHeap_Grow(0x1ce8200, 0x9148, 0x0)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/mheap.c:398 +0x58 fp=0x7fff632be720 sp=0x7fff632be6e0
Nov 18 07:21:01 node1 docker: MHeap_AllocSpanLocked(0x1ce8200, 0x9141, 0x4cfa20)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/mheap.c:298 +0x365 fp=0x7fff632be760 sp=0x7fff632be720
Nov 18 07:21:01 node1 docker: mheap_alloc(0x1ce8200, 0x9141, 0x100000000, 0xc208d42810)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/mheap.c:190 +0x121 fp=0x7fff632be788 sp=0x7fff632be760
Nov 18 07:21:01 node1 docker: runtime.MHeap_Alloc(0x1ce8200, 0x9141, 0x100000000, 0x108012001)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/mheap.c:240 +0x66 fp=0x7fff632be7c0 sp=0x7fff632be788
Nov 18 07:21:01 node1 docker: runtime.largeAlloc_m()
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/malloc.c:388 +0xce fp=0x7fff632be810 sp=0x7fff632be7c0
Nov 18 07:21:01 node1 docker: runtime.onM(0x1cd1eb0)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/asm_amd64.s:257 +0x68 fp=0x7fff632be818 sp=0x7fff632be810
Nov 18 07:21:01 node1 docker: runtime.mstart()
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/proc.c:818 fp=0x7fff632be820 sp=0x7fff632be818
Nov 18 07:21:01 node1 docker: goroutine 213 [running]:
Nov 18 07:21:01 node1 docker: runtime.switchtoM()
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/asm_amd64.s:198 fp=0xc21b4cbb28 sp=0xc21b4cbb20
Nov 18 07:21:01 node1 docker: runtime.mallocgc(0x122801f7, 0x0, 0x3, 0x1)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/malloc.go:199 +0x9f3 fp=0xc21b4cbbd8 sp=0xc21b4cbb28
Nov 18 07:21:01 node1 docker: runtime.rawstring(0x122801f7, 0x0, 0x0, 0x0, 0x0, 0x0)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/string.go:195 +0x93 fp=0xc21b4cbc08 sp=0xc21b4cbbd8
Nov 18 07:21:01 node1 docker: runtime.slicebytetostring(0xc29151e000, 0x122801f7, 0x12282000, 0x0, 0x0)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/string.go:69 +0x4b fp=0xc21b4cbc98 sp=0xc21b4cbc08
Nov 18 07:21:01 node1 docker: fmt.Sprint(0xc21b4cbd78, 0x1, 0x1, 0x0, 0x0)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/fmt/print.go:240 +0x8c fp=0xc21b4cbce0 sp=0xc21b4cbc98
Nov 18 07:21:01 node1 docker: github.com/Sirupsen/logrus.(*Entry).Error(0xc208a80300, 0xc21b4cbd78, 0x1, 0x1)
Nov 18 07:21:01 node1 docker: /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/Sirupsen/logrus/entry.go:135 +0x4e fp=0xc21b4cbd20 sp=0xc21b4cbce0
Nov 18 07:21:01 node1 docker: github.com/Sirupsen/logrus.(*Entry).Errorf(0xc208a80300, 0x14d2a90, 0x26, 0xc21b4cbf80, 0x3, 0x3)
Nov 18 07:21:01 node1 docker: /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/Sirupsen/logrus/entry.go:183 +0x144 fp=0xc21b4cbda8 sp=0xc21b4cbd20
Nov 18 07:21:01 node1 docker: github.com/Sirupsen/logrus.(*Logger).Errorf(0xc208036000, 0x14d2a90, 0x26, 0xc21b4cbf80, 0x3, 0x3)
Nov 18 07:21:01 node1 docker: /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/Sirupsen/logrus/logger.go:97 +0x104 fp=0xc21b4cbdf8 sp=0xc21b4cbda8
Nov 18 07:21:01 node1 docker: github.com/Sirupsen/logrus.Errorf(0x14d2a90, 0x26, 0xc21b4cbf80, 0x3, 0x3)
Nov 18 07:21:01 node1 docker: /root/rpmbuild/BUILD/docker-engine/vendor/src/github.com/Sirupsen/logrus/exported.go:137 +0x5c fp=0xc21b4cbe30 sp=0xc21b4cbdf8
Nov 18 07:21:01 node1 docker: github.com/docker/docker/daemon/logger.(*Copier).copySrc(0xc208b73dc0, 0x13864f0, 0x6, 0x7f332c1e0110, 0xc2089b0a20)
Nov 18 07:21:01 node1 docker: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/logger/copier.go:54 +0x66d fp=0xc21b4cbfb8 sp=0xc21b4cbe30
Nov 18 07:21:01 node1 docker: runtime.goexit()
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/asm_amd64.s:2232 +0x1 fp=0xc21b4cbfc0 sp=0xc21b4cbfb8
Nov 18 07:21:01 node1 docker: created by github.com/docker/docker/daemon/logger.(*Copier).Run
Nov 18 07:21:01 node1 docker: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/logger/copier.go:38 +0x11c
Nov 18 07:21:01 node1 docker: goroutine 1 [chan receive, 5 minutes]:
Nov 18 07:21:01 node1 docker: runtime.gopark(0x4d2060, 0xc20800cbf8, 0x13ba0b0, 0xc)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/proc.go:130 +0x105 fp=0xc2088fd6f0 sp=0xc2088fd6c0
Nov 18 07:21:01 node1 docker: runtime.goparkunlock(0xc20800cbf8, 0x13ba0b0, 0xc)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/proc.go:136 +0x48 fp=0xc2088fd718 sp=0xc2088fd6f0
Nov 18 07:21:01 node1 docker: runtime.chanrecv(0xf445c0, 0xc20800cba0, 0xc2088fdab0, 0xc20800c901, 0xc208a70000)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/chan.go:410 +0x452 fp=0xc2088fd7b8 sp=0xc2088fd718
Nov 18 07:21:01 node1 docker: runtime.chanrecv1(0xf445c0, 0xc20800cba0, 0xc2088fdab0)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/chan.go:311 +0x2b fp=0xc2088fd7e8 sp=0xc2088fd7b8
Nov 18 07:21:01 node1 docker: main.(*DaemonCli).CmdDaemon(0xc2084f9260, 0xc20800a020, 0xc, 0xc, 0x0, 0x0)
Nov 18 07:21:01 node1 docker: /root/rpmbuild/BUILD/docker-engine/docker/daemon.go:308 +0x20a7 fp=0xc2088fdbf8 sp=0xc2088fd7e8
Nov 18 07:21:01 node1 docker: runtime.call64(0x1256f18, 0xc208508000, 0x2000000030)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/runtime/asm_amd64.s:403 +0x45 fp=0xc2088fdc40 sp=0xc2088fdbf8
Nov 18 07:21:01 node1 docker: reflect.callMethod(0xc2084dbf80, 0xc2088fdce8)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/reflect/value.go:605 +0x179 fp=0xc2088fdcd0 sp=0xc2088fdc40
Nov 18 07:21:01 node1 docker: reflect.methodValueCall(0xc20800a020, 0xc, 0xc, 0x1, 0xc2084dbf80, 0x0, 0x0, 0xc2084dbf80, 0x0, 0x4e297f, ...)
Nov 18 07:21:01 node1 docker: /usr/local/go/src/reflect/asm_amd64.s:29 +0x36 fp=0xc2088fdce8 sp=0xc2088fdcd0
Nov 18 07:21:01 node1 docker: github.com/docker/docker/cli.(*Cli).Run(0xc2084dbec0, 0xc20800a010, 0xd, 0xd, 0x0, 0x0)
Nov 18 07:21:01 node1 docker: /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/cli/cli.go:89 +0x38e fp=0xc2088fddd0 sp=0xc2088fdce8
Nov 18 07:21:01 node1 docker: main.main()
....... too long

Actual Results: None
Expected Results: None
Additional info: None

$ docker version
Client:
 Version:      1.9.0
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   76d6bc9
 Built:        Tue Nov  3 18:00:05 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.0
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   76d6bc9
 Built:        Tue Nov  3 18:00:05 UTC 2015
 OS/Arch:      linux/amd64
$ docker info
Containers: 1
Images: 5
Server Version: 1.9.0
Storage Driver: devicemapper
 Pool Name: docker-253:1-137634752-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 2.025 GB
 Data Space Total: 107.4 GB
 Data Space Available: 32.1 GB
 Metadata Space Used: 1.741 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.146 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /data/docker/devicemapper/devicemapper/data
 Metadata loop file: /data/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-123.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 1
Total Memory: 1.798 GiB
Name: node1
ID: VIPU:YZB2:5AHS:JM36:XICF:Z4UR:UNNG:672F:GKRG:ZAZY:RIXY:PW34
Debug mode (server): true
 File Descriptors: 18
 Goroutines: 27
 System Time: 2015-11-18T07:23:41.09190251Z
 EventsListeners: 0
 Init SHA1: 3f858d7291df745ff796d46a3d02cc7ee67962b5
 Init Path: /usr/libexec/docker/dockerinit
 Docker Root Dir: /data/docker
$ uname -a
Linux node1 3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
@crosbymichael
Contributor

ping @LK4D4

@LK4D4
Contributor

LK4D4 commented Nov 20, 2015

@cloudfly What do you think should happen when the buffer limit is reached?

@MHBauer
Contributor

MHBauer commented Nov 20, 2015

Interesting problem: we can't really log part of a message, and we shouldn't be logging huge messages, but we should degrade gracefully when that happens. Even bailing out would be somewhat tricky, with several layers of lower-level code and possibly a loop involved.

@cpuguy83
Member

Or we could run the logger in a separate process in the container's cgroup, so that at least the process in the container can't take down the host when -m is set (or give the logger its own memory constraint in a separate cgroup).

@cloudfly
Author

@LK4D4 Flush the data to the log driver whenever we get a \n or the buffer limit is reached.

The Logger may need to provide a function such as RawLog([]byte), which gets called instead of Log(*Message) when we find the buffer is full.
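
A rough sketch of what that could look like (hypothetical only: the types below are stand-ins for the daemon's log-driver interface, and RawLog is just the name proposed above, not an existing API):

package logext

import "time"

// Stand-in for the daemon's log message type.
type Message struct {
	Line      []byte
	Source    string
	Timestamp time.Time
}

// Stand-in for the daemon's log-driver interface.
type Logger interface {
	Log(msg *Message) error
	Name() string
	Close() error
}

// RawLogger could be implemented by drivers that can accept partial lines.
type RawLogger interface {
	Logger
	// RawLog receives a chunk that did not end in '\n' because the line
	// buffer filled up; the driver decides how to record it.
	RawLog(chunk []byte) error
}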

@crosbymichael
Contributor

I think @cloudfly's suggestion of flushing when the buffer limit is reached makes sense to me.

@LK4D4
Contributor

LK4D4 commented Nov 23, 2015

@cloudfly @crosbymichael As a separate line? Do you think the log-driver design around separate messages is wrong overall?

@crosbymichael
Contributor

bla

@cloudfly
Author

Oh... yes, a little complicated.

Another rough idea:

Change the Message data structure to:

type Message struct {
    ContainerID string
    Line        io.Reader //  []byte => io.Reader
    Source      string
    Timestamp   time.Time
}

Docker creates an io.ReadWriteCloser for each line and closes it when a \n is found.

This solution leaves the problem to the log driver; the log driver should not buffer data while waiting for a \n.
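
A minimal sketch of that idea (types and names are illustrative, not the real driver API): the copier opens a pipe per line, the driver reads from it, and the copier closes it at the \n.

package linestream

import (
	"io"
	"time"
)

// Illustrative only; this Message mirrors the modified structure above.
type Message struct {
	ContainerID string
	Line        io.Reader
	Source      string
	Timestamp   time.Time
}

// startLine hands the driver a reader for one line and returns the writer
// that the copier fills with bytes until it sees '\n', then closes.
func startLine(logFn func(*Message), containerID, source string) io.WriteCloser {
	pr, pw := io.Pipe()
	go logFn(&Message{
		ContainerID: containerID,
		Line:        pr,
		Source:      source,
		Timestamp:   time.Now(),
	})
	return pw
}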

@taraspos

+1
I'm also experiencing this issue and it's pretty critical.

# docker info
Containers: 11
Images: 118
Server Version: 1.9.1
Storage Driver: devicemapper
 Pool Name: docker-202:1-263804-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: 
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 8.53 GB
 Data Space Total: 107.4 GB
 Data Space Available: 21.48 GB
 Metadata Space Used: 9.122 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.138 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.10-17.31.amzn1.x86_64
Operating System: Amazon Linux AMI 2015.09
CPUs: 2
Total Memory: 3.862 GiB
Name: ####
ID: 7UQX:KJKK:NQPV:A7CO:HZ2G:BOMI:XQ2D:UKXF:JHDI:K7EB:R3OW:L25N

and

# docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5/1.9.1
 Built:        
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5/1.9.1
 Built:        
 OS/Arch:      linux/amd64

Is there any progress on fixing it?

@unclejack
Contributor

How big are those one-line messages most of the time? At most 64 KB? Maybe we could buffer up to a certain amount and dump that to something such as LogBytes([]byte) in the logging drivers (as was suggested above). Once we enter this logging mode, we keep logging like this until we encounter the end of the line; after that, we go back to logging regular EOL-terminated messages.

I think this might be one of the last memory usage issues left to fix if it's not the very last one.

What do you think?

@AkihiroSuda
Member

#13564 (for #13333) seems to have caused this issue.

Basically +1 for LogBytes() as a workaround, but I think @cpuguy83's cgroup-based approach (#18057 (comment)) is a much better fit.

Not only memory usage but also CPU is heavily affected by massive logs: #21181

The cgroup-based approach should also be able to handle the CPU load issue.

Plus, as a side effect, it would make it easy for third parties to implement their own logging plugins.
#18604
#21271

AkihiroSuda added a commit to AkihiroSuda/issues-docker that referenced this issue Apr 12, 2016
@jayunit100

jayunit100 commented May 13, 2016

An infinite loop of printing causes this, I think, whether or not there are \n characters?

  • docker run -d gcr.io/google_containers/busybox:1.24 '/bin/sh' '-c' 'while true ; do echo "aasd\n" causes this problem for me ... eventually the daemon overloads memory

...... @cloudfly - I don't think "\n" is really a valid workaround? ......

@timothysc

We see this issue as well.
/cc @rhatdan @jeremyeder

@rhatdan
Contributor

rhatdan commented May 14, 2016

@nalind PTAL

@nalind
Contributor

nalind commented May 25, 2016

Two things seem to be happening - in the logger copySrc() goroutine, the bufio ReadBytes() call just keeps waiting for a newline, buffering up whatever it's already read, until it finds one. If one doesn't come, we consume lots of memory. I think we could either switch to using ReadLine(), which lets us read and know if we only got part of a line, or do that logic ourselves since ReadLine() doesn't give us a "there wasn't a newline" indication correctly at the end of data.
Either way, we'd want to track that information, since adding a newline (or not) at the right places is going to be necessary to keep docker logs output looking the same.
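
As a rough sketch of the ReadLine()-based direction (simplified, with a stand-in callback instead of the real driver types, and ignoring the end-of-data caveat above):

package copiersketch

import (
	"bufio"
	"io"
)

// copyLines reads with a fixed-size buffer; isPrefix reports when a chunk is
// only part of a longer line, which is the "partial" information we would
// need to track so docker logs can reassemble lines correctly.
func copyLines(src io.Reader, log func(line []byte, partial bool)) error {
	reader := bufio.NewReaderSize(src, 16*1024)
	for {
		line, isPrefix, err := reader.ReadLine()
		if len(line) > 0 {
			log(line, isPrefix)
		}
		if err != nil {
			if err == io.EOF {
				return nil
			}
			return err
		}
	}
}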

We also seem to be growing our memory usage when we read back those logs, in different places in the json-file (in decodeLogLine()) and journald (in GoBytes()) reader implementations, but after a point the garbage collector keeps the usage from getting higher.

How you'd limit the disk usage of logs varies depending on the log driver. The journald daemon should be heeding its settings from journald.conf(5), and the json-file logger offers a "max-size" option to limit the size of a container's log file, with its "max-file" option used to turn on log rotation. The json-file settings are per-container, however, so you have to manage them a little more carefully. Adding a global setting might make some sense there.
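
For example, the per-container json-file limits mentioned above can be set like this (values and image name are illustrative):

docker run -d --log-driver=json-file --log-opt max-size=10m --log-opt max-file=3 myimage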

@thaJeztah
Member

The json-file settings are per-container, however, so you have to manage them a little more carefully. Adding a global setting might make some sense there.

@nalind there is a global (daemon level) --log-opt, so you should be able to set that for all containers, or did you mean something else?

Also (slightly related), perhaps you have thoughts on #22450. We're unsure what the "right" approach is there.

@nalind
Contributor

nalind commented May 25, 2016

@thaJeztah I was thinking more along the lines of having something that would limit the cumulative usage of json-file logs in all containers, as an alternative to having to do the math externally. The design for that could get complicated, though.

@thaJeztah
Member

@nalind ah, right; I think the JSON driver is not really designed for that. It's generally not the recommended driver for anything in production / high-volume logging.

@vgs24

vgs24 commented Jun 21, 2016

I'm not able to remove containers; the issue seems to be the same as #20600 [not completely].
Since #20600 was marked as a duplicate of this one, I'm putting the details here.

Docker version -
docker version
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2 gccgo (GCC) 5.3.1 20151231
Git commit: 18bfacb-dirty
Built: Thu Dec 31 17:38:41 UTC 2015
OS/Arch: linux/ppc64le

Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2 gccgo (GCC) 5.3.1 20151231
Git commit: 18bfacb-dirty
Built: Thu Dec 31 17:38:41 UTC 2015
OS/Arch: linux/ppc64le

docker info
Containers: 1
Images: 8
Server Version: 1.9.1
Storage Driver: btrfs
Build Version: Btrfs v3.16.2
Library Version: 101
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-327.el7.ppc64le
Operating System: Red Hat Enterprise Linux Server 7.2 (Maipo)
CPUs: 16
Total Memory: 2.836 GiB
Name: dockerlp2.isst.aus.stglabs.ibm.com
ID: ZBRS:ZXV4:UVUD:5NJ7:YKPO:IURD:SZR5:NCRE:EXHA:F7CU:ABQN:XS2Z
Username: vgs24
Registry: https://index.docker.io/v1/

Logs -
Jun 21 00:35:40 dockerlp2 docker: time="2016-06-21T00:35:40.810751000-05:00" level=info msg="GET /v1.21/containers/json?all=1"
Jun 21 00:35:50 dockerlp2 docker: time="2016-06-21T00:35:50.380533000-05:00" level=info msg="DELETE /v1.21/containers/vgs1"
Jun 21 00:35:51 dockerlp2 docker: time="2016-06-21T00:35:51.928525000-05:00" level=info msg="DELETE /v1.21/containers/vgs_rhel72le"
Jun 21 00:35:52 dockerlp2 docker: 2016/06/21 00:35:52 http: panic serving @: runtime error: invalid memory address or nil pointer dereference
Jun 21 00:35:52 dockerlp2 docker: goroutine 870 [running]:
Jun 21 00:36:00 dockerlp2 docker: time="2016-06-21T00:36:00.358035000-05:00" level=info msg="GET /v1.21/containers/json?all=1"
Jun 21 00:36:09 dockerlp2 docker: time="2016-06-21T00:36:09.028095000-05:00" level=info msg="DELETE /v1.21/containers/64af6442b25c"
Jun 21 00:36:09 dockerlp2 docker: 2016/06/21 00:36:09 http: panic serving @: runtime error: invalid memory address or nil pointer dereference
Jun 21 00:36:09 dockerlp2 docker: goroutine 872 [running]:
Jun 21 00:36:32 dockerlp2 docker: time="2016-06-21T00:36:32.308310000-05:00" level=info msg="GET /v1.21/containers/json?all=1"
Jun 21 00:36:38 dockerlp2 docker: time="2016-06-21T00:36:38.929441000-05:00" level=info msg="POST /v1.21/containers/64af6442b25c/kill?signal=KILL"
Jun 21 00:36:38 dockerlp2 docker: time="2016-06-21T00:36:38.929760000-05:00" level=error msg="Handler for POST /v1.21/containers/64af6442b25c/kill returned error: Cannot kill container 64af6442b25c: notrunning: Container 64af6442b25c6759b24f628ae993967a57bfecd69dd7151730cbeb7681cf3826 is not running"
Jun 21 00:36:38 dockerlp2 docker: time="2016-06-21T00:36:38.929815000-05:00" level=error msg="HTTP Error" err="Cannot kill container 64af6442b25c: notrunning: Container 64af6442b25c6759b24f628ae993967a57bfecd69dd7151730cbeb7681cf3826 is not running" statusCode=500
Jun 21 00:37:00 dockerlp2 docker: time="2016-06-21T00:37:00.762174000-05:00" level=info msg="DELETE /v1.21/containers/64af6442b25c?force=1"
Jun 21 00:37:01 dockerlp2 docker: 2016/06/21 00:37:01 http: panic serving @: runtime error: invalid memory address or nil pointer dereference
Jun 21 00:37:01 dockerlp2 docker: goroutine 875 [running]:
Jun 21 00:37:50 dockerlp2 docker: time="2016-06-21T00:37:50.079511000-05:00" level=info msg="GET /v1.21/containers/64af6442b25c/json"
Jun 21 00:37:50 dockerlp2 docker: time="2016-06-21T00:37:50.081693000-05:00" level=info msg="GET /v1.21/containers/64af6442b25c/logs?stderr=1&stdout=1&tail=all"

docker rm 64af6442b25c
An error occurred trying to connect: EOF
Error: failed to remove containers: [64af6442b25c]

docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
64af6442b25c 9e0e "/bin/bash" 18 hours ago Dead

@thaJeztah
Member

I think we can close this, now that #22982 was merged; this will be in docker 1.13

thaJeztah added this to the 1.13.0 milestone Aug 13, 2016
AkihiroSuda added a commit to AkihiroSuda/issues-docker that referenced this issue Aug 15, 2016
liusdu pushed a commit to liusdu/moby that referenced this issue Oct 30, 2017
This change updates how we handle long lines of output from the
container.  The previous logic used a bufio reader to read entire lines
of output from the container through an intermediate BytesPipe, and that
allowed the container to cause dockerd to consume an unconstrained
amount of memory as it attempted to collect a whole line of output, by
outputting data without newlines.

To avoid that, we replace the bufio reader with our own buffering scheme
that handles log lines up to 16k in length, breaking up anything longer
than that into multiple chunks.  If we can dispense with noting this
detail properly at the end of output, we can switch from using
ReadBytes() to using ReadLine() instead.  We add a field ("Partial") to
the log message structure to flag when we pass data to the log driver
that did not end with a newline.

The Line member of Message structures that we pass to log drivers is now
a slice into data which can be overwritten between calls to the log
driver's Log() method, so drivers which batch up Messages before
processing them need to take additional care: we add a function
(logger.CopyMessage()) that can be used to create a deep copy of a
Message structure, and modify the awslogs driver to use it.

We update the jsonfile log driver to append a "\n" to the data that it
logs to disk only when the Partial flag is false (it previously did so
unconditionally), to make its "logs" output correctly reproduce the data
as we received it.

Likewise, we modify the journald log driver to add a data field with
value CONTAINER_PARTIAL_MESSAGE=true to entries when the Partial flag is
true, and update its "logs" reader to refrain from appending a "\n" to
the data that it retrieves if it does not see this field/value pair (it
also previously did this unconditionally).

fix http://code.huawei.com/docker/docker/issues/324
upstream issue: moby#18057
fix DTS2017062307255
cherry-pick from: moby#22982
conflicts:
	daemon/logger/copier.go
	daemon/logger/journald/read.go
	daemon/logger/logger.go

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com> (github: nalind)
Signed-off-by: Lei Jitang <leijitang@huawei.com>
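
To illustrate the behavior described above, a simplified sketch (not the actual moby code; the json-file driver really writes JSON records, this only shows the newline rule tied to the Partial flag):

package jsonsketch

import "io"

// Simplified Message with only the fields relevant here.
type Message struct {
	Line    []byte
	Partial bool
}

// writeEntry appends a '\n' only when the message is a complete line, so
// reading the log back reproduces the stream exactly as it was received.
func writeEntry(w io.Writer, msg *Message) error {
	if _, err := w.Write(msg.Line); err != nil {
		return err
	}
	if !msg.Partial {
		_, err := w.Write([]byte{'\n'})
		return err
	}
	return nil
}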