Implement optional ring buffer for container logs #28762

Merged

merged 2 commits into moby:master from cpuguy83:logger_ring_buffer Feb 1, 2017

Conversation

@cpuguy83
Contributor

cpuguy83 commented Nov 23, 2016

This allows the user to set a logging mode to "blocking" (the default) or
"non-blocking", which uses the ring buffer as a proxy to the real log
driver.

This allows a container to never be blocked on stdio, at the cost of
dropping log messages.

Introduces a new flag to docker run and dockerd, --log-mode, which
takes a value of "blocking" or "non-blocking". I chose not to implement
this as a bool since it is difficult to determine whether the mode was set to
false vs. just not set, which is especially difficult when merging the default
daemon config with the container config.

One thing to think about here is that logger.Message can be any size; it depends on how long the log line was. A message could be almost nothing, or it could be several megabytes, which could cause problems if we just blindly buffer N messages in the ring buffer.
We could keep track of the total size, but that would be tricky and introduces locking into the implementation.
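
For illustration, a minimal sketch of the non-blocking idea, assuming a stripped-down Message type; all names here are placeholders, not this PR's actual code. The producer never blocks: when the buffer is full, the oldest entry is evicted.

package main

import "fmt"

// Message stands in for logger.Message; only the field this sketch
// needs is shown.
type Message struct {
	Line []byte
}

// ring buffers up to cap(queue) messages. Enqueue never blocks the
// producer; when the buffer is full it evicts the oldest message.
type ring struct {
	queue chan *Message
}

func newRing(size int) *ring {
	return &ring{queue: make(chan *Message, size)}
}

func (r *ring) Enqueue(m *Message) {
	for {
		select {
		case r.queue <- m: // room available
			return
		default:
			// Full: drop the oldest entry, then retry the send.
			select {
			case <-r.queue:
			default:
			}
		}
	}
}

// Dequeue blocks until a message is available, as the consumer
// feeding the real log driver would.
func (r *ring) Dequeue() *Message {
	return <-r.queue
}

func main() {
	r := newRing(2)
	for i := 0; i < 5; i++ {
		r.Enqueue(&Message{Line: []byte(fmt.Sprintf("line %d", i))})
	}
	// Only the two newest lines survive; lines 0-2 were dropped.
	fmt.Println(string(r.Dequeue().Line)) // line 3
	fmt.Println(string(r.Dequeue().Line)) // line 4
}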

@justincormack

Contributor

justincormack commented Nov 23, 2016

Seems like one reasonable solution. I wonder if the ring buffer should be a fixed size in bytes, though (which could be specified). Then this would carry over simply to a compatible interface via a pipe, where you could set the pipe buffer size and discard if there's no space. There are some issues around message boundaries, though; you could use datagrams and allow truncation.

@LK4D4

Contributor

LK4D4 commented Nov 23, 2016

@cpuguy83 maybe embed it into log-opt?

@cpuguy83

Contributor

cpuguy83 commented Nov 23, 2016

@cpuguy83 maybe embed it into log-opt?

@LK4D4 The reason I didn't want to do that is that log-opt tends to be options for the driver, whereas this changes how docker itself handles logs.

@LK4D4

Contributor

LK4D4 commented Nov 23, 2016

I agree with @justincormack about specifying size. However, computing size might be challenging. The number of messages might be okay too for the average use case.
Design +1

@AkihiroSuda

Member

AkihiroSuda commented Nov 24, 2016

Design SGTM 👍

@thaJeztah

Member

thaJeztah commented Nov 26, 2016

One thing to think about here is logger.Message could be any size, it just depends on how long the log line was...

Perhaps a silly comment, but https://github.com/docker/docker/pull/22982/files#diff-0e0dba78caf4a57795625b5ce9943c9eR13 introduced a buffer size, i.e., messages longer than that are split (even if there's no newline); or is that not relevant here?

@cpuguy83

Contributor

cpuguy83 commented Nov 26, 2016

That's true, there is a max. But I think people want to control the buffer based on size, not number of messages.
So I've pushed a new implementation doing just this.
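
Roughly what a size-bounded, never-blocking buffer can look like; this is a sketch consistent with the discussion (a sync.Cond-based queue with a byte budget), not the code pushed here, and the drop-newest policy is just one possible choice:

package logbuffer

import (
	"sync"
	"time"
)

// Message stands in for logger.Message.
type Message struct {
	Line      []byte
	Source    string
	Timestamp time.Time
}

// sizeRing bounds the queue by total bytes rather than message
// count. A sync.Cond lets the consumer sleep while the buffer is
// empty; the producer never blocks.
type sizeRing struct {
	mu       sync.Mutex
	wait     *sync.Cond
	queue    []*Message
	size     int // bytes currently queued
	maxBytes int
}

func newSizeRing(maxBytes int) *sizeRing {
	r := &sizeRing{maxBytes: maxBytes}
	r.wait = sync.NewCond(&r.mu)
	return r
}

// Enqueue adds a message unless the byte budget is exhausted, in
// which case the new message is dropped rather than blocking stdio.
func (r *sizeRing) Enqueue(m *Message) (dropped bool) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.size+len(m.Line) > r.maxBytes {
		return true
	}
	r.queue = append(r.queue, m)
	r.size += len(m.Line)
	r.wait.Signal() // wake a sleeping consumer
	return false
}

// Dequeue blocks until a message is available.
func (r *sizeRing) Dequeue() *Message {
	r.mu.Lock()
	defer r.mu.Unlock()
	for len(r.queue) == 0 {
		r.wait.Wait()
	}
	m := r.queue[0]
	r.queue = r.queue[1:]
	r.size -= len(m.Line)
	return m
}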

@cpuguy83

Contributor

cpuguy83 commented Nov 26, 2016

@justincormack Implementing this in pkg/ioutils.BytesPipe is fairly trivial, I think, but I'm wondering what's better: dropping random byte slices or whole messages.

@cpuguy83

Contributor

cpuguy83 commented Nov 26, 2016

Implementing this in BytesPipe is certainly significantly faster, since we don't need to do any extra allocations, whereas each call to Log() as implemented here requires one allocation.

I went ahead and added an implementation in BytesPipe just to look at, presented without tests or benchmarks at the moment.

@justincormack

Contributor

justincormack commented Nov 26, 2016

@cpuguy83

Contributor

cpuguy83 commented Nov 28, 2016

Removed the commit with the BytesPipe changes.

@LK4D4

Contributor

LK4D4 commented Nov 28, 2016

The Cond-based buffer is kinda cool. Will review later today.

@cpuguy83

Contributor

cpuguy83 commented Nov 28, 2016

Benchmarks:

BenchmarkRingLoggerThroughputNoReceiver-8           	300000000	       135 ns/op	 229.05 MB/s	     133 B/op	       1 allocs/op
BenchmarkRingLoggerThroughputWithReceiverDelay0-8   	200000000	       212 ns/op	 146.06 MB/s	     113 B/op	       1 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay1-8        	300000000	       130 ns/op	 237.55 MB/s	     133 B/op	       1 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay10-8       	300000000	       132 ns/op	 234.68 MB/s	     133 B/op	       1 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay50-8       	300000000	       126 ns/op	 244.84 MB/s	     133 B/op	       1 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay100-8      	300000000	       131 ns/op	 235.54 MB/s	     133 B/op	       1 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay300-8      	300000000	       130 ns/op	 237.30 MB/s	     133 B/op	       1 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay500-8      	300000000	       128 ns/op	 240.39 MB/s	     133 B/op	       1 allocs/op

The happiest case (no receiver delay) is significantly slower. I think this is due to lock contention, but it probably does not matter much unless the log driver is doing its own queueing.
Even with a 1ms delay, throughput is back up.
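
The benchmark names suggest a harness along these lines; this is a reconstruction against the sizeRing sketch above, not the PR's actual test code. b.SetBytes is what makes go test report MB/s, and the consumer goroutine's sleep models a slow log driver:

package logbuffer

import (
	"testing"
	"time"
)

func benchmarkRingConsumeDelay(b *testing.B, delay time.Duration) {
	r := newSizeRing(1 << 20) // 1 MiB budget, an arbitrary choice
	go func() {
		// Simulate a log driver that takes `delay` per message.
		for {
			r.Dequeue()
			time.Sleep(delay)
		}
	}()
	line := []byte("a reasonably sized log line for benchmarking")
	b.SetBytes(int64(len(line))) // report throughput in MB/s
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		r.Enqueue(&Message{Line: line})
	}
}

func BenchmarkRingLoggerThroughputConsumeDelay1(b *testing.B) {
	benchmarkRingConsumeDelay(b, 1*time.Millisecond)
}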

@cpuguy83

Contributor

cpuguy83 commented Nov 29, 2016

Updated benchmarks after fixing the copy logic.
The no-delay scenario is really slow here.
Specifically, when I use a sync.Pool to handle messages when copying/done with them, the no-delay case drops from around 125 MB/sec to 75 MB/sec, but the other cases go from ~170 MB/sec to ~220 MB/sec.

BenchmarkRingLoggerThroughputNoReceiver-8           	300000000	       146 ns/op	 211.68 MB/s	      69 B/op	       1 allocs/op
BenchmarkRingLoggerThroughputWithReceiverDelay0-8   	100000000	       388 ns/op	  79.84 MB/s	      52 B/op	       1 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay1-8        	300000000	       170 ns/op	 181.94 MB/s	      69 B/op	       1 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay10-8       	300000000	       153 ns/op	 201.78 MB/s	      69 B/op	       1 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay50-8       	300000000	       138 ns/op	 224.37 MB/s	      69 B/op	       1 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay100-8      	300000000	       146 ns/op	 211.85 MB/s	      69 B/op	       1 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay300-8      	300000000	       143 ns/op	 216.44 MB/s	      69 B/op	       1 allocs/op
BenchmarkRingLoggerThroughputConsumeDelay500-8      	300000000	       144 ns/op	 214.35 MB/s	      69 B/op	       1 allocs/op
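
A sketch of the sync.Pool approach being described, consistent with the PutMessage/reset contract that comes up later in this thread; the details (field names, initial capacity) are assumptions:

package logbuffer

import (
	"sync"
	"time"
)

// messagePool recycles Message structs so the hot logging path can
// avoid an allocation per line, at the cost of a strict contract:
// a Message must not be touched after it is returned to the pool.
var messagePool = sync.Pool{
	New: func() interface{} {
		return &Message{Line: make([]byte, 0, 256)}
	},
}

// NewMessage hands out a fresh or recycled message.
func NewMessage() *Message {
	return messagePool.Get().(*Message)
}

// PutMessage resets the message and returns it to the pool; any
// field the caller still needs must be copied out first.
func PutMessage(msg *Message) {
	msg.reset()
	messagePool.Put(msg)
}

func (m *Message) reset() {
	m.Line = m.Line[:0]
	m.Source = ""
	m.Timestamp = time.Time{}
}
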
@cpuguy83

Contributor

cpuguy83 commented Nov 29, 2016

I think I'm going to make some changes to the copier.

@cpuguy83 cpuguy83 changed the title from Implement optional ring buffer for container logs to [WIP] Implement optional ring buffer for container logs Nov 29, 2016

@cpuguy83 cpuguy83 changed the title from [WIP] Implement optional ring buffer for container logs to Implement optional ring buffer for container logs Dec 9, 2016

@cpuguy83

Contributor

cpuguy83 commented Dec 9, 2016

Ok, what I've done is gone ahead and made the copier handle the allocations.
This simplifies some things for log drivers that buffer (and I'm not confident that prior to this PR we were doing the right thing w.r.t. making copies of the log message in buffered drivers).

It evens out the performance of the ring buffer but causes more allocations and more bytes moved per op.
Still, the performance here is a bit better than before #22982.
For comparison:

Before #22982 (<= 1.12)

Size Ops Speed Memory Allocs
1K 10000 141169 ns/op 16351 B/op 39 allocs/op
2K 20000 98573 ns/op 26577 B/op 39 allocs/op
4K 10000 105285 ns/op 47058 B/op 39 allocs/op
8K 10000 138987 ns/op 129302 B/op 59 allocs/op
16K 10000 171856 ns/op 294591 B/op 100 allocs/op
32K 5000 307481 ns/op 624206 B/op 152 allocs/op
64K 2000 582192 ns/op 1283438 B/op 245 allocs/op
128K 1000 1212861 ns/op 2601906 B/op 422 allocs/op
256K 1000 2267578 ns/op 5238848 B/op 766 allocs/op

After #22982 (1.13.0-rcX)

Size Ops Speed Memory Allocs
1K 10000 101876 ns/op 17664 B/op 20 allocs/op
2K 10000 100582 ns/op 17664 B/op 20 allocs/op
4K 10000 103867 ns/op 17664 B/op 20 allocs/op
8K 10000 117245 ns/op 17664 B/op 20 allocs/op
16K 10000 120569 ns/op 17664 B/op 20 allocs/op
32K 5000 236115 ns/op 17664 B/op 20 allocs/op
64K 3000 434702 ns/op 17664 B/op 20 allocs/op
128K 2000 839920 ns/op 17664 B/op 20 allocs/op
256K 1000 1611549 ns/op 17664 B/op 20 allocs/op

This PR

Size Ops Speed Memory Allocs
1K 300000 40589 ns/op 28800 B/op 39 allocs/op
2K 300000 56270 ns/op 39040 B/op 39 allocs/op
4K 100000 129520 ns/op 59520 B/op 39 allocs/op
8K 100000 154120 ns/op 100480 B/op 39 allocs/op
16K 100000 238414 ns/op 182400 B/op 39 allocs/op
32K 30000 448035 ns/op 347200 B/op 59 allocs/op
64K 20000 809671 ns/op 676800 B/op 99 allocs/op
128K 10000 1547572 ns/op 1336000 B/op 179 allocs/op
256K 5000 3110902 ns/op 2654400 B/op 339 allocs/op

Delay Ops Speed Throughput Memory Allocs
NoReceiver 200000000 87.9 ns/op 352.59 MB/s 37 B/op 0 allocs/op
ConsumeDelay0 100000000 152 ns/op 203.78 MB/s 20 B/op 0 allocs/op
ConsumeDelay1 200000000 90.5 ns/op 342.47 MB/s 37 B/op 0 allocs/op
ConsumeDelay10 200000000 86.1 ns/op 359.98 MB/s 37 B/op 0 allocs/op
ConsumeDelay50 200000000 85.1 ns/op 364.14 MB/s 37 B/op 0 allocs/op
ConsumeDelay100 200000000 82.3 ns/op 376.85 MB/s 37 B/op 0 allocs/op
ConsumeDelay300 200000000 81.4 ns/op 380.61 MB/s 37 B/op 0 allocs/op
ConsumeDelay500 200000000 78.3 ns/op 396.16 MB/s 37 B/op 0 allocs/op
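
"Make the copier handle the allocations" means, roughly, that the goroutine reading a container's stdio gives every line its own freshly allocated (or pool-recycled) Message before handing it to the driver, so buffering drivers never alias the reader's scratch buffer. A sketch under the same assumed types as above:

package logbuffer

import (
	"bufio"
	"io"
	"time"
)

// Logger is the minimal driver interface this sketch assumes.
type Logger interface {
	Log(*Message) error
}

// copySrc reads one stdio stream line by line and hands each line
// to the driver in its own Message. The line is copied out of the
// scanner's internal buffer, so the driver may keep the Message as
// long as it likes (until it calls PutMessage).
func copySrc(dst Logger, source string, src io.Reader) error {
	scanner := bufio.NewScanner(src)
	for scanner.Scan() {
		msg := NewMessage()
		msg.Line = append(msg.Line, scanner.Bytes()...)
		msg.Source = source
		msg.Timestamp = time.Now().UTC()
		if err := dst.Log(msg); err != nil {
			return err
		}
	}
	return scanner.Err()
}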

@LK4D4 LK4D4 merged commit dc20f2a into moby:master Feb 1, 2017

4 checks passed

dco-signed: All commits are signed
experimental: Jenkins build Docker-PRs-experimental 30218 has succeeded
janky: Jenkins build Docker-PRs 38832 has succeeded
windowsRS1: Jenkins build Docker-PRs-WoW-RS1 9873 has succeeded

@cpuguy83 cpuguy83 deleted the cpuguy83:logger_ring_buffer branch Feb 2, 2017

cpuguy83 added a commit to cpuguy83/docker.github.io that referenced this pull request Feb 2, 2017

Adds details about new non-blocking logging mode
This functionality was added in moby/moby#28762.
It allows a user to specify what to do when a log message can't
be sent: block the container's stdio or drop messages.

Also removes an erroneous entry about a `nats` log driver, which does
not exist.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>

mistyhacks added a commit to mistyhacks/docker.github.io that referenced this pull request Feb 24, 2017

mistyhacks added a commit to docker/docker.github.io that referenced this pull request Mar 8, 2017

mistyhacks added a commit to docker/docker.github.io that referenced this pull request Mar 17, 2017

JimGalasyn pushed a commit to docker/docker.github.io that referenced this pull request Mar 23, 2017

mistyhacks added a commit to docker/docker.github.io that referenced this pull request Mar 30, 2017

mistyhacks added a commit to docker/docker.github.io that referenced this pull request Mar 31, 2017

mistyhacks added a commit to docker/docker.github.io that referenced this pull request Apr 5, 2017

mistyhacks added a commit to docker/docker.github.io that referenced this pull request Apr 5, 2017

mistyhacks added a commit to docker/docker.github.io that referenced this pull request Apr 6, 2017
@glasser

Contributor

glasser commented Jun 26, 2017

Based on reading the source, it looks like there's a serious bug here in the journald and syslog drivers. (I haven't managed to verify it yet with an actual build.) In both of those drivers, you call logger.PutMessage(msg) and then afterwards read msg.Source. But PutMessage calls msg.reset(), which clears Source. Am I missing something, or do the drivers need to save those values before calling PutMessage?

@cpuguy83

Contributor

cpuguy83 commented Jun 26, 2017

@glasser PutMessage should only be called after the value has already been copied.

For instance:

line := string(msg.Source)

This copies the value of msg.Source rather than the reference to msg.Source.

@glasser

Contributor

glasser commented Jun 26, 2017

I agree. But look at this code this PR put into the syslog driver, which remains there to this day:

func (s *syslogger) Log(msg *logger.Message) error {
	line := string(msg.Line)
	logger.PutMessage(msg)
	if msg.Source == "stderr" {
		return s.writer.Err(line)
	}
	return s.writer.Info(line)
}

That sure seems to read from Source after calling PutMessage. Similar code exists in the journald driver.

@cpuguy83

Contributor

cpuguy83 commented Jun 26, 2017

@glasser Yeah, that looks like an issue!
Do you want to open a PR to fix this?
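
The fix (see glasser's commits below) is to copy everything the driver still needs out of the message before returning it to the pool; in the same shape as the snippet quoted above, roughly:

func (s *syslogger) Log(msg *logger.Message) error {
	line := string(msg.Line)
	source := msg.Source // copy before PutMessage resets the message
	logger.PutMessage(msg)
	if source == "stderr" {
		return s.writer.Err(line)
	}
	return s.writer.Info(line)
}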

glasser added a commit to glasser/moby that referenced this pull request Jun 26, 2017

Fix stderr logging for journald and syslog
logger.PutMessage, added in moby#28762 (v17.04.0-ce), clears msg.Source. So journald
and syslog were treating stderr messages as if they were stdout.

Signed-off-by: David Glasser <glasser@davidglasser.net>
@glasser

Contributor

glasser commented Jun 26, 2017

skuenzli added a commit to skuenzli/docker.github.io that referenced this pull request Dec 24, 2017

Document how and why to configure the `mode` and `max-buffer-size` log options.

Documentation based on:
* implementation and comments of the [Ring Logger](https://github.com/moby/moby/blob/master/daemon/logger/ring.go#L49)
* conversation in implementing [PR 28762](moby/moby#28762)

skuenzli added a commit to skuenzli/docker.github.io that referenced this pull request Dec 25, 2017