Docker logging plugins and error handling #40623

Closed

FedericoPonzi opened this issue Mar 4, 2020 · 3 comments

FedericoPonzi commented Mar 4, 2020

Description
Looking at the provided example plugin, there is little error handling, and the webpage doesn't contain much useful information in this regard.

I was wondering what would be the suggested error handling flow:

  • If for any reason I get desynchronized with the stream, what should I do?
  • If I close my side of the FIFO, can I expect Docker to send another /StartLogging request?
  • Since the [size] header is a uint32, the maximum size of a LogEntry could in principle be up to 4 GB. Is Docker going to send me log messages of that size? I don't think so, because of the truncated field, so what's the maximum size Docker will send per log entry?

Basically, I fear some sort of desynchronization with the logging stream that could lead to reading a bogus [size] of up to 4 GB, so I would like to use the maximum log-entry size as a guard. My problem, though, is that I don't want to stop logging, so I'm wondering whether there is a way to resynchronize after losing sync.

Thanks!

cpuguy83 commented Mar 5, 2020

if for any reason I get desynchronized with the stream, what should I do?

I'm not sure what you mean here.
The FIFO is a pipe buffer. If data is written on one end, it must be read on the other or it will block the writer (once the buffer is full).
If the buffer fills, the container will be blocked on writing to stdio.
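
(For illustration, a minimal Rust sketch of the read side, assuming the FIFO path arrives in the /StartLogging request body; the function name is made up:)

```rust
use std::fs::File;
use std::io::BufReader;

// Open the read side of the FIFO whose path Docker passes in the
// /StartLogging request. On Linux, opening a FIFO read-only blocks
// until the writer (Docker) opens the other end.
fn open_log_stream(fifo_path: &str) -> std::io::Result<BufReader<File>> {
    let fifo = File::open(fifo_path)?;
    // The plugin must keep draining this continuously: if it stops
    // reading, the pipe buffer fills and the container blocks on stdio.
    Ok(BufReader::new(fifo))
}
```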

If I close my side of the fifo can I expect docker to send another /StartLogging request?

If you've closed it, then it's up to you to re-open.

then what's the maximum size docker is going to send per log entry?

Docker's log handling is line-based. Lines are truncated if they are larger than 1MB.
Given that there is some extra data that goes along with a log line, it is possible that the log entry is just over 1MB.
The log buffer may have multiple log entries in it, but each entry will be framed with the size of the entry.
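
(For illustration, a minimal Rust sketch of reading one framed entry with a sanity guard. The stream framing assumed here is a 4-byte big-endian uint32 size prefix followed by the protobuf-encoded LogEntry, matching moby's logdriver stream; the 2MB cap and names are illustrative:)

```rust
use std::io::{self, Read};

// Sanity cap on a single entry. Docker truncates lines around 1MB, so a
// frame claiming to be much larger almost certainly means we've lost sync.
// (The 2MB figure is an assumption, not a documented limit.)
const MAX_ENTRY_SIZE: u32 = 2 * 1024 * 1024;

// Read one length-prefixed log entry from the FIFO: a 4-byte big-endian
// uint32 size header, then that many bytes of protobuf-encoded LogEntry.
fn read_entry<R: Read>(stream: &mut R) -> io::Result<Vec<u8>> {
    let mut header = [0u8; 4];
    stream.read_exact(&mut header)?;
    let size = u32::from_be_bytes(header);

    if size > MAX_ENTRY_SIZE {
        // Treat an absurd size as desynchronization rather than trying
        // to allocate up to 4GB for a "frame" that is really garbage.
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            format!("frame size {size} exceeds sanity limit"),
        ));
    }

    let mut payload = vec![0u8; size as usize];
    stream.read_exact(&mut payload)?;
    Ok(payload) // hand this to a protobuf decoder for the LogEntry
}
```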

@FedericoPonzi

Thanks for the detailed answer!

If, for example, there is a bug in the protobuf library (I'm working in Rust, so it's still a young ecosystem) that makes it read one byte more than requested, I might end up consuming part of the next entry's [size] header.
I can detect this by setting a reasonable size limit (say 2MB), but what should I do if I discover such an issue? Just give up reading from the stream altogether? That would cause weird issues on the container side, given what you said about blocking the stdout write, right?

cpuguy83 commented Mar 6, 2020

If you read too much from the stream, you'll need to track which chunk of data you actually want to deserialize and keep the rest in a buffer that you can pull from on the next read.
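
(In Rust terms, that bookkeeping might look something like the sketch below. It accumulates raw bytes, carves complete frames off the front, and carries any over-read remainder into the next call; the struct and frame layout follow the framing discussed above and are not part of the plugin API itself:)

```rust
use std::io::{self, Read};

// Carry-over buffering: accumulate raw bytes, carve complete frames off
// the front, keep any over-read remainder for the next call.
// Frame layout assumed: 4-byte big-endian uint32 size prefix + payload.
struct FrameReader<R: Read> {
    inner: R,
    buf: Vec<u8>,
}

impl<R: Read> FrameReader<R> {
    fn new(inner: R) -> Self {
        FrameReader { inner, buf: Vec::new() }
    }

    // Return the next complete frame's payload, reading more bytes only
    // when the buffer doesn't yet hold a whole frame.
    fn next_frame(&mut self) -> io::Result<Vec<u8>> {
        loop {
            if self.buf.len() >= 4 {
                let size = u32::from_be_bytes(
                    [self.buf[0], self.buf[1], self.buf[2], self.buf[3]],
                ) as usize;
                if self.buf.len() >= 4 + size {
                    let payload = self.buf[4..4 + size].to_vec();
                    self.buf.drain(..4 + size); // keep over-read bytes for next time
                    return Ok(payload);
                }
            }
            // Not enough buffered data for a full frame: pull another chunk.
            let mut chunk = [0u8; 8192];
            let n = self.inner.read(&mut chunk)?;
            if n == 0 {
                return Err(io::Error::new(
                    io::ErrorKind::UnexpectedEof,
                    "FIFO closed mid-frame",
                ));
            }
            self.buf.extend_from_slice(&chunk[..n]);
        }
    }
}
```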

cpuguy83 closed this as completed Mar 8, 2020