
logs trim data more than 16k #1696

Open
guyisra opened this Issue Aug 1, 2017 · 7 comments


guyisra commented Aug 1, 2017

When reading a container's logs, any line longer than 16 KB gets cut off.

Is this a known issue?


davidCarlos commented Aug 3, 2017

I'm having the same problem. In a long log message, part of it is not getting printed to stdout.

shin- (Member) commented Aug 3, 2017

Does it happen with the CLI as well? When you curl the API endpoint?


guyisra commented Aug 7, 2017

Using docker logs doesn't reproduce the issue; it only happens with docker-py.

shin- (Member) commented Aug 7, 2017

@guyisra What version of the library are you using?


guyisra commented Aug 10, 2017

version 1.10.6

shin- (Member) commented Aug 10, 2017

Oh, that's an old version. Improvements to log streaming have been made in 2.x, upgrading will probably help if you can do so.

@keis

This comment has been minimized.

keis commented Aug 16, 2017

What I'm seeing at docker==2.4.2 is output chunked at the 16k boundary. So a longer message is yielded out of logs(stream=True) as multiple 16k messages.

Can this be tweaked?
Is it safe to assume 16k chunks can be rebuffered by the consumer?

edit: this 16k chunk size seems to be what is given by the daemon in _multiplexed_response_stream_helper
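
If rebuffering on the consumer side turns out to be the answer, a minimal sketch of what that could look like (reassemble_lines is a hypothetical helper, not part of docker-py; it only assumes the stream yields raw byte chunks split at arbitrary boundaries such as 16 KiB, as described above):

```python
def reassemble_lines(chunks):
    """Rejoin a stream of byte chunks (split at arbitrary boundaries,
    e.g. the 16 KiB frames seen from logs(stream=True)) into complete
    newline-terminated lines."""
    buf = b""
    for chunk in chunks:
        buf += chunk
        # Emit every complete line currently in the buffer.
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            yield line
    if buf:  # trailing data without a final newline
        yield buf

# Simulated stream: one 40,000-byte log line delivered in 16 KiB chunks,
# mimicking the chunking this thread describes.
message = b"x" * 40000 + b"\n"
chunks = [message[i:i + 16384] for i in range(0, len(message), 16384)]
lines = list(reassemble_lines(chunks))  # the original line, reassembled whole
```

Against a real container you would feed the generator directly, e.g. for line in reassemble_lines(container.logs(stream=True)), assuming a docker-py 2.x Container object.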
