
Specifying chunk_size when saving images results in a max chunk of size 32768, with wrap-around #2122

@gooosetavo

Description

The docstring in docker.models.images.Image#save states that 'data will be streamed as it is received. Default: 2 MB'. This default is from docker.constants#DEFAULT_DATA_CHUNK_SIZE which is set to 1024 * 2048, or 2MB.

However, when iterating over the chunked results, each chunk is capped at 32768 bytes, regardless of the chunk_size setting.

# using default of "2 MB"
import docker
d = docker.from_env() 
for chunk in d.images.get('alpine:3.7').save():
    print("chunk of size {}".format(len(chunk)))
# >chunk of size 32768
# >chunk of size 32768
# ... 
# continues to iterate until hitting image size ~4 MB
...
# >chunk of size 27136
# ^ last remaining chunk 

# using a smaller size than 32768
import docker
d = docker.from_env() 
for chunk in d.images.get('alpine:3.7').save(chunk_size=32728): # 40 bytes less than the 32768 cap
    print("chunk of size {}".format(len(chunk)))
# >chunk of size 32728
# >chunk of size 40
# >chunk of size 32728
# >chunk of size 40
# ... 
# wrap-around continues 
# ...

Since the example given in the docstring simply concatenates the chunks into a file, this behavior causes no problems there. However, anyone who actually relies on a specific chunk size, for example to encrypt each chunk before writing it to a file, will be in for a surprise with the wrap-around behavior.

I'm guessing the hard limit comes from a buffer-size constraint in either the Docker client or the server, but either way this behavior should be addressed: either the docs should reflect the actual behavior, or the client API should yield chunks of the requested size. Unless I'm missing something here... which is totally possible.
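For anyone hitting this in the meantime, the stream can be re-sliced on the caller's side so that every chunk (except possibly the last) has exactly the requested size. This is a hedged sketch, not part of docker-py; rechunk is a hypothetical helper, and the raw list below just simulates the 32768-byte chunks observed above:

```python
def rechunk(stream, chunk_size):
    """Re-slice an iterable of byte chunks into exact chunk_size pieces.

    The final chunk may be shorter if the stream's total length is not
    a multiple of chunk_size.
    """
    buf = b""
    for piece in stream:
        buf += piece
        # Emit full-sized chunks as soon as the buffer holds enough bytes.
        while len(buf) >= chunk_size:
            yield buf[:chunk_size]
            buf = buf[chunk_size:]
    if buf:
        yield buf

# Simulate the observed behavior: the API yields 32768-byte chunks,
# with a short final chunk (sizes taken from the alpine:3.7 run above).
raw = [b"\x00" * 32768, b"\x00" * 32768, b"\x00" * 27136]
sizes = [len(c) for c in rechunk(iter(raw), 32728)]
print(sizes)
```

In practice the stream argument would be the generator returned by Image.save(), e.g. rechunk(d.images.get('alpine:3.7').save(), 32728), which removes the wrap-around at the cost of one extra buffer copy per chunk.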
