
Saving of scratch volume image not possible: Error response from daemon: empty export - not implemented #38039

Open
countzero opened this issue Oct 15, 2018 · 7 comments
Labels
area/images kind/enhancement

Comments


@countzero countzero commented Oct 15, 2018

Description

The docker save command will not save an image that contains only a FROM scratch base and a VOLUME declaration. It fails with:

Error response from daemon: empty export - not implemented

Steps to reproduce the issue:

  1. Create a Dockerfile
cat <<EOF > Dockerfile
FROM scratch
VOLUME /var/log
EOF
  2. Create an image from that Dockerfile
docker build -t emptyscratchvolume .
  3. Try to save that image with docker save into a .tar file
docker save -o emptyscratchvolume.tar emptyscratchvolume

Describe the results you received:

It fails with:

Error response from daemon: empty export - not implemented

Describe the results you expected:

The docker save command should produce a .tar file that will be compatible with docker load.

Additional information you deem important (e.g. issue happens only occasionally):

The docker save command will work as expected if the image contains at least one file, like the Dockerfile itself:

cat <<EOF > Dockerfile
FROM scratch
COPY Dockerfile .
VOLUME /var/log
EOF
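With that one extra file in place, the full save round trip works; a minimal sketch (the tag name is illustrative, and the docker commands are shown commented since they need a running daemon):

```shell
# Write the workaround Dockerfile: COPYing any file gives the image
# a non-empty filesystem, so docker save no longer hits the
# "empty export" path.
cat <<'EOF' > Dockerfile
FROM scratch
COPY Dockerfile .
VOLUME /var/log
EOF

# Round trip (requires a running Docker daemon):
#   docker build -t scratchvolumeworkaround .
#   docker save -o scratchvolumeworkaround.tar scratchvolumeworkaround
#   docker load -i scratchvolumeworkaround.tar

grep -c '^' Dockerfile   # prints 3
```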

Output of docker version:

Client:
 Version:       18.03.0-ce
 API version:   1.37
 Go version:    go1.9.4
 Git commit:    0520e24
 Built: Wed Mar 21 23:10:06 2018
 OS/Arch:       linux/amd64
 Experimental:  false
 Orchestrator:  swarm

Server:
 Engine:
  Version:      18.03.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.4
  Git commit:   0520e24
  Built:        Wed Mar 21 23:08:35 2018
  OS/Arch:      linux/amd64
  Experimental: false

Output of docker info:

Containers: 7
 Running: 3
 Paused: 0
 Stopped: 4
Images: 197
Server Version: 18.03.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfd04396dc68220d1cecbe686a6cc3aa5ce3667c
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.0-6-amd64
Operating System: Debian GNU/Linux 9 (stretch)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 7.8GiB
Name: c3-development
ID: 5KME:NHKR:SAQH:EVTU:M5Y6:DKML:NU45:7USQ:2PAP:2DAC:ZR5I:QM2K
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):


@thaJeztah thaJeztah commented Oct 19, 2018

The error comes from here:

return nil, fmt.Errorf("empty export - not implemented")


@thaJeztah thaJeztah commented Oct 19, 2018

/cc @tonistiigi perhaps you know if there's a specific reason this was not implemented


@tonistiigi tonistiigi commented Oct 19, 2018

I guess it was for v1.0 transport format compatibility, where every layer is mapped to an image config and therefore the concept of an image without layers doesn't really exist. With the containerd implementation (#38043), v1.0 save support should go away, and that would solve this.

@thaJeztah thaJeztah added kind/enhancement area/images and removed area/builder labels Oct 19, 2018

@thaJeztah thaJeztah commented Oct 19, 2018

Thanks @tonistiigi - makes sense

@countzero wondering: is your example just an "example", or is it your actual use case? From your example, it looks like you're using "data containers" so that you can reference volumes. The "data container" pattern is outdated now that volume management commands are available; for example, to create a named volume for your logs:

docker volume create mylogvolume

That volume can then be attached to a container:

docker run -d --volume mylogvolume:/var/log nginx:alpine

Named volumes will also be automatically created on-the-fly if they don't exist, so running:

docker run -d --volume makemeavolume:/var/log nginx:alpine

will create a volume named makemeavolume if it doesn't exist yet, and otherwise reuse the existing makemeavolume volume; after that, the container is created and the volume is attached to it.


@countzero countzero commented Oct 27, 2018

@thaJeztah Your assumption is correct. I am refactoring "legacy" code that dates back to when Docker was not yet production ready (<v1.0). I just replaced FROM busybox with FROM scratch, which led to this bug report.

Interestingly, the Docker CLI behaviour from ~4 years ago still works as expected. The infrastructure I am working on is controlled with simple Bash scripts. Those could nowadays be replaced by Docker Compose or Docker Swarm or Kubernetes or... you get the point ;D


@ghost ghost commented Jan 16, 2020

I found this bug "in the wild" when I tried to use the Docker layer cache feature of the Bitbucket pipelines CI system.

One of my Dockerfiles contains FROM scratch, producing an image with an empty layer.
Because the CI system tries to export ALL the images after the build for caching, even the empty one, this bug gets triggered.

mudler added a commit to mudler/luet that referenced this issue Feb 9, 2021
We used to create dockerfiles blindly assuming there is content, but
that's not the case for virtual packages.

Due to moby/moby#38039 we are forced into an
"unpleasant" workaround, as we can't create empty FROM scratch images
and export them.

@mudler mudler commented Feb 9, 2021

Are there any plans to fix this? I've bumped into this myself, and I find the behaviour a bit "incoherent": you can create images FROM scratch with just metadata (e.g. by setting a LABEL), so you can build empty images, but you can't export them later.

In my use case I use "data containers" because I'm using docker images to carry data, and in some corner cases I have "empty" images to push and pull from. I can't use volumes, because such images aren't necessarily supposed to be run.
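Until the containerd-backed image store removes the limitation, one possible workaround for such data-carrier images (the file and image names below are illustrative, not from this thread) is to ship a single placeholder file so the export is non-empty:

```shell
# Give the scratch-based data image one (empty) placeholder file;
# the resulting non-empty layer is enough for docker save to export it.
touch .keep
cat <<'EOF' > Dockerfile.data
FROM scratch
COPY .keep /.keep
LABEL purpose="data carrier"
EOF

# Build and export (requires a running Docker daemon):
#   docker build -f Dockerfile.data -t example/datapkg .
#   docker save -o datapkg.tar example/datapkg

head -n 1 Dockerfile.data   # prints: FROM scratch
```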
