
Failed to prune image: image is in use by a container #3983

Closed
bmaupin opened this issue Sep 10, 2019 · 10 comments · Fixed by #3984
Labels
kind/bug · locked - please file new issue/PR

Comments

@bmaupin

bmaupin commented Sep 10, 2019

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I'm frequently unable to run podman image prune -a because I get the following error:

$ podman image prune -a
Error: failed to prune image: Image used by 90b9fa5820c4f1a050ce1bd00f937a5cc63a5e3e4b08a52cf0333a8d9eb45f94: image is in use by a container

More info:

$ buildah containers --all
CONTAINER ID  BUILDER  IMAGE ID     IMAGE NAME                       CONTAINER NAME
a5b76fd2b773           815d76a89459 registry.example.org/containers/base-images/rhel7-mysql-80:latest mysql
348fd256a08b           cedbe259e0a0 docker.io/bmaupin/pspdev:gcc-4.6.4 thirsty_moore
815ebf0b8933           556e8499d9f9 localhost/impersonator:latest    impersonator
7c0af2f8e427     *     36bd7e8bcf55 registry.access.redhat.com/rhoar-nodejs/nodejs-8:latest nodejs-8-working-container
90b9fa5820c4     *     9b574591fcab                                  9b574591fcab120dfdcf52ecbb6354661605ea19beaadbc37579205073145673-working-container
cf2da324a492     *     36bd7e8bcf55 registry.access.redhat.com/rhoar-nodejs/nodejs-8:latest nodejs-8-working-container-1
e1e0571d438f     *     da54719fe50c                                  da54719fe50cc5fe1fedc0452b5125ec4996945e2af09e10804b75ebe478e080-working-container
bff42f40b7a8     *     0fcff1eab16c                                  0fcff1eab16c2eb86fa17044d1bbc567193e548de6460edb08eb40fde5e7e458-working-container
4dab95a379dd     *     76b9b33d5878                                  76b9b33d5878a48e4310a172685b15e74649f4fd06b40b5343395e150bcf96bb-working-container
c05010d15a76     *     36bd7e8bcf55 registry.access.redhat.com/rhoar-nodejs/nodejs-8:latest nodejs-8-working-container-2
91e1e8274022     *     da54719fe50c                                  da54719fe50cc5fe1fedc0452b5125ec4996945e2af09e10804b75ebe478e080-working-container-1
34f8f6a69f5e     *     0fcff1eab16c                                  0fcff1eab16c2eb86fa17044d1bbc567193e548de6460edb08eb40fde5e7e458-working-container-1
b3d0cf7b54d9     *     20b82b08b854                                  20b82b08b8543d5b02419498ddae1ac19b163f1db7260e9d58349138c6884041-working-container
48d592dde705     *     7030953edf66                                  7030953edf669a9fcdd40594d588ea7a6af389c65f6231ee1126066267605963-working-container
131af825d0cc           fa7398dcd3cb localhost/seed-loopback3:latest  seed-loopback3

$ podman image ls
ERRO[0000] error checking if image is a parent "7030953edf669a9fcdd40594d588ea7a6af389c65f6231ee1126066267605963": error reading image "f0c5b4e82e11c970a2f38198b2eaeacd6479b6bc12b90960887cf456ad83ae21": image not known 
ERRO[0000] error checking if image is a parent "f0c5b4e82e11c970a2f38198b2eaeacd6479b6bc12b90960887cf456ad83ae21": error reading image "f0c5b4e82e11c970a2f38198b2eaeacd6479b6bc12b90960887cf456ad83ae21": image not known 
REPOSITORY                                                       TAG         IMAGE ID       CREATED             SIZE
<none>                                                           <none>      e0052b5acc35   About an hour ago   568 MB
<none>                                                           <none>      4e13c4def0df   About an hour ago   568 MB
<none>                                                           <none>      7c5da149659b   About an hour ago   568 MB
localhost/seed-loopback3                                         latest      fa7398dcd3cb   19 hours ago        632 MB
<none>                                                           <none>      94ef4d389a1e   19 hours ago        699 MB
<none>                                                           <none>      f0c5b4e82e11   3 days ago          699 MB
<none>                                                           <none>      7030953edf66   3 days ago          699 MB
<none>                                                           <none>      493974907437   3 days ago          699 MB
localhost/impersonator                                           latest      556e8499d9f9   4 days ago          930 MB
<none>                                                           <none>      c0bcb0376d5a   4 days ago          692 MB
<none>                                                           <none>      b5769e0bd61a   4 days ago          692 MB
registry.example.org/containers/base-images/rhel7-node-oracle   8           fe5c0145a44a   5 days ago          793 MB
registry.example.org/containers/redhat/nodejs-8                 latest      36bd7e8bcf55   5 days ago          568 MB
registry.access.redhat.com/rhoar-nodejs/nodejs-8                 latest      36bd7e8bcf55   5 days ago          568 MB
registry.example.org/containers/base-images/rhel7-mysql-80      latest      815d76a89459   13 days ago         478 MB
docker.io/bmaupin/pspdev                                         gcc-4.6.4   cedbe259e0a0   2 months ago        911 MB

$ podman container ls -a
CONTAINER ID  IMAGE                                                               COMMAND               CREATED       STATUS                     PORTS                   NAMES
131af825d0cc  localhost/seed-loopback3:latest                                     node_modules/.bin...  19 hours ago  Exited (127) 19 hours ago                          seed-loopback3
815ebf0b8933  localhost/impersonator:latest                                       node_modules/.bin...  4 days ago    Exited (130) 4 days ago                            impersonator
348fd256a08b  docker.io/bmaupin/pspdev:gcc-4.6.4                                  ./libretro-buildb...  5 days ago    Created                                            thirsty_moore
a5b76fd2b773  registry.example.org/containers/base-images/rhel7-mysql-80:latest  sh -c run-mysqld ...  11 days ago   Exited (0) 5 days ago      0.0.0.0:3306->3306/tcp  mysql

Steps to reproduce the issue:

See Description

Describe the results you received:

See Description

Describe the results you expected:

I would've expected podman image prune -a to work the same way docker image prune -a does:

Remove all unused images, not just dangling ones

https://docs.docker.com/engine/reference/commandline/image_prune/#options

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:            1.5.1
RemoteAPI Version:  1
Go Version:         go1.10.4
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.10.4
  podman version: 1.5.1
host:
  BuildahVersion: 1.10.1
  Conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.0, commit: unknown'
  Distribution:
    distribution: ubuntu
    version: "18.04"
  MemFree: 3998068736
  MemTotal: 16756355072
  OCIRuntime:
    package: 'containerd.io: /usr/bin/runc'
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8
      commit: 425e105d5a03fabd737a126ad93d62a9eeede87f
      spec: 1.0.1-dev
  SwapFree: 8161837056
  SwapTotal: 8189374464
  arch: amd64
  cpus: 8
  eventlogger: journald
  hostname: host
  kernel: 5.0.0-27-generic
  os: linux
  rootless: true
  uptime: 88h 40m 44.79s (Approximately 3.67 days)
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
store:
  ConfigFile: /home/user/.config/containers/storage.conf
  ContainerStore:
    number: 17
  GraphDriverName: vfs
  GraphOptions: null
  GraphRoot: /home/user/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 65
  RunRoot: /tmp/1000
  VolumePath: /home/user/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

$ apt list podman
Listing... Done
podman/bionic,now 1.5.1-1~ubuntu18.04~ppa1 amd64 [installed]

Additional environment details (AWS, VirtualBox, physical, etc.):

Physical machine running Ubuntu 18.04.

Thanks!

@openshift-ci-robot added the kind/bug label Sep 10, 2019
@TomSweeneyRedHat
Member

Thanks for the issue @bmaupin. You've added a link to #3982; I just wanted to note that further background discussion can be found there.

mheon added a commit to mheon/libpod that referenced this issue Sep 10, 2019
Podman is not the only user of containers/storage, and as such we
cannot rely on our database as the sole source of truth when
pruning images. If images do not show as in use from Podman's
perspective, but subsequently fail to remove because they are
being used by a container, they're probably being used by Buildah
or another c/storage client.

Since the images in question are in use, we shouldn't error on
failure to prune them - we weren't supposed to prune them in the
first place.

Fixes: containers#3983

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
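
In other words, if an image is held by a Buildah (or other containers/storage) working container, prune should skip it rather than abort. A minimal shell sketch of that scenario, assuming buildah is installed (the image name is only an example taken from the report above):

$ buildah from registry.access.redhat.com/rhoar-nodejs/nodejs-8:latest   # creates a build container that podman does not track
$ podman image prune -a
# before #3984: aborts with "failed to prune image: ... image is in use by a container"
# after #3984: the in-use image is skipped and the remaining unused images are pruned
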
@mheon
Member

mheon commented Sep 10, 2019

#3984 to fix

@bmaupin
Author

bmaupin commented Sep 12, 2019

I don't understand how I'm the first one to encounter this issue. Am I the only podman user using both podman build (I haven't ever explicitly used buildah) and podman prune -a? So bizarre!

@mheon
Member

mheon commented Sep 12, 2019

From issue reports I've seen, most people are using podman build which cleans up its containers automatically as builds complete... But you're right, it does seem unlikely that nobody is using prune on a system with Buildah or CRI-O installed.

@ecs-hk

ecs-hk commented Sep 27, 2019

I don't understand how I'm the first one to encounter this issue. Am I the only podman user using both podman build (I haven't ever explicitly used buildah) and podman prune -a? So bizarre!

You're not alone.

[jilin]~> podman container list
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES

[jilin]~> podman image prune
Error: failed to prune image: Image used by 62d0dcaf932e512f8f57d728adb3cd7cf00027ec03deb9d254f53071bd2c1ec4: image is in use by a container

[jilin]~> podman container kill 62d0dcaf932e512f8f57d728adb3cd7cf00027ec03deb9d254f53071bd2c1ec4
Error: no container with name or ID 62d0dcaf932e512f8f57d728adb3cd7cf00027ec03deb9d254f53071bd2c1ec4 found: no such container

Pretty mysterious. (I never use buildah either.)

@qhaas

qhaas commented Jan 3, 2020

Ran into this today. Given that buildah and podman are typically used together (iirc, podman calls buildah's libraries when building images), I figured something was amiss with buildah when I saw this behavior in podman:

podman image prune -a
Error: failed to prune image: Image used by 693470ed1189ed805908dca2ea408fff862f14d0d5832ff71c41f1de5b005748: image is in use by a container

However, podman images -a and podman ps -a showed nothing. I then checked buildah with buildah containers -a and buildah images -a and found plenty there. The solution? I just cleared out everything buildah was aware of with buildah rm -a and buildah rmi -a. After that, podman system prune -a and podman image prune -a worked like a charm.
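
Put together, a sketch of that cleanup sequence (note that buildah rmi -a removes all local images in the shared containers/storage, so only run it if you can re-pull or rebuild them):

$ buildah rm -a             # remove all buildah working containers
$ buildah rmi -a            # remove all images known to the shared store (destructive)
$ podman system prune -a
$ podman image prune -a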

@acidghost

This seems to have happened to me after interrupting (via Ctrl-c) a parallel build with podman build --jobs=0 ....

For reference, buildah rm -a solved the issue.

@wieck

wieck commented Feb 17, 2022

I get the same error and don't even have buildah installed.

@florian-s-code

Try the command:
podman ps -a --external
The missing container(s) will appear. I am not sure where they came from, but there they are.

You can also check directly in /var/lib/containers/storage/overlay-containers/.
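
A sketch of that check, using only commands already reported in this thread (buildah rm requires buildah to be installed; the prune should then succeed if nothing else is holding the images):

$ podman ps -a --external   # also shows storage containers created outside podman (e.g. by builds)
$ buildah rm -a             # clears those external build containers
$ podman image prune -a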

@rhatdan
Member

rhatdan commented Feb 21, 2022

Usually external containers come from broken podman builds or running buildah.

@github-actions bot added the locked - please file new issue/PR label Sep 20, 2023
@github-actions bot locked as resolved and limited conversation to collaborators Sep 20, 2023