
tar 1.35.1 (rawhide): podman import, with modified tar: duplicates of file paths not supported #19407

Closed
edsantiago opened this issue Jul 27, 2023 · 9 comments · Fixed by #21271
Labels: locked - please file new issue/PR

Comments

@edsantiago (Collaborator)

Reproducer (copy/paste/run):

#!/bin/bash

i=quay.io/libpod/testimage:20221018

bin/podman create --name a $i
bin/podman export -o /tmp/a.tar a
tar --delete -f /tmp/a.tar home/podman/pause
bin/podman import /tmp/a.tar b

(I used podman built from source for convenience, but it also fails with podman 4.5 from RPM.)

Import barfs with:

e31ce32f59ee1840830d534efa2fa4f38473b2e56c7410834523a5c554b1719a
Getting image source signatures
Copying blob 51098d81131b done   | 
Error: writing blob: adding layer with blob "sha256:51098d81131bac1a1a8539c68e4389e15fe1a800e4922eecd9cb11b0f51d9386": processing tar file(): duplicates of file paths not supported

Fails with tar-1.35-1.fc39.x86_64; works perfectly fine with tar-1.34-8.fc39.

Almost certainly a tar bug, not a podman bug, but (1) I'm tracking it here because it is blocking #18612, and (2) I have no idea what podman is doing with tar, or how to write a tar reproducer. I leave that to someone familiar with podman import.

@nalind (Member) commented Jul 31, 2023

This appears to be a bug in tar. The reproducer is:

podman create --name foo quay.io/libpod/testimage:20221018
podman export -o /tmp/a.tar foo
tar --delete home/podman/pause -f /tmp/a.tar
podman import /tmp/a.tar

Running tar tf on /tmp/a.tar before and after the --delete, and diffing the two listings, shows unexpected changes; that diff should help clarify what tar is doing. Running tar --delete as a filter, with stdin redirected from the file produced by podman export and stdout redirected to a file that is later fed to podman import, seems to work around it.
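The filter-style workaround can be sketched like this; the archive here is a locally built stand-in (hypothetical paths) for the one produced by podman export:

```shell
#!/bin/bash
set -euo pipefail

# Build a stand-in archive; the real one would come from 'podman export'.
work=$(mktemp -d)
mkdir -p "$work/rootfs/home/podman"
echo pause > "$work/rootfs/home/podman/pause"
echo id    > "$work/rootfs/home/podman/testimage-id"
tar -C "$work/rootfs" -cf "$work/a.tar" home

# Workaround: run tar --delete as a filter (archive on stdin, result on
# stdout) instead of modifying the archive in place with -f /tmp/a.tar.
tar --delete -f - home/podman/pause < "$work/a.tar" > "$work/b.tar"

listing=$(tar tf "$work/b.tar")
echo "$listing"
```

The filtered archive should list everything except the deleted member.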

@edsantiago (Collaborator, Author)

Recipe: run the above podman export, then tar tf. Run tar --delete, then tar tf once again, and diff the two listings.
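On a stand-in archive (hypothetical paths, since the podman-exported image isn't reproduced here), the recipe looks like:

```shell
#!/bin/bash
set -euo pipefail

# Stand-in for the exported container filesystem.
work=$(mktemp -d)
mkdir -p "$work/rootfs/home/podman"
touch "$work/rootfs/home/podman/pause" "$work/rootfs/home/podman/testimage-id"
tar -C "$work/rootfs" -cf "$work/a.tar" home

tar tf "$work/a.tar" > "$work/before.txt"   # listing before --delete
tar --delete -f "$work/a.tar" home/podman/pause
tar tf "$work/a.tar" > "$work/after.txt"    # listing after --delete

# With a correct tar, the only difference is the deleted entry.
diff -u "$work/before.txt" "$work/after.txt" || true
```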

With tar 1.34
@@ -234,3 +234,2 @@
 home/podman/
-home/podman/pause
 home/podman/testimage-id
@@ -770,5 +769,2 @@
 usr/share/apk/keys/x86_64/alpine-devel@lists.alpinelinux.org-5261cecb.rsa.pub
-usr/share/apk/keys/x86_64/alpine-devel@lists.alpinelinux.org-6165ee59.rsa.pub
-usr/share/man/
-usr/share/misc/
 usr/share/udhcpc/
With tar 1.35.1
@@ -234,3 +234,2 @@
 home/podman/
-home/podman/pause
 home/podman/testimage-id
@@ -773,2 +772,14 @@
 usr/share/misc/
+usr/share/man/
+usr/share/misc/
+usr/share/udhcpc/
+usr/share/udhcpc/default.script
+var/
+var/cache/
+var/cache/apk/
+var/cache/misc/
+var/empty/
+var/lib/
+var/lib/apk/
+var/lib/arpd/
 usr/share/udhcpc/

Looks like 1.34 has a bug where it deletes random entries (symlinks maybe?). I am not concerned about that.

1.35, though, adds duplicates. That is what then causes podman (presumably Go's archive/tar handling) to barf with "duplicates of file paths not supported".
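A quick way to check an archive for the duplicate paths that trip up the import, demonstrated here on a deliberately broken archive (tar -r happily appends a second copy of an existing path):

```shell
#!/bin/bash
set -euo pipefail

work=$(mktemp -d)
echo one > "$work/f"
tar -C "$work" -cf "$work/dup.tar" f
echo two > "$work/f"
tar -C "$work" -rf "$work/dup.tar" f   # -r appends: 'f' now appears twice

# Any output from uniq -d means duplicate entry names, the condition
# 'podman import' rejects.
dups=$(tar tf "$work/dup.tar" | sort | uniq -d)
echo "duplicates: $dups"
```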

I can't make a simple reproducer. Ideally I could file a tar bug with a recipe for tar cf, tar --delete, boom. But I can't. I can't even make a reproducer using the extracted image:

# tar xf /tmp/podman-exported-image.tar
# tar cf /tmp/foo.tar .
# tar --delete ...
# tar tf   <----- deletes only what I ask, does not add any new duplicates

Giving up for today. Here's the tar git repo, should anyone feel inclined to bisect.

cevich added a commit that referenced this issue Aug 1, 2023
Ref: #19407

Signed-off-by: Chris Evich <cevich@redhat.com>
@edsantiago (Collaborator, Author)

Filed bz2230127

@edsantiago (Collaborator, Author)

The cause is e89c7a4.

@cevich (Member) commented Aug 11, 2023

Oh wow @edsantiago you figured it out! Nice job! I don't think anybody would have followed through on that for maybe weeks or months. Good on you 👍

@rhatdan (Member) commented Aug 16, 2023

Can we close this or do we need to wait for the package to get updated on Rawhide?

@edsantiago (Collaborator, Author)

Package was not in updates-testing this morning, but it's out of bodhi, so I guess it's safe to close.

@cevich (Member) commented Aug 16, 2023

FWIW, I just finished building & releasing updated CI VM images this afternoon. Not sure if they include this new tar or not: containers/automation_images#294

@edsantiago (Collaborator, Author)

If podman CI passes, tar is good.

edsantiago added a commit to edsantiago/libpod that referenced this issue Aug 30, 2023
Fix unquoted string vars. Something like this:

   is $output "what we expect"

...will fail with a misleading error message if $output is "".

Also:
 - remove skip for containers#19407 (tar on rawhide, fixed)
 - fix typos in a diagnostic, which was causing unhelpful message
   on failure

Signed-off-by: Ed Santiago <santiago@redhat.com>
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Nov 15, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 15, 2023
edsantiago added a commit to edsantiago/libpod that referenced this issue Jan 16, 2024
- containers#15074 ("subtree_control" flake). The flake is NOT FIXED, I
  saw it six months ago on my (non-aarch64) laptop. However,
  it looks like the frequent-flake-on-aarch64 bug is resolved.
  I've been testing in containers#17831 and have not seen it. So,
  tentatively remove the skip and see what happens.

- Closes: containers#19407 (broken tar, "duplicates of file paths")
  All Fedoras now have a fixed tar. Debian DOES NOT, but
  we're handling that in our build-ci-vm code. I.e., the
  Debian VM we're using has a working tar even though there's
  currently a broken tar out in the wild.

  Added distro-integration tag so we can catch future problems
  like this in OpenQA.

- Closes: containers#19471 (brq / blkio / loopbackfs in rawhide)
  Bug appears to be fixed in rawhide, at least in the VMs we're
  using now.

  Added distro-integration tag because this test obviously
  relies on other system stuff.

Signed-off-by: Ed Santiago <santiago@redhat.com>
4 participants