
allow loading container images from stdin #2041

Closed
wants to merge 1 commit

Conversation

aojea
Contributor

@aojea commented Jan 28, 2021

Add the option to pipe files to kind load image-archive by
specifying "-" as the archive name.

This allows users to load any kind of image using the common
container providers' export tools, e.g. podman save and docker save, or
more specialized tools like skopeo.

Fixes: #2038
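
For illustration, a minimal sketch of the intended usage (the docker image name is a placeholder; the podman pipeline is the one exercised later in this thread):

$ # stream an archive produced by docker straight into the cluster
$ docker save busybox:latest | kind load image-archive -
$ # the same with podman as the export tool
$ podman save localhost/agnhost:1 | kind load image-archive -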

@k8s-ci-robot added the cncf-cla: yes label (indicates the PR's author has signed the CNCF CLA) on Jan 28, 2021
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aojea

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved (indicates a PR has been approved by an approver from all required OWNERS files) and size/M (denotes a PR that changes 30-99 lines, ignoring generated files) labels on Jan 28, 2021
@aojea mentioned this pull request on Jan 28, 2021
@aojea
Contributor Author

aojea commented Jan 28, 2021

OK, this is interesting: there is a deadlock somewhere with podman. It hangs if I do

sudo KIND_EXPERIMENTAL_PROVIDER=podman sh -c 'podman save localhost/agnhost:1 | ./kind load image-archive -'
using podman due to KIND_EXPERIMENTAL_PROVIDER
enabling experimental podman provider
Copying blob 721384ec99e5 [>-------------------------------------] 162.5KiB / 4.1MiB
  86292 pts/5    S+     0:00  |   |   |   |   \_ sudo KIND_EXPERIMENTAL_PROVIDER=podman sh -c podman save localhost/agnhost:1 | ./kind load image-archive -
  86293 pts/5    S+     0:00  |   |   |   |       \_ sh -c podman save localhost/agnhost:1 | ./kind load image-archive -
  86294 pts/5    Sl+    0:00  |   |   |   |           \_ podman save localhost/agnhost:1
  86301 pts/5    Sl+    0:00  |   |   |   |               \_ podman ps -a --filter label=io.x-k8s.kind.cluster=kind --format {{.Names}}

but it also doesn't allow me to execute new podman commands 🤔 is it possible that podman doesn't allow concurrent calls, or that this call involves storage? @mrunalp @mheon

This is the strace of the hanging command; it only happens if I use sudo, by the way:

read(9, "", 1528)                       = 0
epoll_ctl(4, EPOLL_CTL_DEL, 9, 0xc00040f28c) = 0
close(9)                                = 0
setrlimit(RLIMIT_NPROC, {rlim_cur=4096*1024, rlim_max=4096*1024}) = 0
newfstatat(AT_FDCWD, "/usr/share/containers/libpod.conf", 0xc000481078, 0) = -1 ENOENT (No such file or directory)
newfstatat(AT_FDCWD, "/etc/containers/libpod.conf", 0xc000481148, 0) = -1 ENOENT (No such file or directory)
newfstatat(AT_FDCWD, "/usr/share/containers/containers.conf", {st_mode=S_IFREG|0644, st_size=14893, ...}, 0) = 0
newfstatat(AT_FDCWD, "/etc/containers/containers.conf", 0xc0004812e8, 0) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/containers/containers.conf", O_RDONLY|O_CLOEXEC) = 9
epoll_ctl(4, EPOLL_CTL_ADD, 9, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=2932330344, u64=140435478007656}}) = -1 EPERM (Operation not permitted)
epoll_ctl(4, EPOLL_CTL_DEL, 9, 0xc00040f244) = -1 EPERM (Operation not permitted)
fstat(9, {st_mode=S_IFREG|0644, st_size=14893, ...}) = 0
read(9, "# The containers configuration f"..., 15405) = 14893
read(9, "", 512)                        = 0
close(9)                                = 0
geteuid()                               = 0
newfstatat(AT_FDCWD, "/etc/containers/storage.conf", {st_mode=S_IFREG|0644, st_size=7803, ...}, 0) = 0
newfstatat(AT_FDCWD, "/etc/containers/storage.conf", {st_mode=S_IFREG|0644, st_size=7803, ...}, 0) = 0
futex(0xc00005c848, FUTEX_WAKE_PRIVATE, 1) = 1
rt_sigprocmask(SIG_SETMASK, ~[HUP INT QUIT ILL TRAP ABRT BUS FPE SEGV TERM STKFLT CHLD PROF SYS RTMIN RT_1], NULL, 8) = 0
rt_sigprocmask(SIG_SETMASK, ~[HUP INT QUIT ILL TRAP ABRT BUS FPE SEGV TERM STKFLT CHLD PROF SYS RTMIN RT_1], NULL, 8) = 0
futex(0xc00005cbc8, FUTEX_WAKE_PRIVATE, 1) = 1
rt_sigprocmask(SIG_SETMASK, ~[HUP INT QUIT ILL TRAP ABRT BUS FPE SEGV TERM STKFLT CHLD PROF SYS RTMIN RT_1], NULL, 8) = 0
futex(0xc00005cbc8, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x55d9268a0508, FUTEX_WAIT_PRIVATE, 0, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x55d9268a0508, FUTEX_WAIT_PRIVATE, 0, NULL

@BenTheElder
Member

In a shell, I currently recommend using command substitution, which is buffered IIRC: kind load image-archive <(podman save hello-world:foo)

@mheon

mheon commented Jan 28, 2021

@aojea Does ./kind load image-archive invoke any Podman commands?

@aojea
Contributor Author

aojea commented Jan 28, 2021

> @aojea Does ./kind load image-archive invoke any Podman commands?

yeah, it invokes podman ps -a ...

\_ sh -c podman save localhost/agnhost:1 | ./kind load image-archive -
  86294 pts/5    Sl+    0:00  |   |   |   |           \_ podman save localhost/agnhost:1
  86301 pts/5    Sl+    0:00  |   |   |   |               \_ podman ps -a --filter label=io.x-k8s.kind.cluster=kind --format {{.Names}}

@aojea
Contributor Author

aojea commented Feb 3, 2021

/hold

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Feb 3, 2021
@mheon

mheon commented Feb 3, 2021

@aojea That does sound like something that could run afoul of the locking in c/storage - most of our complaints there are around Podman blocking until an image pull is complete, but saving an image sounds like a very similar codepath. @saschagrunert has been working on a PR to improve the locking situation but I'm not certain it would resolve this case.

@aojea
Contributor Author

aojea commented Feb 3, 2021

> @aojea That does sound like something that could run afoul of the locking in c/storage - most of our complaints there are around Podman blocking until an image pull is complete, but saving an image sounds like a very similar codepath. @saschagrunert has been working on a PR to improve the locking situation but I'm not certain it would resolve this case.

I opened the issue in podman so it doesn't get lost in this PR; I will ping Sascha there.
Thanks for following up, Matthew.

@aojea
Contributor Author

aojea commented Feb 3, 2021

Ahh, I see the issue was closed :) let me read it

@vrothberg

> @aojea That does sound like something that could run afoul of the locking in c/storage - most of our complaints there are around Podman blocking until an image pull is complete, but saving an image sounds like a very similar codepath. @saschagrunert has been working on a PR to improve the locking situation but I'm not certain it would resolve this case.

I don't think there's a deadlock in c/storage. The output of podman save must be consumed; otherwise the first layer's lock is held, since it streams directly from the storage.

podman save foo | podman load is no problem since the data is read. podman save foo | podman ps ... is a problem since the data isn't consumed.
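
To make the contrast concrete, a sketch assuming an image named foo as in the comment:

$ # fine: podman load consumes the stream, so save finishes and releases its locks
$ podman save foo | podman load
$ # hangs: podman ps never reads the stream, so save blocks on the full pipe
$ # while still holding the first layer's lock that ps then waits on
$ podman save foo | podman ps -a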

@aojea
Contributor Author

aojea commented Feb 3, 2021

Naive question, just out of curiosity: why does podman ps have to acquire the lock?

@vrothberg

> Naive question, just out of curiosity: why does podman ps have to acquire the lock?

It needs to access the storage to list containers. We are working on improving the locking and optimizing certain paths to be accessed concurrently, but some command combinations may still yield such livelocks. podman save is especially interesting since it streams directly from the storage. We could work around that by first copying out the entire layer, but that would be a huge performance penalty.

I think we should clarify that in the man pages and help message.

@aojea
Contributor Author

aojea commented Feb 3, 2021

It's OK, this is working as expected then.
I was wondering if this is something that will be solved in the future ... @vrothberg, would skopeo do the trick? Any suggestions?

@vrothberg

> It's OK, this is working as expected then.
> I was wondering if this is something that will be solved in the future ... @vrothberg, would skopeo do the trick? Any suggestions?

skopeo and podman share the same code. The output from podman save must be read before running another podman command in the chained pipe. Are the pipes needed for a specific reason?

$ podman save $image -o $file
# Lots of work
$ load-image $file

would that ^ work?

@aojea
Contributor Author

aojea commented Feb 4, 2021

> Are the pipes needed for a specific reason?

laziness???? 😄

> $ podman save $image -o $file
> # Lots of work
> $ load-image $file

That will work; actually, this is how it was implemented.
Thanks, Valentin.
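
In shell terms, the flow that was settled on looks roughly like this (a sketch only; the temporary path and image name are placeholders):

$ # buffer the whole archive on disk first...
$ podman save localhost/agnhost:1 -o /tmp/agnhost.tar
$ # ...so no other podman command runs while save still holds its layer locks
$ kind load image-archive /tmp/agnhost.tar
$ rm /tmp/agnhost.tar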

@aojea closed this Feb 4, 2021
@vrothberg

Excellent! Thanks a lot, @aojea 👍
