crictl images can not find local image loaded by podman #474

Closed
vanloswang opened this issue Jun 19, 2019 · 16 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@vanloswang

vanloswang commented Jun 19, 2019

Just take a look at the following commands and the result:

# podman images localhost/vanlos/vanlos-builder:latest
REPOSITORY                        TAG      IMAGE ID       CREATED      SIZE
localhost/vanlos/vanlos-builder   latest   efe2b0dfd8b8   6 days ago   601 MB
# podman images vanlos/vanlos-builder:latest
REPOSITORY                        TAG      IMAGE ID       CREATED      SIZE
localhost/vanlos/vanlos-builder   latest   efe2b0dfd8b8   6 days ago   601 MB
# crictl images localhost/vanlos/vanlos-builder:latest
REPOSITORY                        TAG      IMAGE ID       CREATED      SIZE
localhost/vanlos/vanlos-builder   latest   efe2b0dfd8b8   6 days ago   601 MB
# crictl images vanlos/vanlos-builder:latest
IMAGE               TAG                 IMAGE ID            SIZE

crictl images will not find the local image without the 'localhost/' prefix, which makes k8s always try to pull the image when starting a pod, even though the image is already present locally.
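
For reference, the image is clearly present under its fully-qualified reference and can also be resolved by ID (a quick check, using the name and ID from the output above), so only the short-name lookup fails:

crictl inspecti efe2b0dfd8b8     # resolves by ID; repoTags should show the localhost/ name
crictl images | grep vanlos      # the stored reference carries the localhost/ prefix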

My OS is CentOS 7.5, and the cri-tools information is as follows:

# crictl version
Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.11.10
RuntimeApiVersion:  v1alpha1
# rpm -qa | grep cri-tools
cri-tools-1.11.1-1.rhaos3.11.gitedabfb5.el7.x86_64.rpm
# rpm -qa | grep cri-o
cri-o-1.11.10-1.rhaos3.11.git42c86f0.el7.x86_64.rpm
@vanloswang changed the title from "crictl images can not find local image loaded bt podman" to "crictl images can not find local image loaded by podman" on Jun 19, 2019
@feiskyer
Member

This seems to be a cri-o issue? @runcom @mrunalp Could you take a look?

@vrothberg
Contributor

@vanloswang, I suspect that cri-o and podman are configured to use different directories for storing images. You can check that in the corresponding configs (i.e., /etc/crio/crio.conf and /etc/containers/storage.conf). It's also important to execute podman as root since rootless images are stored somewhere else.
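
For reference, a minimal way to check both points (a sketch; the config paths are the ones mentioned above, and the graph-root comparison shows whether rootless storage is in play):

grep -E 'driver|runroot|graphroot' /etc/crio/crio.conf /etc/containers/storage.conf
sudo podman info | grep -i graphroot   # root store, the one CRI-O reads by default
podman info | grep -i graphroot        # rootless store under the user's home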

@vanloswang
Author

vanloswang commented Jun 20, 2019

@vrothberg I checked it. The images are loaded by podman and crictl can find them, so they use the same directory for storing images. The driver, runroot and graphroot settings in /etc/crio/crio.conf and /etc/containers/storage.conf use the default values.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Sep 18, 2019
@feiskyer removed the lifecycle/stale label on Sep 25, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Dec 24, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 23, 2020
@drpaneas

I have the same problem in the latest CRC environment:

crc version
crc version: 1.6.0+8ef676f
OpenShift version: 4.3.0 (embedded in binary)
NAME="Red Hat Enterprise Linux CoreOS"
VERSION="43.81.202001142154.0"
VERSION_ID="4.3"
OPENSHIFT_VERSION="4.3"
RHEL_VERSION=8.0
PRETTY_NAME="Red Hat Enterprise Linux CoreOS 43.81.202001142154.0 (Ootpa)"
ID="rhcos"
ID_LIKE="rhel fedora"
ANSI_COLOR="0;31"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="OpenShift Container Platform"
REDHAT_BUGZILLA_PRODUCT_VERSION="4.3"
REDHAT_SUPPORT_PRODUCT="OpenShift Container Platform"
REDHAT_SUPPORT_PRODUCT_VERSION="4.3"
OSTREE_VERSION='43.81.202001142154.0'

What I did:

  • docker save $image-name > image-name.tar   # from my local machine
  • scp image-name.tar core@<crc ip>:          # send the image to the CRC VM
  • ssh crc                                    # ssh into the CRC VM
  • cat image-name.tar | podman load           # load it using podman

Unfortunately, crictl images does not see that image.

@saschagrunert
Member

saschagrunert commented Feb 21, 2020

This looks like a configuration issue to me. Please verify the two points below (a quick check is sketched after the list):

  1. CRI-O and podman use the same storage driver in:
    • /etc/crio/crio.conf: option storage_driver
    • /etc/containers/storage.conf: option driver
  2. /var/lib/containers/storage/ contains only one kind of [driver]-{containers,images,layers} directories (the driver selected above, or overlay as the global fallback)
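
A quick way to run both checks on the node (a sketch; the paths are the ones from the list above, and keys that are commented out fall back to the built-in defaults):

sudo grep storage_driver /etc/crio/crio.conf       # point 1, CRI-O side
sudo grep '^driver' /etc/containers/storage.conf   # point 1, containers-storage side
sudo ls /var/lib/containers/storage/               # point 2: expect a single <driver>-{containers,images,layers} family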

@drpaneas

drpaneas commented Feb 21, 2020

  1. Do CRI-O and podman use the same storage driver?

    • /etc/crio/crio.conf: it is commented out: #storage_driver = "overlay"
    • /etc/containers/storage.conf: it says driver = "overlay"
  2. /var/lib/containers/storage/ contains only one kind of [driver]-{containers,images,layers} directories (the selected above, or overlay as global fallback)

$ podman images
REPOSITORY                    TAG     IMAGE ID       CREATED        SIZE
localhost/visitors-operator   1.2.3   20d5071e2c89   24 hours ago   150 MB

$ sudo ls -l /var/lib/containers/storage/overlay | grep 20d5071e2c89

still doesn't find it.

@saschagrunert
Member

still doesn't find it.

Hm, the image should be referenced by its ID in /var/lib/containers/storage/overlay-images. Something like:

> sudo jq . /var/lib/containers/storage/overlay-images/images.json

should show the image as well.

What is the output of the following?

> sudo podman inspect localhost/visitors-operator | jq '.[].GraphDriver'

@drpaneas

drpaneas commented Feb 21, 2020

podman inspect localhost/visitors-operator:1.2.3 | jq '.[].GraphDriver'
{
  "Name": "overlay",
  "Data": {
    "LowerDir": "/var/home/core/.local/share/containers/storage/overlay/86b228726a309dc9a9f27d00974b63fac95cc29e960b8c2ea9e8d6750541fcf7/diff:/var/home/core/.local/share/containers/storage/overlay/5e8f5812933896ec06f9f14a68bc8e3c7b14910784718601d8487221f0c62883/diff:/var/home/core/.local/share/containers/storage/overlay/87e0124d5f516dfd4f96cb5e688a30def692523669f887ff889bb602aea552cd/diff:/var/home/core/.local/share/containers/storage/overlay/b6645727229cf3a8a5b06e339e1c58c9cea1f0e3c633f2532e1a3817d0a9f886/diff",
    "UpperDir": "/var/home/core/.local/share/containers/storage/overlay/d667d542c467137903a38dca364be7fe896ab630b1ed8866d326ca29a871a4e8/diff",
    "WorkDir": "/var/home/core/.local/share/containers/storage/overlay/d667d542c467137903a38dca364be7fe896ab630b1ed8866d326ca29a871a4e8/work"
  }
}

It seems that podman loaded the image into $HOME/.local/share/containers/storage/overlay/
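
For reference, the rootless counterpart of the images.json checked earlier lives under the home directory (a sketch; the path is assembled from the GraphDriver output above):

jq '.[].names' ~/.local/share/containers/storage/overlay-images/images.json   # lists the rootless image names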

@saschagrunert
Member

saschagrunert commented Feb 21, 2020

it seems that podman loaded the image at $HOME/.local/share/containers/storage/overlay/

Yep, you have to run podman as root, otherwise it will use the rootless per-user storage :)

@drpaneas

Indeed, it now works:

$ cat image.tar | sudo podman load
$ sudo crictl images | grep visitor
localhost/visitors-operator         1.2.3               20d5071e2c89d       150MB
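
For reference, the copy and load can also be collapsed into a single pipeline run from the local machine (a sketch; $IMAGE and core@crc are placeholders for the image name and the VM address, and it assumes the core user can sudo without a password):

docker save "$IMAGE" | ssh core@crc 'sudo podman load'   # streams the tarball straight into the root store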

@saschagrunert
Member

saschagrunert commented Feb 21, 2020

Awesome! 👍

And here you get "Sascha's tip of the day": You can load the image via

sudo podman load -i image.tar

too! 😇
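
For reference, the save side has a matching flag, so the whole tarball round trip can be done with podman alone (a sketch; the image name is the one used earlier in this thread):

podman save -o image.tar localhost/visitors-operator:1.2.3   # on the machine that has the image
sudo podman load -i image.tar                                # on the node, into the root store CRI-O reads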

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
