Installing in an airgapped environment fails #4

Closed
opuk opened this issue Aug 27, 2021 · 11 comments


opuk commented Aug 27, 2021

Using the offline bundle I get the following:

# ./openshift-mirror-registry install -v

   __   __
  /  \ /  \     ______   _    _     __   __   __
 / /\ / /\ \   /  __  \ | |  | |   /  \  \ \ / /
/ /  / /  \ \  | |  | | | |  | |  / /\ \  \   /
\ \  \ \  / /  | |__| | | |__| | / ____ \  | |
 \ \/ \ \/ /   \_  ___/  \____/ /_/    \_\ |_|
  \__/ \__/      \ \__
                  \___\ by Red Hat
 Build, Store, and Distribute your Containers
	
INFO[2021-08-27 11:36:08] Install has begun
DEBU[2021-08-27 11:36:08] Quay Image: quay.io/projectquay/quay
DEBU[2021-08-27 11:36:08] Redis Image: docker.io/centos/redis-5-centos8
DEBU[2021-08-27 11:36:08] Postgres Image: docker.io/centos/postgresql-10-centos8
INFO[2021-08-27 11:36:08] Found execution environment at /root/execution-environment.tar
INFO[2021-08-27 11:36:08] Loading execution environment from execution-environment.tar
Getting image source signatures
Copying blob 37c450a9b5fe done
Copying blob e7ed17121dee done
Copying blob 785573c4b945 done
Copying blob 4a155c756b32 done
Copying blob 0529b109eba6 done
Copying blob 2943856be9f0 done
Copying blob f6ddb4985789 done
Copying blob 108e9849a443 done
Copying config 307becc1c1 done
Writing manifest to image destination
Storing signatures
Loaded image(s): quay.io/quay/openshift-mirror-registry-ee:latest
INFO[2021-08-27 11:36:20] Detected an installation to localhost
INFO[2021-08-27 11:36:20] Did not find SSH key in default location. Attempting to set up SSH keys.
INFO[2021-08-27 11:36:20] Generating SSH Key
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/quay_installer.
Your public key has been saved in /root/.ssh/quay_installer.pub.
The key fingerprint is:
SHA256:zM+84+cyYvaD4d5tEa65Gmd/1jjQ2Ho/gVeQdW4WBBw root@registry.lab.example.com
The key's randomart image is:
+---[RSA 2048]----+
|            .E+=o|
|             .o.o|
|               .+|
|       o    .  o.|
|        S  . =. .|
|        .+  =.oo |
|       ..o*o +.o.|
|        *=B++ *..|
|       +o=*X++ oo|
+----[SHA256]-----+
INFO[2021-08-27 11:36:21] Generated SSH Key at /root/.ssh/quay_installer
INFO[2021-08-27 11:36:21] Adding key to ~/.ssh/authorized_keys
INFO[2021-08-27 11:36:21] Successfully set up SSH keys
INFO[2021-08-27 11:36:21] Attempting to set SELinux rules on SSH key
INFO[2021-08-27 11:36:21] Found image archive at /root/image-archive.tar
INFO[2021-08-27 11:36:21] Detected an installation to localhost
INFO[2021-08-27 11:36:21] Loading image archive from /root/image-archive.tar
Getting image source signatures
Copying blob 51a3665ff30d done
Copying blob 4db57a76379a done
Copying blob 1dc5d886adc6 done
Copying blob e39def806cc7 done
Copying blob f4eb2b3bdf14 done
Copying blob 2653d992f4ef done
Copying blob cf9ac74f2342 done
Copying blob 1889700ed681 done
Copying blob ef335deb2166 done
Copying blob 8bca2794c502 done
Copying blob 4cead5996cd0 done
Copying blob 19dfa32d6e4e done
Copying blob cfb1fe7cf90a done
Copying blob f47aa5657cc3 done
Copying blob 06efc9e8779f done
Copying blob b0d82b45d008 done
Copying config aae0b0064f done
Writing manifest to image destination
Storing signatures
Getting image source signatures
Copying blob e891488bb49a done
Copying blob 57bf7e374e77 done
Copying blob 78e7b9975eaf done
Copying blob bb1365b5e9cc done
Copying blob 291f6e44771a done
Copying blob b739f8fb0fdd done
Copying blob 9a9193d4eb7b done
Copying config e01505c5be done
Writing manifest to image destination
Storing signatures
Getting image source signatures
Copying blob 3c8979d6b307 done
Copying blob b739f8fb0fdd skipped: already exists
Copying blob 291f6e44771a skipped: already exists
Copying blob e252411ca8d5 done
Copying blob 57bf7e374e77 skipped: already exists
Copying blob 78e7b9975eaf skipped: already exists
Copying blob ff0c00af3591 done
Copying blob c963a8ac8772 done
Copying blob fad7435a9b56 done
Copying config 4c5c4bf0fa done
Writing manifest to image destination
Storing signatures
Loaded image(s): quay.io/projectquay/quay:latest,docker.io/centos/redis-5-centos8:latest,docker.io/centos/postgresql-10-centos8:latest
INFO[2021-08-27 11:38:27] Attempting to set SELinux rules on image archive
INFO[2021-08-27 11:38:27] Running install playbook. This may take some time. To see playbook output run the installer with -v (verbose) flag.
INFO[2021-08-27 11:38:27] Detected an installation to localhost
DEBU[2021-08-27 11:38:27] Running command: sudo podman run --rm --interactive --tty --workdir /runner/project --net host -v /root/image-archive.tar:/runner/image-archive.tar -v /root/.ssh/quay_installer:/runner/env/ssh_key -e RUNNER_OMIT_EVENTS=False -e RUNNER_ONLY_FAILED_EVENTS=False -e ANSIBLE_HOST_KEY_CHECKING=False -e ANSIBLE_CONFIG=/runner/project/ansible.cfg --quiet --name ansible_runner_instance quay.io/quay/openshift-mirror-registry-ee ansible-playbook -i root@localhost, --private-key /runner/env/ssh_key -e "init_password=nPTW9uZpb50hCzYU138A247VlM6xcNEy quay_image=quay.io/projectquay/quay redis_image=docker.io/centos/redis-5-centos8 postgres_image=docker.io/centos/postgresql-10-centos8 quay_hostname=localhost:8443 local_install=true" install_mirror_appliance.yml

PLAY [Install Mirror Appliance] *************************************************************************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************************************************************************************
ok: [root@localhost]

TASK [mirror_appliance : Install Dependencies] **********************************************************************************************************************************************************************
included: /runner/project/roles/mirror_appliance/tasks/install-deps.yaml for root@localhost

TASK [mirror_appliance : Installing Podman] *************************************************************************************************************************************************************************
ok: [root@localhost]

TASK [mirror_appliance : Add IP address of all hosts to all hosts] **************************************************************************************************************************************************
changed: [root@localhost]

TASK [mirror_appliance : Set SELinux Rules] *************************************************************************************************************************************************************************
included: /runner/project/roles/mirror_appliance/tasks/set-selinux-rules.yaml for root@localhost

TASK [mirror_appliance : Set container_manage_cgroup flag on and keep it persistent across reboots] *****************************************************************************************************************
skipping: [root@localhost]

TASK [mirror_appliance : Create Podman Pod] *************************************************************************************************************************************************************************
included: /runner/project/roles/mirror_appliance/tasks/create-podman-pod.yaml for root@localhost

TASK [mirror_appliance : Starting Pod with ports 80 and 443 exposed] ************************************************************************************************************************************************
fatal: [root@localhost]: FAILED! => {"changed": false, "msg": "Can't create pod quay-pod", "stderr": "time=\"2021-08-27T11:39:00+02:00\" level=warning msg=\"failed, retrying in 1s ... (1/3). Error: Error initializing source docker://registry.access.redhat.com/ubi8/pause:latest: error pinging docker registry registry.access.redhat.com: Get \\\"https://registry.access.redhat.com/v2/\\\": dial tcp 95.100.153.107:443: connect: connection refused\"\ntime=\"2021-08-27T11:39:02+02:00\" level=warning msg=\"failed, retrying in 1s ... (2/3). Error: Error initializing source docker://registry.access.redhat.com/ubi8/pause:latest: error pinging docker registry registry.access.redhat.com: Get \\\"https://registry.access.redhat.com/v2/\\\": dial tcp 95.100.153.107:443: connect: connection refused\"\ntime=\"2021-08-27T11:39:07+02:00\" level=warning msg=\"failed, retrying in 1s ... (3/3). Error: Error initializing source docker://registry.access.redhat.com/ubi8/pause:latest: error pinging docker registry registry.access.redhat.com: Get \\\"https://registry.access.redhat.com/v2/\\\": dial tcp 95.100.153.107:443: connect: connection refused\"\ntime=\"2021-08-27T11:39:11+02:00\" level=error msg=\"Error freeing pod lock after failed creation: no such file or directory\"\nError: error adding Infra Container: error pulling infra-container image: Error initializing source docker://registry.access.redhat.com/ubi8/pause:latest: error pinging docker registry registry.access.redhat.com: Get \"https://registry.access.redhat.com/v2/\": dial tcp 95.100.153.107:443: connect: connection refused\n", "stderr_lines": ["time=\"2021-08-27T11:39:00+02:00\" level=warning msg=\"failed, retrying in 1s ... (1/3). Error: Error initializing source docker://registry.access.redhat.com/ubi8/pause:latest: error pinging docker registry registry.access.redhat.com: Get \\\"https://registry.access.redhat.com/v2/\\\": dial tcp 95.100.153.107:443: connect: connection refused\"", "time=\"2021-08-27T11:39:02+02:00\" level=warning msg=\"failed, retrying in 1s ... (2/3). Error: Error initializing source docker://registry.access.redhat.com/ubi8/pause:latest: error pinging docker registry registry.access.redhat.com: Get \\\"https://registry.access.redhat.com/v2/\\\": dial tcp 95.100.153.107:443: connect: connection refused\"", "time=\"2021-08-27T11:39:07+02:00\" level=warning msg=\"failed, retrying in 1s ... (3/3). Error: Error initializing source docker://registry.access.redhat.com/ubi8/pause:latest: error pinging docker registry registry.access.redhat.com: Get \\\"https://registry.access.redhat.com/v2/\\\": dial tcp 95.100.153.107:443: connect: connection refused\"", "time=\"2021-08-27T11:39:11+02:00\" level=error msg=\"Error freeing pod lock after failed creation: no such file or directory\"", "Error: error adding Infra Container: error pulling infra-container image: Error initializing source docker://registry.access.redhat.com/ubi8/pause:latest: error pinging docker registry registry.access.redhat.com: Get \"https://registry.access.redhat.com/v2/\": dial tcp 95.100.153.107:443: connect: connection refused"], "stdout": "", "stdout_lines": []}

PLAY RECAP **********************************************************************************************************************************************************************************************************
root@localhost             : ok=6    changed=1    unreachable=0    failed=1    skipped=1    rescued=0    ignored=0


tomazb commented Oct 26, 2021

Me too.

I used the latest offline installer (v0.1.4). After seeing this issue I tried again without any network access, and it failed at the same step.

See picture https://imgur.com/LdRglha.png

theodor2311 commented

@opuk @tomazb I have encountered the same issue. It happens because registry.access.redhat.com/ubi8/pause:latest is missing from the image-archive.tar bundle. The workaround is to bring the pause image into your offline environment before running the installer, for example:

# On a host with internet access, pull the pause image and export it to a tarball
podman pull registry.access.redhat.com/ubi8/pause:latest
podman save registry.access.redhat.com/ubi8/pause:latest > pause.tar
# Copy pause.tar to your offline environment and load the image there
podman load -i pause.tar
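
After loading, it is worth confirming the image actually landed in local storage before re-running the installer (standard podman commands):

# Exits 0 and prints "present" if the pause image was loaded correctly
podman image exists registry.access.redhat.com/ubi8/pause:latest && echo "present"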

I just created a PR that will hopefully fix this issue in later releases.

HammerMeetNail (Contributor) commented

@opuk @tomazb @theodor2311 Can you provide some more information about the host the CLI is being run on? The pause image shouldn't be needed. For CI we spin up a new RHEL 8 host, drop the tarball on it, and then trigger the install. See here for a recent run: https://github.com/quay/openshift-mirror-registry/runs/4488715365?check_suite_focus=true


opuk commented Dec 11, 2021

It was quite some time ago that I tried it, but I'm 99% sure it was on a RHEL 8 box registered to a Satellite.

theodor2311 commented

@HammerMeetNail I am using RHEL 8.4 and I believe the pause image is used during Podman pod creation.


opuk commented Dec 11, 2021

The pause image is indeed used by podman pods.
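
You can reproduce this outside the installer with plain podman (standard commands; the exact default infra image can vary between podman builds):

# On an airgapped host, creating any pod attempts the same pull of the default infra (pause) image
podman pod create --name infra-test
podman pod rm infra-test   # clean up if the create happens to succeed
# The infra image can also be overridden via the infra_image option in containers.conf(5)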

HammerMeetNail (Contributor) commented

@opuk @tomazb @theodor2311 Can everyone confirm which version of podman is being used? We were under the impression that the pause image comes included with podman 3. If it is included in version 3, would there be any objections to updating podman instead of adding the image to the offline archive?

If you're finding it's not included with podman 3, we can definitely include it in the archive.

theodor2311 commented

@HammerMeetNail
I can confirm that RHEL8.4 with podman version 3.0.2-dev does not come with the pause image.
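
For anyone else who wants to check their own host before installing (standard podman commands):

podman --version
# Exit code 0 means the pause image is already present, 1 means it is missing
podman image exists registry.access.redhat.com/ubi8/pause:latest; echo $?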

HammerMeetNail (Contributor) commented

Thanks, we'll bundle the pause image into the archive.

HammerMeetNail (Contributor) commented

We've pulled in a handful of changes in prep for the 1.0 release. The pause image is now included in the master branch. We should have a new bundle out next week that includes everything. If you want to get a head start, feel free to build from source. Note that you will need to log into registry.access.redhat.com in order to pull images.
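
Roughly, the flow looks like the following; the exact make target may differ, so check the repo's README/Makefile:

# Authenticate so the build can pull images from registry.access.redhat.com
podman login registry.access.redhat.com
git clone https://github.com/quay/openshift-mirror-registry.git
cd openshift-mirror-registry
make build-offline-zip   # assumed target name -- verify against the repository's Makefile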

HammerMeetNail (Contributor) commented

Here's the release: https://github.com/quay/openshift-mirror-registry/releases/tag/1.0.0-RC1

We expect this to include everything that will be in the 1.0 release; most outstanding PRs have been merged.
