
Building multi-arch with podman appears to result in a pushed image with only one arch #13676

Closed
nycnewman opened this issue Mar 26, 2022 · 11 comments
Labels
locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. macos MacOS (OSX) related

Comments

@nycnewman

Description
Attempting to build multi-architecture images on macOS using Podman appears to be broken: the resulting uploaded image has only one architecture. Images also appear to be tagged as localhost/ or not tagged at all.

podman is installed on an M1 Apple machine running Monterey 12.3. qemu-user-static was also installed into the podman machine to enable multi-arch execution.

The process appears to run through both the amd64 and arm64/v8 builds and produces untagged images. The resulting tagged image shows localhost/tag:version rather than just tag:version (as seen with the Docker equivalent), and the image is tagged with only one of the built architectures. podman push appears to push only one of the image versions (possibly only the first in the CLI list).

Steps to reproduce the issue:

  1. Run one of:

podman buildx build --platform linux/arm64/v8,linux/amd64 --manifest .
podman build --platform linux/arm64/v8 --platform linux/amd64 --manifest .

  2. Results in the following images:
    localhost/ 0.3 fed4c62a0ec4 7 minutes ago 1.09 kB
    b691ec9a6b83 7 minutes ago 384 MB
    64326be9ca34 9 minutes ago 434 MB
    4aaa75398b62 12 minutes ago 370 MB
    37aff2989fd9 13 minutes ago 405 MB

  3. Running inspect on the final image shows only one Architecture; the same is true when pushed to Docker Hub:
    podman inspect a6775d31d3f4 | jq '.[0].Architecture'
    "arm64"

The untagged image has:
podman inspect b691ec9a6b83 | jq '.[0].Architecture'
"amd64"
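The per-image jq check above can be extended to list the architecture of every local image; a sketch (guarded so it is a no-op on systems where podman or jq is unavailable):

```shell
# List the architecture of every local image, extending the jq check above.
# Does nothing if podman or jq is not installed.
if command -v podman >/dev/null 2>&1 && command -v jq >/dev/null 2>&1; then
    for id in $(podman images --format '{{.ID}}'); do
        arch=$(podman inspect "$id" | jq -r '.[0].Architecture')
        echo "$id $arch"
    done
fi
```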

Describe the results you received:
podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
0.3 a6775d31d3f4 4 minutes ago 1.09 kB
c077690eddbe 4 minutes ago 370 MB
92355055a893 5 minutes ago 405 MB
91103aacc8fd 6 minutes ago 384 MB
5f3ab1c52800 8 minutes ago 434 MB
docker.io/library/python 3.9.11-slim-buster a5ac6948c534 9 days ago 115 MB
docker.io/kindest/node f9d67d6e8342 2 weeks ago 1.22 GB

Describe the results you expected:
Image in repo with multiple architectures tagged along with associated image layers

Output of buildah version:

podman 4.0.2 on macOS. I don't think buildah is installed as a separate package.

Output of podman version if reporting a podman build issue:

podman version
Client:       Podman Engine
Version:      4.0.2
API Version:  4.0.2
Go Version:   go1.17.8

Built:      Wed Mar  2 09:04:36 2022
OS/Arch:    darwin/arm64

Server:       Podman Engine
Version:      4.0.2
API Version:  4.0.2
Go Version:   go1.16.14

Built:      Thu Mar  3 09:58:50 2022
OS/Arch:    linux/arm64

Output of cat /etc/*release:

NAME="Fedora Linux"
VERSION="35.20220305.dev.0 (CoreOS)"
ID=fedora
VERSION_ID=35
VERSION_CODENAME=""
PLATFORM_ID="platform:f35"
PRETTY_NAME="Fedora CoreOS 35.20220305.dev.0"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:35"
HOME_URL="https://getfedora.org/coreos/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora-coreos/"
SUPPORT_URL="https://github.com/coreos/fedora-coreos-tracker/"
BUG_REPORT_URL="https://github.com/coreos/fedora-coreos-tracker/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=35
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=35
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="CoreOS"
VARIANT_ID=coreos
OSTREE_VERSION='35.20220305.dev.0'
DEFAULT_HOSTNAME=localhost
Fedora release 35 (Thirty Five)
Fedora release 35 (Thirty Five)

Output of uname -a:

Linux localhost.localdomain 5.15.18-200.fc35.aarch64 #1 SMP Sat Jan 29 12:44:33 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux

Output of cat /etc/containers/storage.conf:

# This file is the configuration file for all tools
# that use the containers/storage library. The storage.conf file
# overrides all other storage.conf files. Container engines using the
# container/storage library do not inherit fields from other storage.conf
# files.
#
#  Note: The storage.conf file overrides other storage.conf files based on this precedence:
#      /usr/containers/storage.conf
#      /etc/containers/storage.conf
#      $HOME/.config/containers/storage.conf
#      $XDG_CONFIG_HOME/containers/storage.conf (If XDG_CONFIG_HOME is set)
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]

# Default Storage Driver, Must be set for proper operation.
driver = "overlay"

# Temporary storage location
runroot = "/run/containers/storage"

# Primary Read/Write location of container storage
# When changing the graphroot location on an SELINUX system, you must
# ensure  the labeling matches the default locations labels with the
# following commands:
# semanage fcontext -a -e /var/lib/containers/storage /NEWSTORAGEPATH
# restorecon -R -v /NEWSTORAGEPATH
graphroot = "/var/lib/containers/storage"


# Storage path for rootless users
#
# rootless_storage_path = "$HOME/.local/share/containers/storage"

[storage.options]
# Storage options to be passed to underlying storage drivers

# AdditionalImageStores is used to pass paths to additional Read/Only image stores
# Must be comma separated list.
additionalimagestores = [
]

# Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of
# a container, to the UIDs/GIDs as they should appear outside of the container,
# and the length of the range of UIDs/GIDs.  Additional mapped sets can be
# listed and will be heeded by libraries, but there are limits to the number of
# mappings which the kernel will allow when you later attempt to run a
# container.
#
# remap-uids = 0:1668442479:65536
# remap-gids = 0:1668442479:65536

# Remap-User/Group is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid or /etc/subgid file.  Mappings are set up starting
# with an in-container ID of 0 and then a host-level ID taken from the lowest
# range that matches the specified name, and using the length of that range.
# Additional ranges are then assigned, using the ranges which specify the
# lowest host-level IDs first, to the lowest not-yet-mapped in-container ID,
# until all of the entries have been used for maps.
#
# remap-user = "containers"
# remap-group = "containers"

# Root-auto-userns-user is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid and /etc/subgid file.  These ranges will be partitioned
# to containers configured to create automatically a user namespace.  Containers
# configured to automatically create a user namespace can still overlap with containers
# having an explicit mapping set.
# This setting is ignored when running as rootless.
# root-auto-userns-user = "storage"
#
# Auto-userns-min-size is the minimum size for a user namespace created automatically.
# auto-userns-min-size=1024
#
# Auto-userns-max-size is the maximum size for a user namespace created automatically.
# auto-userns-max-size=65536

[storage.options.overlay]
# ignore_chown_errors can be set to allow a non privileged user running with
# a single UID within a user namespace to run containers. The user can pull
# and use any image even those with multiple uids.  Note multiple UIDs will be
# squashed down to the default uid in the container.  These images will have no
# separation between the users in the container. Only supported for the overlay
# and vfs drivers.
#ignore_chown_errors = "false"

# Inodes is used to set a maximum inodes of the container image.
# inodes = ""

# Path to an helper program to use for mounting the file system instead of mounting it
# directly.
#mount_program = "/usr/bin/fuse-overlayfs"

# mountopt specifies comma separated list of extra mount options
mountopt = "nodev,metacopy=on"

# Set to skip a PRIVATE bind mount on the storage home directory.
# skip_mount_home = "false"

# Size is used to set a maximum size of the container image.
# size = ""

# ForceMask specifies the permissions mask that is used for new files and
# directories.
#
# The values "shared" and "private" are accepted.
# Octal permission masks are also accepted.
#
#  "": No value specified.
#     All files/directories, get set with the permissions identified within the
#     image.
#  "private": it is equivalent to 0700.
#     All files/directories get set with 0700 permissions.  The owner has rwx
#     access to the files. No other users on the system can access the files.
#     This setting could be used with networked based homedirs.
#  "shared": it is equivalent to 0755.
#     The owner has rwx access to the files and everyone else can read, access
#     and execute them. This setting is useful for sharing containers storage
#     with other users.  For instance have a storage owned by root but shared
#     to rootless users as an additional store.
#     NOTE:  All files within the image are made readable and executable by any
#     user on the system. Even /etc/shadow within your image is now readable by
#     any user.
#
#   OCTAL: Users can experiment with other OCTAL Permissions.
#
#  Note: The force_mask Flag is an experimental feature, it could change in the
#  future.  When "force_mask" is set the original permission mask is stored in
#  the "user.containers.override_stat" xattr and the "mount_program" option must
#  be specified. Mount programs like "/usr/bin/fuse-overlayfs" present the
#  extended attribute permissions to processes within containers rather than the
#  "force_mask"  permissions.
#
# force_mask = ""

[storage.options.thinpool]
# Storage Options for thinpool

# autoextend_percent determines the amount by which pool needs to be
# grown. This is specified in terms of % of pool size. So a value of 20 means
# that when threshold is hit, pool will be grown by 20% of existing
# pool size.
# autoextend_percent = "20"

# autoextend_threshold determines the pool extension threshold in terms
# of percentage of pool size. For example, if threshold is 60, that means when
# pool is 60% full, threshold has been hit.
# autoextend_threshold = "80"

# basesize specifies the size to use when creating the base device, which
# limits the size of images and containers.
# basesize = "10G"

# blocksize specifies a custom blocksize to use for the thin pool.
# blocksize="64k"

# directlvm_device specifies a custom block storage device to use for the
# thin pool. Required if you setup devicemapper.
# directlvm_device = ""

# directlvm_device_force wipes device even if device already has a filesystem.
# directlvm_device_force = "True"

# fs specifies the filesystem type to use for the base device.
# fs="xfs"

# log_level sets the log level of devicemapper.
# 0: LogLevelSuppress 0 (Default)
# 2: LogLevelFatal
# 3: LogLevelErr
# 4: LogLevelWarn
# 5: LogLevelNotice
# 6: LogLevelInfo
# 7: LogLevelDebug
# log_level = "7"

# min_free_space specifies the min free space percent in a thin pool require for
# new device creation to succeed. Valid values are from 0% - 99%.
# Value 0% disables
# min_free_space = "10%"

# mkfsarg specifies extra mkfs arguments to be used when creating the base
# device.
# mkfsarg = ""

# metadata_size is used to set the `pvcreate --metadatasize` options when
# creating thin devices. Default is 128k
# metadata_size = ""

# Size is used to set a maximum size of the container image.
# size = ""

# use_deferred_removal marks devicemapper block device for deferred removal.
# If the thinpool is in use when the driver attempts to remove it, the driver
# tells the kernel to remove it as soon as possible. Note this does not free
# up the disk space, use deferred deletion to fully remove the thinpool.
# use_deferred_removal = "True"

# use_deferred_deletion marks thinpool device for deferred deletion.
# If the device is busy when the driver attempts to delete it, the driver
# will attempt to delete device every 30 seconds until successful.
# If the program using the driver exits, the driver will continue attempting
# to cleanup the next time the driver is used. Deferred deletion permanently
# deletes the device and all data stored in device will be lost.
# use_deferred_deletion = "True"

# xfs_nospace_max_retries specifies the maximum number of retries XFS should
# attempt to complete IO when ENOSPC (no space) error is returned by
# underlying storage device.
# xfs_nospace_max_retries = "0"
@flouthoc
Collaborator

Hi @nycnewman , Thanks for creating the issue.

I think podman build --platform linux/arm64/v8 --platform linux/amd64 --manifest . is not a valid command. It should fail with Error: no context directory and no Containerfile specified

Could you try podman build --platform linux/arm64/v8 --platform linux/amd64 --manifest somename .
and then podman manifest inspect somename; it should show the manifest for both arches.

Then try podman manifest push somename

Overall this seems to be an issue with the podman-remote and podman-machine setup, so I'm moving it there; I'm unable to reproduce it with the latest version on podman-machine.
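The suggested workflow can be sketched as a short script. This is only a sketch: somename is the placeholder manifest name from the comment above, and the guards make it a no-op where podman or a Containerfile is absent:

```shell
set -e
# Build both platforms into one manifest list, verify it, then push the
# *manifest* (not a single image). "somename" is a placeholder name.
if command -v podman >/dev/null 2>&1 && [ -f Containerfile ]; then
    podman build --platform linux/arm64/v8 --platform linux/amd64 \
        --manifest somename .
    podman manifest inspect somename   # should list amd64 and arm64 entries
    podman manifest push somename
else
    echo "podman and a Containerfile are required; commands shown for reference"
fi
```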

@flouthoc flouthoc transferred this issue from containers/buildah Mar 28, 2022
@github-actions github-actions bot added the macos MacOS (OSX) related label Mar 28, 2022
@flouthoc
Collaborator

@nycnewman I am closing this issue, since you need to push the manifest and this works fine with the podman-remote and podman-machine setup. Please feel free to reopen if you are still facing the issue on macOS; I did not try on macOS, but it should work.

@ashley-cui @baude Could you try this on macOS please.

@nycnewman
Author

nycnewman commented Mar 29, 2022

Looks like GitHub swallowed some of the command lines. I tried the following:

podman build --log-level debug --platform linux/arm64/v8 --platform linux/amd64 --manifest docker.io/nycnewman/web-frontend:0.3 .

podman manifest push docker.io/nycnewman/web-frontend:0.3 docker://docker.io/nycnewman/web-frontend:0.3

podman push docker.io/nycnewman/web-frontend:0.3 docker://docker.io/nycnewman/web-frontend:0.3

As far as I can see this produced:

podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/nycnewman/web-frontend 0.3 113c1594fed2 About an hour ago 1.09 kB
903cbe724021 About an hour ago 370 MB
8826facc79da About an hour ago 405 MB
2a7c0c803197 About an hour ago 384 MB
2f942caf3b6d About an hour ago 434 MB
docker.io/neuvector/scanner.preview latest 7b0109617b9e 39 hours ago 163 MB
docker.io/neuvector/updater.preview latest 8164251abc35 5 days ago 12.4 MB
docker.io/library/python 3.9.11-slim-buster a5ac6948c534 11 days ago 115 MB

which suggests four untagged images (the assumption being that, since the Dockerfile has two steps, this is one image per step for each of the two platforms). The manifest push returned almost immediately with no upload to Docker Hub, and the image push only pushed the linux/arm64/v8 image.

@flouthoc I don't have rights to reopen.

@github-actions

github-actions bot commented May 8, 2022

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented May 9, 2022

@flouthoc @nycnewman Do we have a reproducer for this without GitHub being involved?

@rhatdan
Member

rhatdan commented May 9, 2022

@flouthoc could you try a manifest build and push using podman-remote and see if it has issues?

@github-actions

github-actions bot commented Jun 9, 2022

A friendly reminder that this issue had no activity for 30 days.

@flouthoc
Collaborator

flouthoc commented Jun 9, 2022

Will check this today.

@flouthoc
Collaborator

flouthoc commented Jun 9, 2022

Hi @nycnewman ,

I tried replaying this again. In the example above (#13676 (comment)), your second push overrode the first manifest push.

You can see my steps below

  1. build --platform linux/arm64 --platform linux/amd64 with --manifest
 sudo podman-remote build --log-level debug --platform linux/arm64/v8 --platform linux/amd64 --manifest quay.io/myuser/test .
INFO[0000] podman-remote filtering at log level debug   
DEBU[0000] Called build.PersistentPreRunE(podman-remote build --log-level debug --platform linux/arm64/v8 --platform linux/amd64 --manifest quay.io/myuser/test .) 
DEBU[0000] DoRequest Method: GET URI: http://d/v4.2.0/libpod/_ping 
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf" 
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf" 
DEBU[0000] Found credentials for quay.io in credential helper containers-auth.json in file /run/containers/0/auth.json 
DEBU[0000] DoRequest Method: POST URI: http://d/v4.2.0/libpod/build 
[linux/amd64] STEP 1/2: FROM alpine
Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob sha256:2408cc74d12b6cd092bb8b516ba7d5e290f485d3eb9672efc00f0583730179e8
Copying blob sha256:2408cc74d12b6cd092bb8b516ba7d5e290f485d3eb9672efc00f0583730179e8
Copying config sha256:e66264b98777e12192600bf9b4d663655c98a090072e1bab49e233d7531d1294
Writing manifest to image destination
Storing signatures
[linux/amd64] STEP 2/2: COPY hello .
[linux/amd64] COMMIT
--> b7c96a30183
[linux/arm64] STEP 1/2: FROM alpine
[Warning] one or more build args were not consumed: [TARGETARCH TARGETOS TARGETPLATFORM]
b7c96a3018383649dc1282e45d5f96413ff7479d6e5bfe58b71c5d9291c84909
Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob sha256:b3c136eddcbf2003d3180787cef00f39d46b9fd9e4623178282ad6a8d63ad3b0
Copying blob sha256:b3c136eddcbf2003d3180787cef00f39d46b9fd9e4623178282ad6a8d63ad3b0
Copying config sha256:6e30ab57aeeef1ebca8ac5a6ea05b5dd39d54990be94e7be18bb969a02d10a3f
Writing manifest to image destination
Storing signatures
[linux/arm64] STEP 2/2: COPY hello .
[linux/arm64] COMMIT
--> 86093f1cdc4
[Warning] one or more build args were not consumed: [TARGETARCH TARGETOS TARGETPLATFORM]
86093f1cdc47c066aa70c832aa344172ddbc48d62f7b4a6a4220af68f96f9e6f
DEBU[0021] Called build.PersistentPostRunE(podman-remote build --log-level debug --platform linux/arm64/v8 --platform linux/amd64 --manifest quay.io/myuser/test .) 
  2. sudo podman-remote manifest inspect quay.io/myuser/test should show both images:
sudo podman manifest inspect quay.io/myuser/test
{
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
    "manifests": [
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "size": 750,
            "digest": "sha256:253ce960c995eae163a718a37f28ae7fcbc4621e0fa8b8e3ea31b9f87f679dc6",
            "platform": {
                "architecture": "amd64",
                "os": "linux"
            }
        },
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "size": 750,
            "digest": "sha256:2aa9c83e071de9b67f67be1fa07b9e6553516adf1f91f52f8ec6587f1af6bc5d",
            "platform": {
                "architecture": "arm64",
                "os": "linux",
                "variant": "v8"
            }
        }
    ]
}
  3. Check with sudo podman-remote images:
sudo podman-remote images

REPOSITORY                TAG         IMAGE ID      CREATED         SIZE
quay.io/myuser/test       latest      b4d48dc64828  13 minutes ago  1.09 kB   
<none>                    <none>      86093f1cdc47  13 minutes ago  5.56 MB    <---- actual arm64 image
<none>                    <none>      b7c96a301838  13 minutes ago  5.82 MB   <--- actual amd64 image
  4. Perform the push with sudo podman-remote manifest push quay.io/myuser/test quay.io/myuser/test and check on the registry.
    [quay.io screenshot]

Note: I think one issue is that you are mixing podman image and podman manifest commands. podman images shows the built images without any name, i.e. <none>; only the manifest will be tagged. Please use manifest push and manifest inspect.

Could you please update your podman client and podman-machine and try the steps above? Since this works with podman-remote, I see no reason for it not to work on macOS. Please also take note of the point above.

But if you still think I missed something or got something wrong, please comment below and we can reopen the issue :)

Cheers
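The verification in step 2 can be scripted with jq (already used earlier in this thread). A sketch, using a sample JSON value that mirrors the shape of the manifest-inspect output above; in real use you would pipe podman manifest inspect quay.io/myuser/test into the same jq filter:

```shell
# Extract the architectures from a manifest list, as produced by
# `podman manifest inspect`. The sample JSON mirrors the output above.
manifest='{"manifests":[
  {"platform":{"architecture":"amd64","os":"linux"}},
  {"platform":{"architecture":"arm64","os":"linux","variant":"v8"}}]}'

if command -v jq >/dev/null 2>&1; then
    echo "$manifest" | jq -r '.manifests[].platform.architecture'
    # expected: amd64 and arm64, one per line
fi
```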

@flouthoc flouthoc closed this as completed Jun 9, 2022
@nycnewman
Author

@flouthoc Everything works as you describe except the last step.

Nothing gets to docker.com:

sudo podman-remote --log-level debug manifest push docker.com/nycnewman/web-frontend:0.4 docker.com/nycnewman/web-frontend:0.4
INFO[0000] podman-remote filtering at log level debug
DEBU[0000] Called push.PersistentPreRunE(podman-remote --log-level debug manifest push docker.com/nycnewman/web-frontend:0.4 docker.com/nycnewman/web-frontend:0.4)
DEBU[0000] SSH Ident Key "/Users/edwardnewman/.ssh/podman-machine-default" SHA256:BSrlWKbezpDjEGankd+Dv0xjISCgPBLFugZup5AIfbU ssh-ed25519
DEBU[0000] Found SSH_AUTH_SOCK "/private/tmp/com.apple.launchd.0hgtBR4HKa/Listeners", ssh-agent signer(s) enabled
DEBU[0000] DoRequest Method: GET URI: http://d/v4.1.0/libpod/_ping
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] No credentials matching gcr.io found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0000] No credentials matching gcr.io found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0000] Looking up in credential helper gcloud based on credHelpers entry in /Users/edwardnewman/.docker/config.json
DEBU[0001] Found credentials for gcr.io in credential helper containers-auth.json in file /Users/edwardnewman/.docker/config.json
DEBU[0001] No credentials matching staging-k8s.gcr.io found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0001] No credentials matching staging-k8s.gcr.io found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0001] Looking up in credential helper gcloud based on credHelpers entry in /Users/edwardnewman/.docker/config.json
DEBU[0001] Found credentials for staging-k8s.gcr.io in credential helper containers-auth.json in file /Users/edwardnewman/.docker/config.json
DEBU[0001] No credentials matching asia.gcr.io found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0001] No credentials matching asia.gcr.io found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0001] Looking up in credential helper gcloud based on credHelpers entry in /Users/edwardnewman/.docker/config.json
DEBU[0002] Found credentials for asia.gcr.io in credential helper containers-auth.json in file /Users/edwardnewman/.docker/config.json
DEBU[0002] No credentials matching registry.neuvector.com found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0002] No credentials matching registry.neuvector.com found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0002] No credentials matching registry.neuvector.com found in /Users/edwardnewman/.dockercfg
DEBU[0002] No credentials for registry.neuvector.com found
DEBU[0002] Found credentials for docker.com in credential helper containers-auth.json in file /Users/edwardnewman/.config/containers/auth.json
DEBU[0002] No credentials matching eu.gcr.io found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0002] No credentials matching eu.gcr.io found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0002] Looking up in credential helper gcloud based on credHelpers entry in /Users/edwardnewman/.docker/config.json
DEBU[0003] Found credentials for eu.gcr.io in credential helper containers-auth.json in file /Users/edwardnewman/.docker/config.json
DEBU[0003] No credentials matching marketplace.gcr.io found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0003] No credentials matching marketplace.gcr.io found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0003] Looking up in credential helper gcloud based on credHelpers entry in /Users/edwardnewman/.docker/config.json
DEBU[0003] Found credentials for marketplace.gcr.io in credential helper containers-auth.json in file /Users/edwardnewman/.docker/config.json
DEBU[0003] No credentials matching us.gcr.io found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0003] No credentials matching us.gcr.io found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0003] Looking up in credential helper gcloud based on credHelpers entry in /Users/edwardnewman/.docker/config.json
DEBU[0004] Found credentials for us.gcr.io in credential helper containers-auth.json in file /Users/edwardnewman/.docker/config.json
DEBU[0004] No credentials matching docker.io found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0004] No credentials matching docker.io found in /Users/edwardnewman/.config/containers/auth.json
DEBU[0004] No credentials matching docker.io found in /Users/edwardnewman/.dockercfg
DEBU[0004] No credentials for docker.io found
DEBU[0004] DoRequest Method: POST URI: http://d/v4.1.0/libpod/manifests/docker.com%2Fnycnewman%2Fweb-frontend:0.4/registry/docker.com%2Fnycnewman%2Fweb-frontend:0.4
DEBU[0010] Called push.PersistentPostRunE(podman-remote --log-level debug manifest push docker.com/nycnewman/web-frontend:0.4 docker.com/nycnewman/web-frontend:0.4)

@nycnewman
Author

Please ignore and leave closed. Issue resolved: wrong authentication credentials. Thanks

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 20, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 20, 2023