Why does podman_image ignore "registries.insecure"? #23

Closed
atomlab opened this issue Apr 17, 2020 · 9 comments · Fixed by #170

Comments

@atomlab

atomlab commented Apr 17, 2020

I have an error

- name: Pull an cardano image
  containers.podman.podman_image:
    name: "hub:9080/cardano"
    tag: 3.2.0-13
    auth_file: /root/.podman-auth.json
    state: present

TASK [currency/cardano-podman : Pull an cardano image] *************************************************
fatal: [ada-03]: FAILED! => {"changed": false, "msg": "Failed to pull image hub:9080/cardano:3.2.0-13. Error: Error: error pulling image "hub:9080/cardano:3.2.0-13": unable to pull hub:9080/cardano:3.2.0-13: unable to pull image: Error initializing source docker://hub:9080/cardano:3.2.0-13: error pinging docker registry hub:9080: Get https://hub:9080/v2/: http: server gave HTTP response to HTTPS client\n"}

But if I run podman pull manually it works fine.

root@ada-03:~# podman pull -q hub:9080/cardano:3.2.0-13
151ea90a6aa201aa601e29d19807ccd3726156808a6473fce5209f6d73b24e5e
root@ada-03:~#

Does podman_image ignore the /etc/containers/registries.conf file?

# cat /etc/containers/registries.conf
[registries.search]
registries = ['docker.io', 'quay.io']

[registries.insecure]
registries = ['hub:9080', 'hub:9090']
@sshnaidm
Member

@atomlab please provide the output of podman version and podman info --debug. The Ansible playbook and its output would be useful too: export ANSIBLE_DEBUG=1; ansible-playbook -vvvv playbook.yaml

@atomlab
Author

atomlab commented Apr 20, 2020

@sshnaidm Thank you for the response.

root@podman:~# podman version
Version:            1.9.0
RemoteAPI Version:  1
Go Version:         go1.10.1
OS/Arch:            linux/amd64
root@podman:~# podman info --debug
debug:
  compiler: gc
  gitCommit: ""
  goVersion: go1.10.1
  podmanVersion: 1.9.0
host:
  arch: amd64
  buildahVersion: 1.14.8
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.15, commit: '
  cpus: 12
  distribution:
    distribution: ubuntu
    version: "18.04"
  eventLogger: journald
  hostname: podman
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.15.0-72-generic
  memFree: 64194744320
  memTotal: 64331997184
  ociRuntime:
    name: runc
    package: 'runc: /usr/sbin/runc'
    path: /usr/sbin/runc
    version: 'runc version spec: 1.0.1-dev'
  os: linux
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 0
  swapTotal: 0
  uptime: 10m 4.78s
registries:
  hub:9080:
    Blocked: false
    Insecure: true
    Location: hub:9080
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: hub:9080
  hub:9090:
    Blocked: false
    Insecure: true
    Location: hub:9090
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: hub:9090
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 1
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
# export ANSIBLE_DEBUG=1; ansible-playbook playbook.yml -l podman -t deploy:node -Dvvv

...
fatal: [podman]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "auth_file": "/root/.podman-auth.json",
            "build": {
                "annotation": null,
                "cache": true,
                "force_rm": null,
                "format": "oci",
                "rm": true,
                "volume": null
            },
            "ca_cert_dir": null,
            "executable": "podman",
            "force": false,
            "name": "hub:9080/cardano",
            "password": null,
            "path": null,
            "pull": true,
            "push": false,
            "push_args": {
                "compress": null,
                "dest": null,
                "format": null,
                "remove_signatures": null,
                "sign_by": null,
                "transport": null
            },
            "state": "present",
            "tag": "3.2.0-13",
            "username": null,
            "validate_certs": true
        }
    },
    "msg": "Failed to pull image hub:9080/cardano:3.2.0-13"
}
...

Output log

Podman pull manually works fine.

root@podman:~# podman pull hub:9080/cardano:3.2.0-13
Trying to pull hub:9080/cardano:3.2.0-13...
Getting image source signatures
Copying blob 5bed26d33875 done
Copying blob acc60e104895 done
Copying blob 930bda195c84 done
Copying blob f11b29a9c730 done
Copying blob 40ee0d0205f4 done
Copying blob 78bf9a5ad49e done
Copying blob 91386f096b17 done
Copying blob b1e2f97182ff done
Copying blob d6c9c246832e done
Copying blob 731e8217b58b done
Copying blob 7c5ae171e9cc done
Copying blob a712c48838d0 done
Copying blob f2db5ba09cf6 done
Copying blob e551179d1940 done
Copying config 151ea90a6a done
Writing manifest to image destination
Storing signatures
151ea90a6aa201aa601e29d19807ccd3726156808a6473fce5209f6d73b24e5e

@sshnaidm
Member

@atomlab could it be because of auth_file: /root/.podman-auth.json? Maybe you can try without it.
Try the podman command with this argument: podman pull --authfile /root/.podman-auth.json hub:9080/cardano:3.2.0-13.
Usually the default auth file is ${XDG_RUNTIME_DIR}/containers/auth.json.
If you can, try using this file http://paste.openstack.org/show/792416/ as podman_image.py: just copy it to ~/.ansible/collections/ansible_collections/containers/podman/plugins/modules/podman_image.py. I added a few more log messages there.

@sshnaidm
Member

@atomlab just to be sure, the registries config is configured on the host where the module runs, right? Not on the host that Ansible runs on.

@atomlab
Author

atomlab commented Apr 20, 2020

@atomlab just to be sure, the registries config is configured on the host where the module runs, right? Not on the host that Ansible runs on.

Of course. This is the host system where the podman module runs, not the Ansible control host.

root@podman:~# cat /etc/containers/registries.conf

[registries.search]
registries = ['docker.io', 'quay.io']

[registries.insecure]
registries = ['hub:9080', 'hub:9090']
1. I have updated podman_image.py.
2. I have run podman login, and auth.json has been saved into /run/containers/0/auth.json.
root@podman:~# cat  /run/containers/0/auth.json

{
	"auths": {
		"hub:9080": {
			"auth": "amVua2luczpFVGVCWThLUWpod0RibnR5d0Y1Mg=="
		},
		"hub:9090": {
			"auth": "amVua2luczpFVGVCWThLUWpod0RibnR5d0Y1Mg=="
		}
	}
}

Playbook output

fatal: [podman]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "auth_file": null,
            "build": {
                "annotation": null,
                "cache": true,
                "force_rm": null,
                "format": "oci",
                "rm": true,
                "volume": null
            },
            "ca_cert_dir": null,
            "executable": "podman",
            "force": false,
            "name": "hub:9080/cardano",
            "password": null,
            "path": null,
            "pull": true,
            "push": false,
            "push_args": {
                "compress": null,
                "dest": null,
                "format": null,
                "remove_signatures": null,
                "sign_by": null,
                "transport": null
            },
            "state": "present",
            "tag": "3.2.0-13",
            "username": null,
            "validate_certs": true
        }
    },
    "msg": "Failed to pull image hub:9080/cardano:3.2.0-13stdout:  stderr: Error: error pulling image \"hub:9080/cardano:3.2.0-13\": unable to pull hub:9080/cardano:3.2.0-13: unable to pull image: Error initializing source docker://hub:9080/cardano:3.2.0-13: error pinging docker registry hub:9080: Get https://hub:9080/v2/: http: server gave HTTP response to HTTPS client\n"
}

Error

"msg": "Failed to pull image hub:9080/cardano:3.2.0-13stdout: stderr: Error: error pulling image "hub:9080/cardano:3.2.0-13": unable to pull hub:9080/cardano:3.2.0-13: unable to pull image: Error initializing source docker://hub:9080/cardano:3.2.0-13: error pinging docker registry hub:9080: Get https://hub:9080/v2/: http: server gave HTTP response to HTTPS client\n"

Getting podman info --debug with Ansible

- name: get podman info --debug
  command: podman info --debug
  register: podman_info 

Output

"stdout_lines": [
        "debug:",
        "  compiler: gc",
        "  gitCommit: \"\"",
        "  goVersion: go1.10.1",
        "  podmanVersion: 1.9.0",
        "host:",
        "  arch: amd64",
        "  buildahVersion: 1.14.8",
        "  cgroupVersion: v1",
        "  conmon:",
        "    package: 'conmon: /usr/libexec/podman/conmon'",
        "    path: /usr/libexec/podman/conmon",
        "    version: 'conmon version 2.0.15, commit: '",
        "  cpus: 12",
        "  distribution:",
        "    distribution: ubuntu",
        "    version: \"18.04\"",
        "  eventLogger: journald",
        "  hostname: podman",
        "  idMappings:",
        "    gidmap: null",
        "    uidmap: null",
        "  kernel: 4.15.0-72-generic",
        "  memFree: 63848034304",
        "  memTotal: 64331997184",
        "  ociRuntime:",
        "    name: runc",
        "    package: 'runc: /usr/sbin/runc'",
        "    path: /usr/sbin/runc",
        "    version: 'runc version spec: 1.0.1-dev'",
        "  os: linux",
        "  rootless: false",
        "  slirp4netns:",
        "    executable: \"\"",
        "    package: \"\"",
        "    version: \"\"",
        "  swapFree: 0",
        "  swapTotal: 0",
        "  uptime: 2h 46m 3.98s (Approximately 0.08 days)",
        "registries:",
        "  hub:9080:",
        "    Blocked: false",
        "    Insecure: true",
        "    Location: hub:9080",
        "    MirrorByDigestOnly: false",
        "    Mirrors: []",
        "    Prefix: hub:9080",
        "  hub:9090:",
        "    Blocked: false",
        "    Insecure: true",
        "    Location: hub:9090",
        "    MirrorByDigestOnly: false",
        "    Mirrors: []",
        "    Prefix: hub:9090",
        "  search:",
        "  - docker.io",
        "  - quay.io",
        "store:",
        "  configFile: /etc/containers/storage.conf",
        "  containerStore:",
        "    number: 0",
        "    paused: 0",
        "    running: 0",
        "    stopped: 0",
        "  graphDriverName: overlay",
        "  graphOptions: {}",
        "  graphRoot: /var/lib/containers/storage",
        "  graphStatus:",
        "    Backing Filesystem: extfs",
        "    Native Overlay Diff: \"true\"",
        "    Supports d_type: \"true\"",
        "    Using metacopy: \"false\"",
        "  imageStore:",
        "    number: 0",
        "  runRoot: /var/run/containers/storage",
        "  volumePath: /var/lib/containers/storage/volumes"
    ]
}

Run podman info --debug manually on the podman host

root@podman:~# podman info --debug
debug:
  compiler: gc
  gitCommit: ""
  goVersion: go1.10.1
  podmanVersion: 1.9.0
host:
  arch: amd64
  buildahVersion: 1.14.8
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.15, commit: '
  cpus: 12
  distribution:
    distribution: ubuntu
    version: "18.04"
  eventLogger: journald
  hostname: podman
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.15.0-72-generic
  memFree: 63875788800
  memTotal: 64331997184
  ociRuntime:
    name: runc
    package: 'runc: /usr/sbin/runc'
    path: /usr/sbin/runc
    version: 'runc version spec: 1.0.1-dev'
  os: linux
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 0
  swapTotal: 0
  uptime: 2h 38m 23.83s (Approximately 0.08 days)
registries:
  hub:9080:
    Blocked: false
    Insecure: true
    Location: hub:9080
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: hub:9080
  hub:9090:
    Blocked: false
    Insecure: true
    Location: hub:9090
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: hub:9090
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 0
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes

root@podman:~#

@sshnaidm
Member

@atomlab any progress on that or new info?
Is it still an issue?

@atomlab
Author

atomlab commented May 10, 2020

@sshnaidm Unfortunately, I haven't found a solution to this problem yet. I will try to debug the podman Python library later.

@swarred

swarred commented Sep 25, 2020

@atomlab @sshnaidm To whomever this helps: this is resolved by setting validate_certs: false (the parameter documented here). I'm not sure why registries.conf is not honored; it might be related to the default user that Ansible runs as.
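
For reference, a minimal sketch of the original task with that parameter added (image name, tag, and auth file are carried over from the report above; depending on your registry the auth_file may not be needed):

- name: Pull the cardano image from the insecure registry
  containers.podman.podman_image:
    name: "hub:9080/cardano"
    tag: 3.2.0-13
    auth_file: /root/.podman-auth.json
    validate_certs: false  # skip TLS verification for this pull
    state: present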

@ghost

ghost commented Oct 19, 2020

I am experiencing the same problem.

I think this is because validate_certs defaults to True here:

validate_certs=dict(type='bool', default=True, aliases=['tlsverify', 'tls_verify']),

So later --tls-verify is always added, because it is not None:

if self.validate_certs is not None:
    if self.validate_certs:
        args.append('--tls-verify')
    else:
        args.append('--tls-verify=false')

Workaround
Set validate_certs to null in your Ansible task.
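
A minimal sketch of that workaround applied to the task from the original report (image name, tag, and auth file are assumptions carried over from above). With validate_certs left at null, the code above adds no --tls-verify flag at all, so podman falls back to the insecure setting in /etc/containers/registries.conf:

- name: Pull the cardano image, deferring TLS policy to registries.conf
  containers.podman.podman_image:
    name: "hub:9080/cardano"
    tag: 3.2.0-13
    auth_file: /root/.podman-auth.json
    validate_certs: null  # no --tls-verify flag is passed; registries.conf decides
    state: present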
