
"sudo: unable to resolve host" when running sudo in pod #3547

Closed
matpen opened this issue Jul 10, 2019 · 5 comments · Fixed by #3741
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments


matpen commented Jul 10, 2019

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

When running sudo in a container within a pod, a warning similar to the following is emitted on console:

sudo: unable to resolve host a0b3a5d3b2a7

This does not happen when running the same image outside of a pod, and seems to be related to the /etc/hosts file (see below).

Steps to reproduce the issue:

  1. Build an image derived from ubuntu:16.04 (let this image be myimage): introduce a system user (e.g. myuser) and install sudo (apt-get -yq install sudo);

  2. Run a dummy command via sudo using the image outside any pod, and notice that the command succeeds without warning: podman run --rm localhost/myimage:latest sudo -u myuser true

  3. Repeat the test after adding --pod new:test_pod to the command line, thereby running the image inside a pod, and notice that a warning is output: podman run --pod new:test_pod --rm localhost/myimage:latest sudo -u myuser true

Describe the results you received:

The warning sudo: unable to resolve host a0b3a5d3b2a7 is printed. Apart from that, the command seems to succeed, but I am unsure about other implications.

Describe the results you expected:

No warning should be printed.

Additional information you deem important (e.g. issue happens only occasionally):

This seems to be due to the /etc/hosts file, see below. Without pod:

matt@matt-laptop:~$ podman run --rm localhost/myimage:latest cat /etc/hosts
127.0.0.1	localhost
127.0.1.1	matt-laptop # NOTICE THIS

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.88.3.105	5d616f9a3ab4 # NOTICE THIS

With pod:

matt@matt-laptop:~$ podman run --pod test_pod --rm localhost/myimage:latest cat /etc/hosts
127.0.0.1	localhost
127.0.1.1	matt-laptop # NOTICE THIS

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.88.3.104	mrsdalloway # NOTICE THIS

After some quick research, the following issues might be related: #1745, #2504, #3405
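The mechanism can be sketched without podman: sudo obtains the container's own hostname (here the short container ID, a0b3a5d3b2a7) and tries to resolve it; the pod's /etc/hosts only carries the infra container's name (mrsdalloway above), so the lookup fails and the warning is printed. A minimal simulation of that lookup step (plain shell, no podman needed; names taken from the output above):

```shell
# Simulate sudo's hosts-file lookup against the pod's /etc/hosts contents.
hosts_file=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.88.3.104\tmrsdalloway\n' > "$hosts_file"

# Approximate "can this name be resolved from the hosts file?"
lookup() { grep -qw "$1" "$hosts_file" && echo resolved || echo unresolved; }

echo "mrsdalloway: $(lookup mrsdalloway)"     # the name the pod wrote -> resolved
echo "a0b3a5d3b2a7: $(lookup a0b3a5d3b2a7)"   # the container's actual hostname -> unresolved
```

This is only an approximation of the resolver path (a real lookup also consults DNS per nsswitch.conf), but it reproduces the mismatch the warning complains about.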

Output of podman version:

Version:            1.4.2
RemoteAPI Version:  1
Go Version:         go1.12
Git Commit:         9b6a98cfd7813513e5697888baa93318395a2055
Built:              Fri Jun 28 22:48:29 2019
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: 9b6a98cfd7813513e5697888baa93318395a2055
  go version: go1.12
  podman version: 1.4.2
host:
  BuildahVersion: 1.9.0
  Conmon:
    package: Unknown
    path: /usr/local/libexec/crio/conmon
    version: 'conmon version 0.3.0, commit: 8455ce1ef385120deb827d0f0588c04357bad4c4'
  Distribution:
    distribution: ubuntu
    version: "16.04"
  MemFree: 10103779328
  MemTotal: 16684048384
  OCIRuntime:
    package: 'runc: /usr/sbin/runc'
    path: /usr/sbin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 17036734464
  SwapTotal: 17036734464
  arch: amd64
  cpus: 8
  hostname: matt-laptop
  kernel: 4.4.0-154-generic
  os: linux
  rootless: false
  uptime: 37m 21.7s
registries:
  blocked: null
  insecure: null
  search: null
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 18
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /home/matt/data/podman/root
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 109
  RunRoot: /var/run/containers/storage
  VolumePath: /home/matt/data/podman/root/volumes

Additional environment details (AWS, VirtualBox, physical, etc.):

Physical host (laptop):

matt@matt-laptop:~$ uname -a
Linux matt-laptop 4.4.0-154-generic #181-Ubuntu SMP Tue Jun 25 05:29:03 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
matt@matt-laptop:~$ lsb_release -d
Description:	Ubuntu 16.04.6 LTS
@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jul 10, 2019

mheon commented Jul 10, 2019

Interesting... We should be sharing UTS and network namespaces, so all containers in the pod should share the hostname and /etc/hosts.

Is your pod, by chance, named mrsdalloway? (I'm assuming that's how we sourced the pod's hostname.)


matpen commented Jul 10, 2019

Is your pod, by chance, named mrsdalloway? (I'm assuming that's how we sourced the pod's hostname.)

Nope, no idea where the name comes from: it seems randomly generated to me. However, I just tested, and if I use a different pod the IP changes, but the name stays the same.

matt@matt-laptop:~$ podman run --pod new:test_pod2 --rm localhost/myimage:latest cat /etc/hosts
127.0.0.1	localhost
127.0.1.1	matt-laptop

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.88.3.106	mrsdalloway


rhatdan commented Aug 5, 2019

I think we need to raise this to a much higher priority.


mheon commented Aug 5, 2019

@baude We should discuss this one at bug scrub tomorrow, get it assigned

@haircommander

/assign
Seems to be related to #3732 and a couple of other things.
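As a stopgap until a fix lands, one plausible workaround (my assumption, not something verified in this thread) is to append the container's own hostname to /etc/hosts before invoking sudo, so its lookup succeeds. Sketched here against a scratch copy of the file; inside the container the target would be /etc/hosts itself, which requires root:

```shell
# Hypothetical workaround sketch: make the container's own hostname
# resolvable. Demonstrated on a scratch copy; in a real container you would
# append to /etc/hosts directly (as root) before running sudo.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.88.3.104\tmrsdalloway\n' > "$hosts"
printf '127.0.0.1\t%s\n' "$(hostname)" >> "$hosts"
grep -w "$(hostname)" "$hosts"    # the appended name now resolves from the file
```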

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 23, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 23, 2023