podman command displays warning for systemd even when cgroupfs is specified #12802

Closed
xunpan opened this issue Jan 11, 2022 · 3 comments · Fixed by #12834
Labels
kind/bug Categorizes issue or PR as related to a bug.
locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@xunpan

xunpan commented Jan 11, 2022

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Steps to reproduce the issue:

  1. A root user runs podman info in a cgroup v2 environment with the default configuration:
[lsfadmin@aquacade1 ~]$ cat /etc/os-release 
NAME="Red Hat Enterprise Linux"
VERSION="8.4 (Ootpa)"
[root@aquacade1 test]# podman info | grep cgroup
  cgroupControllers:
  cgroupManager: systemd
  cgroupVersion: v2
[root@aquacade1 test]# podman info --cgroup-manager cgroupfs| grep cgroup                                                                                                                
  cgroupControllers:
  cgroupManager: cgroupfs
  cgroupVersion: v2

This shows that --cgroup-manager works as a global argument.

  2. However, for a non-root user this does not hold; the command emits warnings:
[lsfadmin@aquacade1 ~]$ podman info > /dev/null
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available                                                                                         
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1000` (possibly as root)                                                                                
WARN[0000] Falling back to --cgroup-manager=cgroupfs
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available                                                                                         
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1000` (possibly as root)                                                                                
WARN[0000] Falling back to --cgroup-manager=cgroupfs
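
(For reference, the lingering route these warnings suggest would look roughly like this, run as root or via sudo. The UID 1000 is taken from the warning above; this is only a sketch of the suggested commands, not something verified in this environment:)

$ sudo loginctl enable-linger 1000
$ loginctl show-user 1000 --property=Linger
Linger=yes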

That part is fine: it means systemd is used by default. I'd like to specify cgroupfs to remove these warnings, but that fails:

[lsfadmin@aquacade1 ~]$ podman info --cgroup-manager=cgroupfs > /dev/null                                                                                                                
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available                                                                                         
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1000` (possibly as root)                                                                                
WARN[0000] Falling back to --cgroup-manager=cgroupfs
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available                                                                                         
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1000` (possibly as root)                                                                                
WARN[0000] Falling back to --cgroup-manager=cgroupfs
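
(As an aside: the same override can also be set persistently in containers.conf. A sketch, assuming the standard rootless path; the [engine] table and its cgroup_manager key are documented in containers.conf(5). This should sidestep the warnings only if the file-based setting is applied before the check runs, which the patch in the comments below suggests is the case:)

$ mkdir -p ~/.config/containers
$ printf '[engine]\ncgroup_manager = "cgroupfs"\n' >> ~/.config/containers/containers.conf
$ podman info | grep cgroupManager
  cgroupManager: cgroupfs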

Describe the results you received:
When --cgroup-manager=cgroupfs is specified on the podman command line, it still reports that systemd is used and prints the warnings.

Describe the results you expected:
No warnings are printed, and podman info --cgroup-manager=cgroupfs shows cgroupfs as the cgroup manager.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman info (includes the podman version block):

host:
  arch: amd64
  buildahVersion: 1.22.3
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.29-1.module+el8.4.0+11822+6cc1e7d7.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: ae467a0c8001179d4d0adf4ada381108a893d7ec'
  cpus: 4
  distribution:
    distribution: '"rhel"'
    version: "8.4"
  eventLogger: file
  hostname: aquacade1
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 4.18.0-305.25.1.el8_4.x86_64
  linkmode: dynamic
  memFree: 6418661376
  memTotal: 8145698816
  ociRuntime:
    name: runc
    package: runc-1.0.0-74.rc95.module+el8.4.0+11822+6cc1e7d7.x86_64
    path: /usr/bin/runc
    version: |-
      runc version spec: 1.0.2-dev
      go: go1.15.13
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    path: /tmp/podman-run-1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module+el8.4.0+11822+6cc1e7d7.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 17179865088
  swapTotal: 17179865088
  uptime: 25h 5m 41.74s (Approximately 1.04 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/lsfadmin/.config/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 2
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.6-1.module+el8.4.0+11822+6cc1e7d7.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.6
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/lsfadmin/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 1
  runRoot: /tmp/podman-run-1000/containers
  volumePath: /home/lsfadmin/.local/share/containers/storage/volumes
version:
  APIVersion: 3.3.1
  Built: 1632213702
  BuiltTime: Tue Sep 21 01:41:42 2021
  GitCommit: ""
  GoVersion: go1.16.7
  OsArch: linux/amd64
  Version: 3.3.1


Output of podman info --debug:

(paste your output here)

Package info (e.g. output of rpm -q podman or apt list podman):

(paste your output here)

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes/No

Additional environment details (AWS, VirtualBox, physical, etc.):

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Jan 11, 2022
@mheon
Member

mheon commented Jan 12, 2022

@giuseppe PTAL

@giuseppe
Member

giuseppe commented Jan 12, 2022

We check the cgroup driver too early; the following patch solves the issue for me:

diff --git a/libpod/runtime.go b/libpod/runtime.go
index 9794b3605..73cb5dc78 100644
--- a/libpod/runtime.go
+++ b/libpod/runtime.go
@@ -170,7 +170,6 @@ func NewRuntime(ctx context.Context, options ...RuntimeOption) (*Runtime, error)
        if err != nil {
                return nil, err
        }
-       conf.CheckCgroupsAndAdjustConfig()
        return newRuntimeFromConfig(ctx, conf, options...)
 }
 
@@ -228,6 +227,8 @@ func newRuntimeFromConfig(ctx context.Context, conf *config.Config, options ...R
                return nil, err
        }
 
+       conf.CheckCgroupsAndAdjustConfig()
+
        return runtime, nil

I'll open a PR
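
(To make the ordering issue concrete, here is a minimal, self-contained Go sketch of why the check has to run after the runtime options are applied. The names are illustrative stand-ins, not the actual libpod types:)

package main

import "fmt"

// Config stands in for the runtime configuration; CgroupManager starts at
// the default loaded from containers.conf ("systemd" here).
type Config struct {
	CgroupManager string
}

// RuntimeOption mirrors how CLI flags such as --cgroup-manager are applied:
// each option mutates the config after it has been loaded.
type RuntimeOption func(*Config)

func WithCgroupManager(mgr string) RuntimeOption {
	return func(c *Config) { c.CgroupManager = mgr }
}

// checkCgroups stands in for CheckCgroupsAndAdjustConfig: it warns and
// falls back only when the effective manager is systemd.
func checkCgroups(c *Config) {
	if c.CgroupManager == "systemd" {
		fmt.Println("WARN: systemd manager but no user session; falling back to cgroupfs")
		c.CgroupManager = "cgroupfs"
	}
}

func newRuntime(conf *Config, opts ...RuntimeOption) {
	// Buggy order: calling checkCgroups(conf) here would still see the
	// default "systemd", because the --cgroup-manager option has not run yet.
	for _, opt := range opts {
		opt(conf)
	}
	// Fixed order: validate after all options are applied, so the check
	// sees the value the user actually asked for.
	checkCgroups(conf)
}

func main() {
	conf := &Config{CgroupManager: "systemd"}
	newRuntime(conf, WithCgroupManager("cgroupfs"))
	fmt.Println("effective manager:", conf.CgroupManager) // prints "cgroupfs", no warning
}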

giuseppe added a commit to giuseppe/libpod that referenced this issue Jan 12, 2022
move the check after the cgroup manager is set, so that
--cgroup-manager=cgroupfs is detected correctly and no warning about dbus
not being present is raised.

Closes: containers#12802

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
@giuseppe
Member

giuseppe commented Jan 12, 2022

opened a PR: #12834

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023