Podman containers running in "Created" status #11539

Closed
esantoro opened this issue Sep 11, 2021 · 7 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@esantoro

/kind bug

Description

As a non-root user, containers that have been up for a few days start showing in the "Created" state even though they are actually still running.

This is somewhat of a duplicate of issue #11478, except that I'm using neither systemd nor podman play kube to start pods.

Steps to reproduce the issue:

  1. Use any docker-compose.yml file and launch some containers (a minimal, hypothetical example is sketched after this list)

  2. Wait a few days

  3. Run podman ps -a
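
For illustration only, a compose file along these lines (hypothetical service name and image; any long-running image works) is enough to reproduce the setup with podman-compose:

version: "3"
services:
  web:
    image: docker.io/library/nginx:alpine
    ports:
      - "127.0.0.1:8080:80"

The project is then started with podman-compose up -d as the non-root user.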

Describe the results you received:

[containers@narnia ~]$ podman ps -a
CONTAINER ID  IMAGE                                         COMMAND               CREATED      STATUS         PORTS                     NAMES
c22870b15fe8  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Created        0.0.0.0:8086->8086/tcp    961de186fff2-infra
97a6067f23d8  docker.io/library/influxdb:2.0.6              influxd               13 days ago  Created        0.0.0.0:8086->8086/tcp    influxdb_influxd
e46788e4bdd3  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Created        127.0.0.1:8000->80/tcp    05334ea36ec2-infra
2ff2580d362d  docker.io/library/redis:6-alpine              redis-server          13 days ago  Created        127.0.0.1:8000->80/tcp    nextcloud-redis
c0737d27493e  docker.io/library/postgres:11-alpine          postgres              13 days ago  Created        127.0.0.1:8000->80/tcp    nextcloud-pgsql
bdec730cf70b  docker.io/library/nextcloud:22.1-apache       apache2-foregroun...  13 days ago  Created        127.0.0.1:8000->80/tcp    nextcloud-nextcloud
88d291263020  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Created        127.0.0.1:9000->9000/tcp  34f9d4401dd1-infra
0273f33badc0  docker.io/thelounge/thelounge:4.2.0-alpine    thelounge start       13 days ago  Created        127.0.0.1:9000->9000/tcp  tl_thelounge
d795806014ae  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Created        0.0.0.0:8200->8200/tcp    4cb19623b65a-infra
49c35e1f86be  docker.io/library/vault:1.6.3                 vault server -con...  13 days ago  Created        0.0.0.0:8200->8200/tcp    vault_vault
cf81e7a54cea  registry.access.redhat.com/ubi8/pause:latest  infinity              6 days ago   Up 6 days ago  127.0.0.1:8080->80/tcp    51ccfcf5f3c7-infra
4c0711394b54  docker.io/library/mediawiki:1.36.0            apache2-foregroun...  6 days ago   Up 6 days ago  127.0.0.1:8080->80/tcp    mw_mediawiki
9a850763f089  docker.io/library/mariadb:10.5-bionic         mysqld                6 days ago   Up 6 days ago  127.0.0.1:8080->80/tcp    mw_mariadb
a84fb02f68e8  docker.io/library/memcached:1.6.7-alpine      memcached -l 0.0....  6 days ago   Up 6 days ago  127.0.0.1:8080->80/tcp    mw_memcached
[containers@narnia ~]$ 

Inspecting a sample container also shows:

[containers@narnia ~]$ podman inspect nextcloud-nextcloud 
[
    {
        "Id": "bdec730cf70b5f251b7fdd31e24b085a7b67044b513ba79e334e52b319c20c6c",
        "Created": "2021-08-29T21:01:06.439066161+02:00",
        "Path": "/entrypoint.sh",
        "Args": [
            "apache2-foreground"
        ],
        "State": {
            "OciVersion": "1.0.2-dev",
            "Status": "configured",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2021-08-29T21:01:17.079740557+02:00",
            "FinishedAt": "0001-01-01T00:00:00Z",
            "Healthcheck": {
                "Status": "",
                "FailingStreak": 0,
                "Log": null
            }
        },

It's not Running, it's not Paused, it's not Restarting, it's not OOMKilled, it's not Dead. There's no PID.

This does not look like consistent behavior, yet podman doesn't even emit a warning about it.

Describe the results you expected:

I expect to see the containers still up:

[containers@narnia ~]$ podman ps -a
CONTAINER ID  IMAGE                                         COMMAND               CREATED      STATUS         PORTS                     NAMES
c22870b15fe8  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Up 13 days ago        0.0.0.0:8086->8086/tcp    961de186fff2-infra
97a6067f23d8  docker.io/library/influxdb:2.0.6              influxd               13 days ago  Up 13 days ago        0.0.0.0:8086->8086/tcp    influxdb_influxd
e46788e4bdd3  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Up 13 days ago        127.0.0.1:8000->80/tcp    05334ea36ec2-infra
2ff2580d362d  docker.io/library/redis:6-alpine              redis-server          13 days ago  Up 13 days ago        127.0.0.1:8000->80/tcp    nextcloud-redis
c0737d27493e  docker.io/library/postgres:11-alpine          postgres              13 days ago  Up 13 days ago        127.0.0.1:8000->80/tcp    nextcloud-pgsql
bdec730cf70b  docker.io/library/nextcloud:22.1-apache       apache2-foregroun...  13 days ago  Up 13 days ago        127.0.0.1:8000->80/tcp    nextcloud-nextcloud
88d291263020  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Up 13 days ago        127.0.0.1:9000->9000/tcp  34f9d4401dd1-infra
0273f33badc0  docker.io/thelounge/thelounge:4.2.0-alpine    thelounge start       13 days ago  Up 13 days ago        127.0.0.1:9000->9000/tcp  tl_thelounge
d795806014ae  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Up 13 days ago        0.0.0.0:8200->8200/tcp    4cb19623b65a-infra
49c35e1f86be  docker.io/library/vault:1.6.3                 vault server -con...  13 days ago  Up 13 days ago        0.0.0.0:8200->8200/tcp    vault_vault
cf81e7a54cea  registry.access.redhat.com/ubi8/pause:latest  infinity              6 days ago   Up 6 days ago  127.0.0.1:8080->80/tcp    51ccfcf5f3c7-infra
4c0711394b54  docker.io/library/mediawiki:1.36.0            apache2-foregroun...  6 days ago   Up 6 days ago  127.0.0.1:8080->80/tcp    mw_mediawiki
9a850763f089  docker.io/library/mariadb:10.5-bionic         mysqld                6 days ago   Up 6 days ago  127.0.0.1:8080->80/tcp    mw_mariadb
a84fb02f68e8  docker.io/library/memcached:1.6.7-alpine      memcached -l 0.0....  6 days ago   Up 6 days ago  127.0.0.1:8080->80/tcp    mw_memcached
[containers@narnia ~]$ 

Additional information you deem important (e.g. issue happens only occasionally):

This has been going on for a while and happens fairly regularly.

The problem is that I cannot exec into containers or follow the logs from the command line.

Output of podman version:

[containers@narnia ~]$ podman version
Version:      3.2.3
API Version:  3.2.3
Go Version:   go1.15.7
Built:        Tue Jul 27 09:29:39 2021
OS/Arch:      linux/amd64
[containers@narnia ~]$ 

Output of podman info --debug:

[containers@narnia ~]$ podman info --debug
host:
  arch: amd64
  buildahVersion: 1.21.3
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.29-1.module+el8.4.0+11822+6cc1e7d7.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: ae467a0c8001179d4d0adf4ada381108a893d7ec'
  cpus: 4
  distribution:
    distribution: '"rhel"'
    version: "8.4"
  eventLogger: file
  hostname: narnia.XXXXXXX.XXXX
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1004
      size: 1
    - container_id: 1
      host_id: 362144
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1004
      size: 1
    - container_id: 1
      host_id: 362144
      size: 65536
  kernel: 4.18.0-305.12.1.el8_4.x86_64
  linkmode: dynamic
  memFree: 5804662784
  memTotal: 16496693248
  ociRuntime:
    name: runc
    package: runc-1.0.0-74.rc95.module+el8.4.0+11822+6cc1e7d7.x86_64
    path: /usr/bin/runc
    version: |-
      runc version spec: 1.0.2-dev
      go: go1.15.13
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1004/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module+el8.4.0+11822+6cc1e7d7.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 8417964032
  swapTotal: 8417964032
  uptime: 314h 53m 28.2s (Approximately 13.08 days)
registries:
  search:
  - docker.io
store:
  configFile: /home/containers/.config/containers/storage.conf
  containerStore:
    number: 14
    paused: 0
    running: 4
    stopped: 10
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.6-1.module+el8.4.0+11822+6cc1e7d7.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.6
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/containers/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 20
  runRoot: /tmp/podman-run-1004/containers
  volumePath: /home/containers/.local/share/containers/storage/volumes
version:
  APIVersion: 3.2.3
  Built: 1627370979
  BuiltTime: Tue Jul 27 09:29:39 2021
  GitCommit: ""
  GoVersion: go1.15.7
  OsArch: linux/amd64
  Version: 3.2.3

[containers@narnia ~]$ 

Package info (e.g. output of rpm -q podman or apt list podman):

[containers@narnia ~]$ rpm -q podman
podman-3.2.3-0.10.module+el8.4.0+11989+6676f7ad.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes.

At the time of writing there is no update candidate for podman in RHEL; I ran yum update podman just to be sure.

I have checked the troubleshooting guide but didn't find anything matching the description of my problem.

Additional environment details (AWS, VirtualBox, physical, etc.):

On my system (RHEL 8.4 installed on a physical machine) the only peculiarities are:

  1. I'm using a dedicated, non-root user to run pods (lingering is enabled)
  2. I'm using podman-compose to start my containers (to re-use docker-compose files); see the sketch after this list
  3. Volumes mounted via bind mounts are backed by ZFS datasets
  4. SELinux was set to permissive mode to make sure it's not interfering.
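
For context, points 1 and 2 roughly correspond to commands like these (the dedicated account here is the "containers" user visible in the prompts above):

# as root: let the dedicated user's processes keep running after logout
loginctl enable-linger containers
# as that user, from the directory holding docker-compose.yml:
podman-compose up -d
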
openshift-ci bot added the kind/bug label Sep 11, 2021
@esantoro
Author

Since I'm running rootless podman, here's the output of podman info --log-level=debug:

[containers@narnia ~]$ podman info --log-level=debug
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called info.PersistentPreRunE(podman info --log-level=debug) 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf" 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/containers/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Overriding run root "/run/user/1004/containers" with "/tmp/podman-run-1004/containers" from database 
DEBU[0000] Overriding tmp dir "/run/user/1004/libpod/tmp" with "/tmp/run-1004/libpod/tmp" from database 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/containers/.local/share/containers/storage 
DEBU[0000] Using run root /tmp/podman-run-1004/containers 
DEBU[0000] Using static dir /home/containers/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /tmp/run-1004/libpod/tmp       
DEBU[0000] Using volume path /home/containers/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] Not configuring container store              
DEBU[0000] Initializing event backend file              
DEBU[0000] configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument 
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/runc"            
DEBU[0000] Default CNI network name podman is unchangeable 
INFO[0000] Setting parallel job count to 13             
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called info.PersistentPreRunE(podman info --log-level=debug) 
DEBU[0000] overlay storage already configured with a mount-program 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf" 
DEBU[0000] overlay storage already configured with a mount-program 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/containers/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Overriding run root "/run/user/1004/containers" with "/tmp/podman-run-1004/containers" from database 
DEBU[0000] Overriding tmp dir "/run/user/1004/libpod/tmp" with "/tmp/run-1004/libpod/tmp" from database 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/containers/.local/share/containers/storage 
DEBU[0000] Using run root /tmp/podman-run-1004/containers 
DEBU[0000] Using static dir /home/containers/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /tmp/run-1004/libpod/tmp       
DEBU[0000] Using volume path /home/containers/.local/share/containers/storage/volumes 
DEBU[0000] overlay storage already configured with a mount-program 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend file              
DEBU[0000] configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument 
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/runc"            
DEBU[0000] Default CNI network name podman is unchangeable 
INFO[0000] Setting parallel job count to 13             
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf" 
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf" 
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/001-rhel-shortnames.conf" 
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/002-rhel-shortnames-overrides.conf" 
host:
  arch: amd64
  buildahVersion: 1.21.3
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.29-1.module+el8.4.0+11822+6cc1e7d7.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: ae467a0c8001179d4d0adf4ada381108a893d7ec'
  cpus: 4
  distribution:
    distribution: '"rhel"'
    version: "8.4"
  eventLogger: file
  hostname: narnia.XXXXXXX.XX
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1004
      size: 1
    - container_id: 1
      host_id: 362144
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1004
      size: 1
    - container_id: 1
      host_id: 362144
      size: 65536
  kernel: 4.18.0-305.12.1.el8_4.x86_64
  linkmode: dynamic
  memFree: 5730873344
  memTotal: 16496693248
  ociRuntime:
    name: runc
    package: runc-1.0.0-74.rc95.module+el8.4.0+11822+6cc1e7d7.x86_64
    path: /usr/bin/runc
    version: |-
      runc version spec: 1.0.2-dev
      go: go1.15.13
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1004/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module+el8.4.0+11822+6cc1e7d7.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 8417964032
  swapTotal: 8417964032
  uptime: 315h 52m 50.1s (Approximately 13.12 days)
registries:
  search:
  - docker.io
store:
  configFile: /home/containers/.config/containers/storage.conf
  containerStore:
    number: 14
    paused: 0
    running: 4
    stopped: 10
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.6-1.module+el8.4.0+11822+6cc1e7d7.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.6
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/containers/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 20
  runRoot: /tmp/podman-run-1004/containers
  volumePath: /home/containers/.local/share/containers/storage/volumes
version:
  APIVersion: 3.2.3
  Built: 1627370979
  BuiltTime: Tue Jul 27 09:29:39 2021
  GitCommit: ""
  GoVersion: go1.15.7
  OsArch: linux/amd64
  Version: 3.2.3

DEBU[0000] Called info.PersistentPostRunE(podman info --log-level=debug) 
[containers@narnia ~]$ 

@mheon
Member

mheon commented Sep 12, 2021

Likely cause:
DEBU[0000] Using run root /tmp/podman-run-1004/containers

It looks like you're on a RHEL/CentOS 8.4 system, which means systemd is PID 1. Systemd periodically clears out directories in /tmp, including that one, which we use to detect whether a system restart has occurred. We ship a systemd-tmpfiles configuration that should tell systemd not to wipe our temporary files directory, but clearly it's not working.

Can you verify whether /usr/lib/tmpfiles.d/podman.conf exists and provide its contents if so?
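
(Side note: a quick way to check which run root podman is using, without the full debug output, should be something along these lines, assuming the Go-template field matches the info output shown above:)

podman info --format '{{.Store.RunRoot}}'
# healthy rootless setup: /run/user/<UID>/containers
# the state reported above: /tmp/podman-run-1004/containers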

@esantoro
Author

Hi and thank you for your reply.

It is indeed a RHEL 8.4 system. The file you mention does exist and below are its contents:

[root@narnia: ~]
# cat /usr/lib/tmpfiles.d/podman.conf 
# /tmp/podman-run-* directory can contain content for Podman containers that have run
# for many days. This following line prevents systemd from removing this content.
x /tmp/podman-run-*
x /tmp/containers-user-*
D! /run/podman 0700 root root
D! /var/lib/cni/networks

@esantoro
Author

I've been checking the systemd-tmpfiles configuration, mainly via systemd-tmpfiles --cat-config, and found this line that might be interfering (although it shouldn't):

# /usr/lib/tmpfiles.d/tmp.conf
# ... cut ...
# Clear tmp directories separately, to make them easier to override
q /tmp 1777 root root 10d
# ... cut ...

It shouldn't be the problem, but the ten-day age fits with the fact that the 7-day-old containers are still in a consistent state while the 13-day-old containers are not.

According to the tmpfiles.d man page, q behaves like v, which in turn behaves like d. The man page section for d states that contents of this directory are subject to time-based cleanup if the age argument is specified.

This is the only path pattern that would match /tmp.
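
(For reference, a same-named drop-in under /etc/tmpfiles.d takes precedence over the file in /usr/lib/tmpfiles.d, so a hypothetical way to exempt /tmp from age-based cleanup would be something like:)

# /etc/tmpfiles.d/tmp.conf -- hypothetical override; "-" in the age field disables time-based cleanup
q /tmp 1777 root root -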

I'll be checking the containers again in about three days.

At this point, honestly, I think it would probably be better to have that run root reside somewhere else.

@mheon can I change it to something like $HOME/.tmp? Do you think that would be a good idea?

@esantoro
Author

@mheon I tried setting age to 6d (six days) in /usr/lib/tmpfiles.d/tmp.conf and then triggering a cleanup via systemd-tmpfiles --clean.
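
Concretely, the experiment was roughly this (as root; the 6d age is my temporary edit, not the shipped default):

# in /usr/lib/tmpfiles.d/tmp.conf, shorten the age on the /tmp line:
#   q /tmp 1777 root root 6d
# then force a cleanup pass immediately instead of waiting for the timer:
systemd-tmpfiles --clean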

I'm now 99% sure that's the problem. Running podman ps -a, I got a huge number of errors, and now all the containers are in the Created state:

[containers@narnia ~]$ podman ps -a
CONTAINER ID  IMAGE                                         COMMAND               CREATED      STATUS          PORTS                     NAMES
c22870b15fe8  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Created         0.0.0.0:8086->8086/tcp    961de186fff2-infra
97a6067f23d8  docker.io/library/influxdb:2.0.6              influxd               13 days ago  Created         0.0.0.0:8086->8086/tcp    influxdb_influxd
e46788e4bdd3  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Created         127.0.0.1:8000->80/tcp    05334ea36ec2-infra
2ff2580d362d  docker.io/library/redis:6-alpine              redis-server          13 days ago  Created         127.0.0.1:8000->80/tcp    nextcloud-redis
c0737d27493e  docker.io/library/postgres:11-alpine          postgres              13 days ago  Created         127.0.0.1:8000->80/tcp    nextcloud-pgsql
bdec730cf70b  docker.io/library/nextcloud:22.1-apache       apache2-foregroun...  13 days ago  Created         127.0.0.1:8000->80/tcp    nextcloud-nextcloud
88d291263020  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Created         127.0.0.1:9000->9000/tcp  34f9d4401dd1-infra
0273f33badc0  docker.io/thelounge/thelounge:4.2.0-alpine    thelounge start       13 days ago  Created         127.0.0.1:9000->9000/tcp  tl_thelounge
cf81e7a54cea  registry.access.redhat.com/ubi8/pause:latest  infinity              7 days ago   Up 7 days ago   127.0.0.1:8080->80/tcp    51ccfcf5f3c7-infra
4c0711394b54  docker.io/library/mediawiki:1.36.0            apache2-foregroun...  7 days ago   Up 7 days ago   127.0.0.1:8080->80/tcp    mw_mediawiki
9a850763f089  docker.io/library/mariadb:10.5-bionic         mysqld                7 days ago   Up 7 days ago   127.0.0.1:8080->80/tcp    mw_mariadb
a84fb02f68e8  docker.io/library/memcached:1.6.7-alpine      memcached -l 0.0....  7 days ago   Up 7 days ago   127.0.0.1:8080->80/tcp    mw_memcached
7f67f7e92b67  registry.access.redhat.com/ubi8/pause:latest  infinity              2 hours ago  Up 2 hours ago  0.0.0.0:8200->8200/tcp    1d305d6f4cd9-infra
6760da60f696  docker.io/library/vault:1.6.3                 vault server -con...  2 hours ago  Up 2 hours ago  0.0.0.0:8200->8200/tcp    vault_vault
[containers@narnia ~]$ podman ps -a
ERRO[0000] Error refreshing container 0273f33badc0a9960cb90b56ea86373eef17433c8d7e04c2bae85243bb5b5218: error acquiring lock 17 for container 0273f33badc0a9960cb90b56ea86373eef17433c8d7e04c2bae85243bb5b5218: file exists 
ERRO[0000] Error refreshing container 2ff2580d362de28c1d120ddcf5ac28f7eee81a90728d6f10ba4016274d085908: error acquiring lock 11 for container 2ff2580d362de28c1d120ddcf5ac28f7eee81a90728d6f10ba4016274d085908: file exists 
ERRO[0000] Error refreshing container 4c0711394b54fc3750ab534c383544b13deb2b51e0301001366d11511e5ceb7e: error acquiring lock 5 for container 4c0711394b54fc3750ab534c383544b13deb2b51e0301001366d11511e5ceb7e: file exists 
ERRO[0000] Error refreshing container 6760da60f69644c210d913c223fb95066fa3c0765ff993e26a4ecf42ca494e84: error acquiring lock 19 for container 6760da60f69644c210d913c223fb95066fa3c0765ff993e26a4ecf42ca494e84: file exists 
ERRO[0000] Error refreshing container 7f67f7e92b671d403d2ad05cbd854e5a1310ef7a1fd682bdf2749bd046ee9036: error acquiring lock 18 for container 7f67f7e92b671d403d2ad05cbd854e5a1310ef7a1fd682bdf2749bd046ee9036: file exists 
ERRO[0000] Error refreshing container 88d291263020c7a02470c9fbf7bde662504a4c3c21c086802b271a06e5b944ad: error acquiring lock 16 for container 88d291263020c7a02470c9fbf7bde662504a4c3c21c086802b271a06e5b944ad: file exists 
ERRO[0000] Error refreshing container 97a6067f23d86865c16788628a744de261b02eaa332f0a431438b849ea7561c9: error acquiring lock 2 for container 97a6067f23d86865c16788628a744de261b02eaa332f0a431438b849ea7561c9: file exists 
ERRO[0000] Error refreshing container 9a850763f089227cf4e9cd2bce69cfd69a70399a04e7f9c705eaafead6633413: error acquiring lock 6 for container 9a850763f089227cf4e9cd2bce69cfd69a70399a04e7f9c705eaafead6633413: file exists 
ERRO[0000] Error refreshing container a84fb02f68e874ac62c86489b649c41472a29dea81916d40e070cba7d72117df: error acquiring lock 7 for container a84fb02f68e874ac62c86489b649c41472a29dea81916d40e070cba7d72117df: file exists 
ERRO[0000] Error refreshing container bdec730cf70b5f251b7fdd31e24b085a7b67044b513ba79e334e52b319c20c6c: error acquiring lock 14 for container bdec730cf70b5f251b7fdd31e24b085a7b67044b513ba79e334e52b319c20c6c: file exists 
ERRO[0000] Error refreshing container c0737d27493e8f16da3ccc62649ee0b23f7d0bd317ef4891d55601940c0d038d: error acquiring lock 13 for container c0737d27493e8f16da3ccc62649ee0b23f7d0bd317ef4891d55601940c0d038d: file exists 
ERRO[0000] Error refreshing container c22870b15fe8d5c6f5885f2f9239ddc6a58cf46c7389c132f7dbaa5e6740747e: error acquiring lock 1 for container c22870b15fe8d5c6f5885f2f9239ddc6a58cf46c7389c132f7dbaa5e6740747e: file exists 
ERRO[0000] Error refreshing container cf81e7a54cea312b1f6a6d5ea8c7f1f2e7a6c38eaf46adc3ea313b2ae5f4c364: error acquiring lock 4 for container cf81e7a54cea312b1f6a6d5ea8c7f1f2e7a6c38eaf46adc3ea313b2ae5f4c364: file exists 
ERRO[0000] Error refreshing container e46788e4bdd3077e4afaaa2412a764ccaf024e4c9d1b3d2c686bc10aeba789ce: error acquiring lock 10 for container e46788e4bdd3077e4afaaa2412a764ccaf024e4c9d1b3d2c686bc10aeba789ce: file exists 
ERRO[0000] Error refreshing pod 05334ea36ec2767a95b7e23f7abe0d4dcaaa97e95ecd84e1497fad57c5eb2392: error retrieving lock 9 for pod 05334ea36ec2767a95b7e23f7abe0d4dcaaa97e95ecd84e1497fad57c5eb2392: file exists 
ERRO[0000] Error refreshing pod 1d305d6f4cd94fe85e88d6e14fdb0e48b697ff379079c83537020d4bc22cd7df: error retrieving lock 8 for pod 1d305d6f4cd94fe85e88d6e14fdb0e48b697ff379079c83537020d4bc22cd7df: file exists 
ERRO[0000] Error refreshing pod 34f9d4401dd16a65b4786d93c966f2946fcdd4da0bef57f43a85803677074cee: error retrieving lock 15 for pod 34f9d4401dd16a65b4786d93c966f2946fcdd4da0bef57f43a85803677074cee: file exists 
ERRO[0000] Error refreshing pod 51ccfcf5f3c721c4a2c6cdb1e39c37b6b3b6bad7afd522ed331a922bdcc78fa7: error retrieving lock 3 for pod 51ccfcf5f3c721c4a2c6cdb1e39c37b6b3b6bad7afd522ed331a922bdcc78fa7: file exists 
ERRO[0000] Error refreshing pod 961de186fff2b3fb58070c29a4a068dc942ce1423c4755ca2e9a58aff2af25c0: error retrieving lock 0 for pod 961de186fff2b3fb58070c29a4a068dc942ce1423c4755ca2e9a58aff2af25c0: file exists 
ERRO[0000] Error refreshing volume 01bfa45f6747b0b49047f626537739fdb490938a7b2c29b166ba6343d36321d0: error acquiring lock 23 for volume 01bfa45f6747b0b49047f626537739fdb490938a7b2c29b166ba6343d36321d0: file exists 
ERRO[0000] Error refreshing volume 0a275cee93ebbfa21611347c1ce18576248fdeec5ae12b7d00fa7bbb5fbf5676: error acquiring lock 31 for volume 0a275cee93ebbfa21611347c1ce18576248fdeec5ae12b7d00fa7bbb5fbf5676: file exists 
ERRO[0000] Error refreshing volume 105af1a9d75d8dcd931fbdc6f27ffea1062f6550175d9961639b498bb33d06a8: error acquiring lock 16 for volume 105af1a9d75d8dcd931fbdc6f27ffea1062f6550175d9961639b498bb33d06a8: file exists 
ERRO[0000] Error refreshing volume 146a6ddf4b130509f6d3f9ac5ff96547ffb0f3ec36142fe23dc0b96deef0636d: error acquiring lock 4 for volume 146a6ddf4b130509f6d3f9ac5ff96547ffb0f3ec36142fe23dc0b96deef0636d: file exists 
ERRO[0000] Error refreshing volume 1aedb186f313ad0bb7b0a3d65a3a33f4226eca2e15f0b0789e8c0402749b242f: error acquiring lock 15 for volume 1aedb186f313ad0bb7b0a3d65a3a33f4226eca2e15f0b0789e8c0402749b242f: file exists 
ERRO[0000] Error refreshing volume 1cae45702e79e263f46841e17e4d946e77290e493d6b61e41287afc94a7dc259: error acquiring lock 3 for volume 1cae45702e79e263f46841e17e4d946e77290e493d6b61e41287afc94a7dc259: file exists 
ERRO[0000] Error refreshing volume 237cb45d55fadda581cf7784db57626ea7b1055d80c3bad655db09630663a296: error acquiring lock 36 for volume 237cb45d55fadda581cf7784db57626ea7b1055d80c3bad655db09630663a296: file exists 
ERRO[0000] Error refreshing volume 26e6c52a54a23cbba5834151ae2b028ab7843f3eccbea2dab2f994f811213166: error acquiring lock 11 for volume 26e6c52a54a23cbba5834151ae2b028ab7843f3eccbea2dab2f994f811213166: file exists 
ERRO[0000] Error refreshing volume 271f7e7efa8ce2e8051b6cc1ea64c4cf91148d5f7c7df6a9686111d33fee591a: error acquiring lock 16 for volume 271f7e7efa8ce2e8051b6cc1ea64c4cf91148d5f7c7df6a9686111d33fee591a: file exists 
ERRO[0000] Error refreshing volume 288486e925c9d4f1be6e95f9d6c090f47c1d8e984d5ed71d41d88bc80151984f: error acquiring lock 3 for volume 288486e925c9d4f1be6e95f9d6c090f47c1d8e984d5ed71d41d88bc80151984f: file exists 
ERRO[0000] Error refreshing volume 307b001abb8c3c27e59dad1e604e1fe2f279483db8c84f368d26e62a0fba0621: error acquiring lock 13 for volume 307b001abb8c3c27e59dad1e604e1fe2f279483db8c84f368d26e62a0fba0621: file exists 
ERRO[0000] Error refreshing volume 3239c8494df4182109a9e9ba14ca91736ab83638634f8adba953e7c94835ce1b: error acquiring lock 5 for volume 3239c8494df4182109a9e9ba14ca91736ab83638634f8adba953e7c94835ce1b: file exists 
ERRO[0000] Error refreshing volume 36198ecd6a0136eb3b0723be108f22e2d84b13126e82c1f7d8295b07648ababd: error acquiring lock 12 for volume 36198ecd6a0136eb3b0723be108f22e2d84b13126e82c1f7d8295b07648ababd: file exists 
ERRO[0000] Error refreshing volume 389c3b53334ad43a47114490aeae6d39c817e44c5a086b9f1d6f74c6af914104: error acquiring lock 18 for volume 389c3b53334ad43a47114490aeae6d39c817e44c5a086b9f1d6f74c6af914104: file exists 
ERRO[0000] Error refreshing volume 562ca8751a8a90d7112fca660b3f7d47b38c4a3bcb57277466d575947eab84fb: error acquiring lock 11 for volume 562ca8751a8a90d7112fca660b3f7d47b38c4a3bcb57277466d575947eab84fb: file exists 
ERRO[0000] Error refreshing volume 56a3bf411cecde074d5e8cdb30ab40a7d6a53826eb011a0b61784e254df2a529: error acquiring lock 16 for volume 56a3bf411cecde074d5e8cdb30ab40a7d6a53826eb011a0b61784e254df2a529: file exists 
ERRO[0000] Error refreshing volume 5b23e4702bdf0e493918d22f63bfd8e53bbcb037fade7fc68b20c2c1a2a61d25: error acquiring lock 1 for volume 5b23e4702bdf0e493918d22f63bfd8e53bbcb037fade7fc68b20c2c1a2a61d25: file exists 
ERRO[0000] Error refreshing volume 5ed332ff6a6a26407fbd6df2ed5029e7bfb4dc46f31d52d4611010a195c53a96: error acquiring lock 12 for volume 5ed332ff6a6a26407fbd6df2ed5029e7bfb4dc46f31d52d4611010a195c53a96: file exists 
ERRO[0000] Error refreshing volume 633eb0d73823dcb49951f976968fd5d031ebba633a77fcd6facfb5c7e71766ef: error acquiring lock 39 for volume 633eb0d73823dcb49951f976968fd5d031ebba633a77fcd6facfb5c7e71766ef: file exists 
ERRO[0000] Error refreshing volume 6974dc448cb8f38db05a2356fed74377048ac5100070d8741d59f2c08028d126: error acquiring lock 6 for volume 6974dc448cb8f38db05a2356fed74377048ac5100070d8741d59f2c08028d126: file exists 
ERRO[0000] Error refreshing volume 6b763e6bf466ee6f11f5116b3db7adcfbd5a0c91206b5eb6206d0a6e3510fb69: error acquiring lock 38 for volume 6b763e6bf466ee6f11f5116b3db7adcfbd5a0c91206b5eb6206d0a6e3510fb69: file exists 
ERRO[0000] Error refreshing volume 6bd0d4d287e7d4928a9fd1f4a20b859622b44200645113d37cbbd8252bf2ce21: error acquiring lock 4 for volume 6bd0d4d287e7d4928a9fd1f4a20b859622b44200645113d37cbbd8252bf2ce21: file exists 
ERRO[0000] Error refreshing volume 6cfe06afba13db80a2c30fa4883904a0deb82f42653c1ef390b16433ca9721cf: error acquiring lock 27 for volume 6cfe06afba13db80a2c30fa4883904a0deb82f42653c1ef390b16433ca9721cf: file exists 
ERRO[0000] Error refreshing volume 6d7c8241d643e3d3c6d7b833c3d2fb6294f1c1b11a3c65765c7e3356fa768232: error acquiring lock 25 for volume 6d7c8241d643e3d3c6d7b833c3d2fb6294f1c1b11a3c65765c7e3356fa768232: file exists 
ERRO[0000] Error refreshing volume 70dd9fd5c7b02b13a5f89d422d1d594bb825987ea93535f127cccc634fbd7bdb: error acquiring lock 34 for volume 70dd9fd5c7b02b13a5f89d422d1d594bb825987ea93535f127cccc634fbd7bdb: file exists 
ERRO[0000] Error refreshing volume 723409bf7c39f48799507f9dac11e0b454bcac37c13743b3d83625ad74025d7c: error acquiring lock 32 for volume 723409bf7c39f48799507f9dac11e0b454bcac37c13743b3d83625ad74025d7c: file exists 
ERRO[0000] Error refreshing volume 7749d0e09e3d42d95a0bf97cc1019450e6b8abc23cedf0f4950a68013f8c77e6: error acquiring lock 12 for volume 7749d0e09e3d42d95a0bf97cc1019450e6b8abc23cedf0f4950a68013f8c77e6: file exists 
ERRO[0000] Error refreshing volume 7d42d7a47b0c6a9a49d4687a418dd8c4b1494e6b5f3bb53da2f4dde6f26c8a6f: error acquiring lock 21 for volume 7d42d7a47b0c6a9a49d4687a418dd8c4b1494e6b5f3bb53da2f4dde6f26c8a6f: file exists 
ERRO[0000] Error refreshing volume 7f2b8c6ca06ebb9684ab6053285aaac28ea43cc091e7d40603d6fab18a260568: error acquiring lock 14 for volume 7f2b8c6ca06ebb9684ab6053285aaac28ea43cc091e7d40603d6fab18a260568: file exists 
ERRO[0000] Error refreshing volume 80c210f16bb6cd76f2929c3f4f7d32a129df703054bcd7896beb1f1931218f42: error acquiring lock 11 for volume 80c210f16bb6cd76f2929c3f4f7d32a129df703054bcd7896beb1f1931218f42: file exists 
ERRO[0000] Error refreshing volume 814ca4cc608aa90b31628637df1bede16aa5ecea1a8928d6ff4b5fefe6b66454: error acquiring lock 12 for volume 814ca4cc608aa90b31628637df1bede16aa5ecea1a8928d6ff4b5fefe6b66454: file exists 
ERRO[0000] Error refreshing volume 897396fc64b8d5405dd79a0c698ebeb151f0a5f79562244b16c71c7c310c867f: error acquiring lock 17 for volume 897396fc64b8d5405dd79a0c698ebeb151f0a5f79562244b16c71c7c310c867f: file exists 
ERRO[0000] Error refreshing volume 8a9ca53fd221f64f7dc8c3fecfd4986d57b7b75544b64943e626c0277eeef337: error acquiring lock 12 for volume 8a9ca53fd221f64f7dc8c3fecfd4986d57b7b75544b64943e626c0277eeef337: file exists 
ERRO[0000] Error refreshing volume 8caf65f2a44518bef33a0be75a70ed89eb51d58df5510fded46c80861aa8c44c: error acquiring lock 29 for volume 8caf65f2a44518bef33a0be75a70ed89eb51d58df5510fded46c80861aa8c44c: file exists 
ERRO[0000] Error refreshing volume 8df5f7c88edd501f2f2804607094565d9efe0ccf24076f691c6789c066c74863: error acquiring lock 26 for volume 8df5f7c88edd501f2f2804607094565d9efe0ccf24076f691c6789c066c74863: file exists 
ERRO[0000] Error refreshing volume 8dfcab4869b23975d5dce163abe16cbec7c2b8ff11a163c4e7f13baf3acacef3: error acquiring lock 20 for volume 8dfcab4869b23975d5dce163abe16cbec7c2b8ff11a163c4e7f13baf3acacef3: file exists 
ERRO[0000] Error refreshing volume 91e1aa67330d66a59a38c15a59037690095db3ae0e2b6609ff6688d4392ac182: error acquiring lock 35 for volume 91e1aa67330d66a59a38c15a59037690095db3ae0e2b6609ff6688d4392ac182: file exists 
ERRO[0000] Error refreshing volume 94a86342a32227e1dde85bf552489fea99d439f6f56b6d4e4f16b3b29088abc4: error acquiring lock 33 for volume 94a86342a32227e1dde85bf552489fea99d439f6f56b6d4e4f16b3b29088abc4: file exists 
ERRO[0000] Error refreshing volume 95d88fd709538fe12bb1e6fdab59f64a0f9532b1aec287f6e2922106733339fc: error acquiring lock 3 for volume 95d88fd709538fe12bb1e6fdab59f64a0f9532b1aec287f6e2922106733339fc: file exists 
ERRO[0000] Error refreshing volume 99ae93b5664386ad6503cbb6e77a1b675acb9b2329362c3201f840b9c7054efd: error acquiring lock 11 for volume 99ae93b5664386ad6503cbb6e77a1b675acb9b2329362c3201f840b9c7054efd: file exists 
ERRO[0000] Error refreshing volume 99ba7064f0a3e425e6f00250394408216da6c19a45fdcaa40a47979070ecbbb7: error acquiring lock 13 for volume 99ba7064f0a3e425e6f00250394408216da6c19a45fdcaa40a47979070ecbbb7: file exists 
ERRO[0000] Error refreshing volume 9a24cf7a3925bafaed27b42d76ff47b42a7acaeb957cc83b04be0e9179cd67b8: error acquiring lock 20 for volume 9a24cf7a3925bafaed27b42d76ff47b42a7acaeb957cc83b04be0e9179cd67b8: file exists 
ERRO[0000] Error refreshing volume 9b06441ecfdcd30a0f3c15ebf8d26b5dab37aad46dba3cfd44a0717714eac38e: error acquiring lock 16 for volume 9b06441ecfdcd30a0f3c15ebf8d26b5dab37aad46dba3cfd44a0717714eac38e: file exists 
ERRO[0000] Error refreshing volume 9b741c24835f3c6816669c9833f3e79631abf8f3a39c24154f448a8692f6a017: error acquiring lock 17 for volume 9b741c24835f3c6816669c9833f3e79631abf8f3a39c24154f448a8692f6a017: file exists 
ERRO[0000] Error refreshing volume a35e9b123858282d6fefd5a5279397fe1deae4f2b0c2f2abb174818e000ebdf6: error acquiring lock 19 for volume a35e9b123858282d6fefd5a5279397fe1deae4f2b0c2f2abb174818e000ebdf6: file exists 
ERRO[0000] Error refreshing volume a3c32b4de23e9d71a62b01d82d77daffb78cfe9368499ec1d768cf0c4f9499a3: error acquiring lock 17 for volume a3c32b4de23e9d71a62b01d82d77daffb78cfe9368499ec1d768cf0c4f9499a3: file exists 
ERRO[0000] Error refreshing volume a8d7e61880e46bd46ce68fa961dc5731f314630f9d3540445e0919d08041ce09: error acquiring lock 15 for volume a8d7e61880e46bd46ce68fa961dc5731f314630f9d3540445e0919d08041ce09: file exists 
ERRO[0000] Error refreshing volume b156962333e25cb60fbff19323174c63b5b3ca544cdb734c4e3d2c5517a7fe50: error acquiring lock 28 for volume b156962333e25cb60fbff19323174c63b5b3ca544cdb734c4e3d2c5517a7fe50: file exists 
ERRO[0000] Error refreshing volume b5233a6b4984289d500fc9fe9d81fd012a294742bc735f1558874c163ccb0cff: error acquiring lock 10 for volume b5233a6b4984289d500fc9fe9d81fd012a294742bc735f1558874c163ccb0cff: file exists 
ERRO[0000] Error refreshing volume b9ac9fa1bfb7cd26a86a753a9024511f1d23e0ed8d53e1cf7c1d62dad956c504: error acquiring lock 19 for volume b9ac9fa1bfb7cd26a86a753a9024511f1d23e0ed8d53e1cf7c1d62dad956c504: file exists 
ERRO[0000] Error refreshing volume b9b237139c82ff9b43ed2ca96ceb701127fa3c9f762d391093472f247191e814: error acquiring lock 22 for volume b9b237139c82ff9b43ed2ca96ceb701127fa3c9f762d391093472f247191e814: file exists 
ERRO[0000] Error refreshing volume bdf7fb45ffa7a41968a8d321f8fa06ec7af09b72a21fb1074c7f38ffaa0b307f: error acquiring lock 14 for volume bdf7fb45ffa7a41968a8d321f8fa06ec7af09b72a21fb1074c7f38ffaa0b307f: file exists 
ERRO[0000] Error refreshing volume c6cc8f07274b782eb4a8046197a55b2324b7b853cd76c72584ac8f10eb2f42ac: error acquiring lock 15 for volume c6cc8f07274b782eb4a8046197a55b2324b7b853cd76c72584ac8f10eb2f42ac: file exists 
ERRO[0000] Error refreshing volume c8f2b5d4818cac56d3c56a74d079a63bd981fddabdc5e9e328975d39b5fc29e5: error acquiring lock 7 for volume c8f2b5d4818cac56d3c56a74d079a63bd981fddabdc5e9e328975d39b5fc29e5: file exists 
ERRO[0000] Error refreshing volume c92533b09a4a5592aa7caabc7a5cd004ce29dffb7c104f8aaea198d3ff496c65: error acquiring lock 2 for volume c92533b09a4a5592aa7caabc7a5cd004ce29dffb7c104f8aaea198d3ff496c65: file exists 
ERRO[0000] Error refreshing volume ccd17ec7349877bffbca5e7fdb54eb9306901377c14b7351a13652f144d72cfa: error acquiring lock 5 for volume ccd17ec7349877bffbca5e7fdb54eb9306901377c14b7351a13652f144d72cfa: file exists 
ERRO[0000] Error refreshing volume d16414fa50abddfca02d97c42d5c8c2967ccf0c6653818da29789d53c29a0020: error acquiring lock 29 for volume d16414fa50abddfca02d97c42d5c8c2967ccf0c6653818da29789d53c29a0020: file exists 
ERRO[0000] Error refreshing volume d32e6a5eaad70817204f453f03eb743f062b8661dda6cc08ec1f5ca796278ff8: error acquiring lock 24 for volume d32e6a5eaad70817204f453f03eb743f062b8661dda6cc08ec1f5ca796278ff8: file exists 
ERRO[0000] Error refreshing volume d43c348b0c9a2ad4be9c36560d86818afb6f1efab86c6bc1616a3056e1cd833d: error acquiring lock 12 for volume d43c348b0c9a2ad4be9c36560d86818afb6f1efab86c6bc1616a3056e1cd833d: file exists 
ERRO[0000] Error refreshing volume d945cb9657398e13ed4324be29e75b295e89f1378256bfa958766fa8485a225e: error acquiring lock 14 for volume d945cb9657398e13ed4324be29e75b295e89f1378256bfa958766fa8485a225e: file exists 
ERRO[0000] Error refreshing volume db709b9aea8bfb98ff650608d65abfef498003dc87ca22648153a6533fd5a2f9: error acquiring lock 17 for volume db709b9aea8bfb98ff650608d65abfef498003dc87ca22648153a6533fd5a2f9: file exists 
ERRO[0000] Error refreshing volume e5ddb387d2d06e75a78aed2bab519140bd8fe9441e26e4b6ceffd6ea60d2f32c: error acquiring lock 28 for volume e5ddb387d2d06e75a78aed2bab519140bd8fe9441e26e4b6ceffd6ea60d2f32c: file exists 
ERRO[0000] Error refreshing volume e64f80b7462b3366eacb142477833a4d6a31e4d52deb2d1a19efca6c71abd96e: error acquiring lock 4 for volume e64f80b7462b3366eacb142477833a4d6a31e4d52deb2d1a19efca6c71abd96e: file exists 
ERRO[0000] Error refreshing volume e7b0988c1b406e933c072a2478699e4906685cebb64b9665a6c2d2daeeb8c0c5: error acquiring lock 30 for volume e7b0988c1b406e933c072a2478699e4906685cebb64b9665a6c2d2daeeb8c0c5: file exists 
ERRO[0000] Error refreshing volume e7c9de063540b68e5c6485ccba3859a5b83242556da0086455db15f7c823fc8f: error acquiring lock 37 for volume e7c9de063540b68e5c6485ccba3859a5b83242556da0086455db15f7c823fc8f: file exists 
ERRO[0000] Error refreshing volume eecca71cbec4e1eb9b8cde224dd58e04e210ccfe45930ee82fd3b7cb79c6503d: error acquiring lock 21 for volume eecca71cbec4e1eb9b8cde224dd58e04e210ccfe45930ee82fd3b7cb79c6503d: file exists 
ERRO[0000] Error refreshing volume f1865b4ca63793242bb6bbd66859322ef17da162805eb92ee8052b2cee7f677e: error acquiring lock 11 for volume f1865b4ca63793242bb6bbd66859322ef17da162805eb92ee8052b2cee7f677e: file exists 
ERRO[0000] Error refreshing volume f57d3746f00df346cc1bfb7a3451b2aef341951e61bc570098d0fc6f267ba025: error acquiring lock 20 for volume f57d3746f00df346cc1bfb7a3451b2aef341951e61bc570098d0fc6f267ba025: file exists 
ERRO[0000] Error refreshing volume f70da795eae370b83cbddffb507c1ee620860dbdcf8287e89c81b10a5e8e6931: error acquiring lock 17 for volume f70da795eae370b83cbddffb507c1ee620860dbdcf8287e89c81b10a5e8e6931: file exists 
ERRO[0000] Error refreshing volume fc1b4f46cf344da6d4ca7ee2cbb6074de958b46ea621b070eeb1507141cf69dc: error acquiring lock 14 for volume fc1b4f46cf344da6d4ca7ee2cbb6074de958b46ea621b070eeb1507141cf69dc: file exists 
CONTAINER ID  IMAGE                                         COMMAND               CREATED      STATUS      PORTS                     NAMES
c22870b15fe8  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Created     0.0.0.0:8086->8086/tcp    961de186fff2-infra
97a6067f23d8  docker.io/library/influxdb:2.0.6              influxd               13 days ago  Created     0.0.0.0:8086->8086/tcp    influxdb_influxd
e46788e4bdd3  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Created     127.0.0.1:8000->80/tcp    05334ea36ec2-infra
2ff2580d362d  docker.io/library/redis:6-alpine              redis-server          13 days ago  Created     127.0.0.1:8000->80/tcp    nextcloud-redis
c0737d27493e  docker.io/library/postgres:11-alpine          postgres              13 days ago  Created     127.0.0.1:8000->80/tcp    nextcloud-pgsql
bdec730cf70b  docker.io/library/nextcloud:22.1-apache       apache2-foregroun...  13 days ago  Created     127.0.0.1:8000->80/tcp    nextcloud-nextcloud
88d291263020  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Created     127.0.0.1:9000->9000/tcp  34f9d4401dd1-infra
0273f33badc0  docker.io/thelounge/thelounge:4.2.0-alpine    thelounge start       13 days ago  Created     127.0.0.1:9000->9000/tcp  tl_thelounge
cf81e7a54cea  registry.access.redhat.com/ubi8/pause:latest  infinity              7 days ago   Created     127.0.0.1:8080->80/tcp    51ccfcf5f3c7-infra
4c0711394b54  docker.io/library/mediawiki:1.36.0            apache2-foregroun...  7 days ago   Created     127.0.0.1:8080->80/tcp    mw_mediawiki
9a850763f089  docker.io/library/mariadb:10.5-bionic         mysqld                7 days ago   Created     127.0.0.1:8080->80/tcp    mw_mariadb
a84fb02f68e8  docker.io/library/memcached:1.6.7-alpine      memcached -l 0.0....  7 days ago   Created     127.0.0.1:8080->80/tcp    mw_memcached
7f67f7e92b67  registry.access.redhat.com/ubi8/pause:latest  infinity              2 hours ago  Created     0.0.0.0:8200->8200/tcp    1d305d6f4cd9-infra
6760da60f696  docker.io/library/vault:1.6.3                 vault server -con...  2 hours ago  Created     0.0.0.0:8200->8200/tcp    vault_vault
[containers@narnia ~]$ podman ps -a
CONTAINER ID  IMAGE                                         COMMAND               CREATED      STATUS      PORTS                     NAMES
c22870b15fe8  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Created     0.0.0.0:8086->8086/tcp    961de186fff2-infra
97a6067f23d8  docker.io/library/influxdb:2.0.6              influxd               13 days ago  Created     0.0.0.0:8086->8086/tcp    influxdb_influxd
e46788e4bdd3  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Created     127.0.0.1:8000->80/tcp    05334ea36ec2-infra
2ff2580d362d  docker.io/library/redis:6-alpine              redis-server          13 days ago  Created     127.0.0.1:8000->80/tcp    nextcloud-redis
c0737d27493e  docker.io/library/postgres:11-alpine          postgres              13 days ago  Created     127.0.0.1:8000->80/tcp    nextcloud-pgsql
bdec730cf70b  docker.io/library/nextcloud:22.1-apache       apache2-foregroun...  13 days ago  Created     127.0.0.1:8000->80/tcp    nextcloud-nextcloud
88d291263020  registry.access.redhat.com/ubi8/pause:latest  infinity              13 days ago  Created     127.0.0.1:9000->9000/tcp  34f9d4401dd1-infra
0273f33badc0  docker.io/thelounge/thelounge:4.2.0-alpine    thelounge start       13 days ago  Created     127.0.0.1:9000->9000/tcp  tl_thelounge
cf81e7a54cea  registry.access.redhat.com/ubi8/pause:latest  infinity              7 days ago   Created     127.0.0.1:8080->80/tcp    51ccfcf5f3c7-infra
4c0711394b54  docker.io/library/mediawiki:1.36.0            apache2-foregroun...  7 days ago   Created     127.0.0.1:8080->80/tcp    mw_mediawiki
9a850763f089  docker.io/library/mariadb:10.5-bionic         mysqld                7 days ago   Created     127.0.0.1:8080->80/tcp    mw_mariadb
a84fb02f68e8  docker.io/library/memcached:1.6.7-alpine      memcached -l 0.0....  7 days ago   Created     127.0.0.1:8080->80/tcp    mw_memcached
7f67f7e92b67  registry.access.redhat.com/ubi8/pause:latest  infinity              2 hours ago  Created     0.0.0.0:8200->8200/tcp    1d305d6f4cd9-infra
6760da60f696  docker.io/library/vault:1.6.3                 vault server -con...  2 hours ago  Created     0.0.0.0:8200->8200/tcp    vault_vault
[containers@narnia ~]$ 

Needless to say, the first podman ps -a was run before triggering systemd-tmpfiles --clean, the second just after that command, and the third once "the dust has settled".

I'd say this is a smoking gun.

@Luap99
Member

Luap99 commented Sep 12, 2021

This is the same issue as #11478.
You need to make sure /tmp/run-1004/libpod/tmp is not deleted.

What you really need to do is run podman system reset (this will delete everything), then make sure you log in properly to this user account; do not use su or sudo! You have to use ssh user@localhost or machinectl shell user@. With a proper systemd session you should be able to use podman without any problems. If you then run podman info you should see /run/user/1004/containers as the runroot directory.
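
Roughly, the recovery sequence would be something like this (run as the rootless user; note that podman system reset wipes that user's containers, images, and volumes):

podman system reset               # removes all containers, images and volumes for this user
# log back in through a real session, e.g.:
#   ssh containers@localhost
# then verify the run root:
podman info | grep -i runroot     # should now report /run/user/<UID>/containers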

Luap99 closed this as completed Sep 12, 2021
@esantoro
Author

I performed podman system reset, logged in via machinectl, ran podman info, and runRoot now shows up as /run/user/1004/containers.

github-actions bot added the locked - please file new issue/PR label and locked this issue as resolved Sep 21, 2023