
[v1.8] /kind bug Unable to access application publicly/outside after exposing port with podman #5167

Closed
barseghyanartur opened this issue Feb 11, 2020 · 35 comments · Fixed by #5245
Labels
locked - please file new issue/PR · rootless

Comments

@barseghyanartur commented Feb 11, 2020

BUG REPORT

Description

This is essentially an exact copy of #4715.

I have 6 containers running, and podman ps confirms they are.
However, netstat -ntlp does not list the ports allocated by the containers.
Each container can reach all the others internally, but nothing is reachable from outside the containers.
So if my API runs on port 8000, I can't access it from the host, but from inside any of the containers I can.
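For reference, a host-side check for whether the published ports are actually bound (a sketch; the port list mirrors my compose file, adjust as needed):

```shell
# Under rootless podman the forwarder process holds the published host
# ports, so they should show up as listening sockets even without root.
for port in 8000 8080 9200 5601; do
    if ss -ntl 2>/dev/null | grep -q ":${port} "; then
        echo "port ${port}: listening"
    else
        echo "port ${port}: NOT listening"
    fi
done
```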

Steps to reproduce the issue:

podman-compose up

Describe the results you received:

Containers running, but ports are not accessible from outside containers.

Describe the results you expected:

I expect ports to be accessible from outside containers.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:            1.8.0
RemoteAPI Version:  1
Go Version:         go1.13.6
OS/Arch:            linux/amd64

Output of podman info --debug:

$ podman info --debug
debug:
  compiler: gc
  git commit: ""
  go version: go1.13.6
  podman version: 1.8.0
host:
  BuildahVersion: 1.13.1
  CgroupVersion: v2
  Conmon:
    package: conmon-2.0.10-2.fc31.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.10, commit: 6b526d9888abb86b9e7de7dfdeec0da98ad32ee0'
  Distribution:
    distribution: fedora
    version: "31"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 2328224
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 2328224
      size: 65536
  MemFree: 393715712
  MemTotal: 16372674560
  OCIRuntime:
    name: crun
    package: crun-0.12.1-1.fc31.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.12.1
      commit: df5f2b2369b3d9f36d175e1183b26e5cee55dd0a
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  SwapFree: 412708864
  SwapTotal: 943714304
  arch: amd64
  cpus: 8
  eventlogger: journald
  hostname: localhost.localdomain
  kernel: 5.4.17-200.fc31.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: slirp4netns-0.4.0-20.1.dev.gitbbd6f25.fc31.x86_64
    Version: |-
      slirp4netns version 0.4.0-beta.3+dev
      commit: bbd6f25c70d5db2a1cd3bfb0416a8db99a75ed7e
  uptime: 56m 40.69s
registries:
  search:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - quay.io
store:
  ConfigFile: /home/user.local/.config/containers/storage.conf
  ContainerStore:
    number: 20
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.5-2.fc31.x86_64
      Version: |-
        fusermount3 version: 3.6.2
        fuse-overlayfs: version 0.7.5
        FUSE library version 3.6.2
        using FUSE kernel interface version 7.29
  GraphRoot: /home/user.local/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 19
  RunRoot: /run/user/1001
  VolumePath: /home/user.local/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.8.0-2.fc31.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):

Fedora 31

@mheon (Member) commented Feb 11, 2020

Since you're using podman-compose, I assume you're running rootless?

@mheon (Member) commented Feb 11, 2020

@AkihiroSuda PTAL - potential issue with the port forwarder?

@barseghyanartur (Author)

@mheon:

Yep.

@mheon (Member) commented Feb 11, 2020

Any chance you can provide the Podman commands in use? Can you verify that the pod has ports attached to it (podman pod inspect ought to print that)?
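The check being asked for can be scripted roughly like this (a sketch; my_project is the pod name from the reporter's setup, and the grep is just a crude filter):

```shell
# Verify that the port bindings are actually recorded on the pod's infra
# container; rootless forwarding can only set up what is listed here.
if command -v podman >/dev/null 2>&1; then
    podman pod inspect my_project 2>/dev/null \
        | grep -A 6 '"infraPortBindings"' \
        || echo "pod not found or no bindings"
else
    echo "podman not available; skipping check"
fi
```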

@AkihiroSuda (Collaborator)

Also, could you try podman --log-level debug run -it ...?

@barseghyanartur (Author)

@mheon:

podman pod inspect ec2a1221b120
{
     "Config": {
          "id": "ec2a1221b120b0dbfcd7e8de54111be4404e889c3e89af17239c8da74d06f126",
          "name": "my_project",
          "hostname": "my_project",
          "labels": {
               
          },
          "cgroupParent": "/libpod_parent",
          "sharesCgroup": true,
          "sharesNet": true,
          "infraConfig": {
               "makeInfraContainer": true,
               "infraPortBindings": [
                    {
                         "hostPort": 9300,
                         "containerPort": 9300,
                         "protocol": "tcp",
                         "hostIP": ""
                    },
                    {
                         "hostPort": 8888,
                         "containerPort": 8888,
                         "protocol": "tcp",
                         "hostIP": "127.0.0.1"
                    },
                    {
                         "hostPort": 9200,
                         "containerPort": 9200,
                         "protocol": "tcp",
                         "hostIP": ""
                    },
                    {
                         "hostPort": 5601,
                         "containerPort": 5601,
                         "protocol": "tcp",
                         "hostIP": ""
                    },
                    {
                         "hostPort": 3306,
                         "containerPort": 3306,
                         "protocol": "tcp",
                         "hostIP": "127.0.0.1"
                    },
                    {
                         "hostPort": 8000,
                         "containerPort": 8000,
                         "protocol": "tcp",
                         "hostIP": "127.0.0.1"
                    },
                    {
                         "hostPort": 8080,
                         "containerPort": 8080,
                         "protocol": "tcp",
                         "hostIP": ""
                    },
                    {
                         "hostPort": 5044,
                         "containerPort": 5044,
                         "protocol": "tcp",
                         "hostIP": ""
                    }
               ]
          },
          "created": "2020-02-11T16:08:30.267960766+01:00",
          "lockID": 0
     },
     "State": {
          "cgroupPath": "/libpod_parent/ec2a1221b120b0dbfcd7e8de54111be4404e889c3e89af17239c8da74d06f126",
          "infraContainerID": "e5fba11063281cef4192211ac005af0a64b556a7d5c73df96d1e6590c6d2ef04"
     },
     "Containers": [
          {
               "id": "141d14e2a761e102a8bb41ddff2f8f292f29874cba6fa25469e7c581de9ec8b4",
               "state": "running"
          },
          {
               "id": "2f654957ed82ca6209a017695cfe4377956e39a93dbc17449b595d1a724dbfc5",
               "state": "running"
          },
          {
               "id": "37f9613cb7fbd9f3298aa7224d40d817b02d87d3480bde0e807c824ffc0278a5",
               "state": "running"
          },
          {
               "id": "50a7744234c818dc606ec92114ecda86c94b8d391323156ce4a0b2601e6c2013",
               "state": "running"
          },
          {
               "id": "6cf216f02c8f82b3c370c80d54202a4f4b405068a1c1e731ebbfe026821651e0",
               "state": "running"
          },
          {
               "id": "a66d18aad7b8d2969684b2eac05f070ef67799c11e25c9e297bee1820cd8267e",
               "state": "exited"
          },
          {
               "id": "b3f61f82be937d54d5d5fa5020ba98e3dcda51226eeabdba0bee12df849db71b",
               "state": "running"
          },
          {
               "id": "c1ab5b0ca7cbb48aa3ab324be13f2d9affbd12321822f620c519236ee55f4fd0",
               "state": "running"
          },
          {
               "id": "e5fba11063281cef4192211ac005af0a64b556a7d5c73df96d1e6590c6d2ef04",
               "state": "running"
          },
          {
               "id": "ee8e89f6ee0bcb4c19564f6afc934a96703d26daff92601fc5bcb848f686d6d5",
               "state": "running"
          }
     ]
}
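One detail worth flagging in the output above: 8888, 3306 and 8000 are bound to hostIP 127.0.0.1 and are therefore loopback-only by design, while the entries with an empty hostIP should bind all interfaces. A quick way to pull the pairs out of a saved inspect dump (a sketch; assumes the JSON above was saved as pod.json):

```shell
# List hostPort / hostIP pairs; an empty hostIP means "all interfaces",
# while 127.0.0.1 means reachable from the host only, never externally.
grep -E '"(hostPort|hostIP)"' pod.json | sed 's/[",]//g'
```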

@barseghyanartur (Author)

@AkihiroSuda:

$ podman exec -it --log-level=debug my_project_elasticsearch_1 /bin/bash
DEBU[0000] Reading configuration file "/home/user.local/.config/containers/libpod.conf" 
DEBU[0000] Merged system config "/home/user.local/.config/containers/libpod.conf": &{{false false false true true true} 0 {   [] [] []} /home/user.local/.local/share/containers/storage/volumes docker://  /usr/bin/crun map[runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc]] [] [] [] [/usr/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] cgroupfs /usr/libexec/podman/catatonit /home/user.local/.local/share/containers/storage/libpod /run/user/1001/libpod/tmp -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin]  []   k8s.gcr.io/pause:3.1 /pause true true  2048  journald  ctrl-p,ctrl-q false false} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/user.local/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/user.local/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1001                
DEBU[0000] Using static dir /home/user.local/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1001/libpod/tmp      
DEBU[0000] Using volume path /home/user.local/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] No store required. Not opening container store. 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
INFO[0000] running as rootless                          
DEBU[0000] Reading configuration file "/home/user.local/.config/containers/libpod.conf" 
DEBU[0000] Merged system config "/home/user.local/.config/containers/libpod.conf": &{{false false false true true true} 0 {   [] [] []} /home/user.local/.local/share/containers/storage/volumes docker://  /usr/bin/crun map[runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc]] [] [] [] [/usr/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] cgroupfs /usr/libexec/podman/catatonit /home/user.local/.local/share/containers/storage/libpod /run/user/1001/libpod/tmp -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin]  []   k8s.gcr.io/pause:3.1 /pause true true  2048  journald  ctrl-p,ctrl-q false false} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/user.local/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/user.local/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1001                
DEBU[0000] Using static dir /home/user.local/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1001/libpod/tmp      
DEBU[0000] Using volume path /home/user.local/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] No store required. Not opening container store. 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] Handling terminal attach                     
DEBU[0000] Creating new exec session in container 6cf216f02c8f82b3c370c80d54202a4f4b405068a1c1e731ebbfe026821651e0 with session id c339639645cd073d75843afc5ca64e0decc13223672399f3585ec9dc7e1688f3 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 6cf216f02c8f82b3c370c80d54202a4f4b405068a1c1e731ebbfe026821651e0 -u c339639645cd073d75843afc5ca64e0decc13223672399f3585ec9dc7e1688f3 -r /usr/bin/crun -b /home/user.local/.local/share/containers/storage/overlay-containers/6cf216f02c8f82b3c370c80d54202a4f4b405068a1c1e731ebbfe026821651e0/userdata/c339639645cd073d75843afc5ca64e0decc13223672399f3585ec9dc7e1688f3 -p /home/user.local/.local/share/containers/storage/overlay-containers/6cf216f02c8f82b3c370c80d54202a4f4b405068a1c1e731ebbfe026821651e0/userdata/c339639645cd073d75843afc5ca64e0decc13223672399f3585ec9dc7e1688f3/exec_pid -l k8s-file:/home/user.local/.local/share/containers/storage/overlay-containers/6cf216f02c8f82b3c370c80d54202a4f4b405068a1c1e731ebbfe026821651e0/userdata/c339639645cd073d75843afc5ca64e0decc13223672399f3585ec9dc7e1688f3/exec_log --exit-dir /home/user.local/.local/share/containers/storage/overlay-containers/6cf216f02c8f82b3c370c80d54202a4f4b405068a1c1e731ebbfe026821651e0/userdata/c339639645cd073d75843afc5ca64e0decc13223672399f3585ec9dc7e1688f3/exit --socket-dir-path /run/user/1001/libpod/tmp/socket --log-level debug --syslog -t -i -e --exec-attach --exec-process-spec /home/user.local/.local/share/containers/storage/overlay-containers/6cf216f02c8f82b3c370c80d54202a4f4b405068a1c1e731ebbfe026821651e0/userdata/c339639645cd073d75843afc5ca64e0decc13223672399f3585ec9dc7e1688f3/exec-process-437292715]"
DEBU[0000] Attaching to container 6cf216f02c8f82b3c370c80d54202a4f4b405068a1c1e731ebbfe026821651e0 exec session c339639645cd073d75843afc5ca64e0decc13223672399f3585ec9dc7e1688f3 
DEBU[0000] connecting to socket /run/user/1001/libpod/tmp/socket/c339639645cd073d75843afc5ca64e0decc13223672399f3585ec9dc7e1688f3/attach 
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0000] Received: 0
DEBU[0000] Received a resize event: {Width:212 Height:44}
DEBU[0000] Received: 28854
[conmon:d]: exec with attach is waiting for start message from parent
[conmon:d]: exec with attach got start message from parent
DEBU[0000] Successfully started exec session c339639645cd073d75843afc5ca64e0decc13223672399f3585ec9dc7e1688f3 in container 6cf216f02c8f82b3c370c80d54202a4f4b405068a1c1e731ebbfe026821651e0
[root@6cf216f02c8f elasticsearch]# curl http://localhost:9200/
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
[root@6cf216f02c8f elasticsearch]# curl --request POST \
>   --url 'http://localhost:8000/api/document/' \
>   --header 'authorization: Bearer abcd1234.efgh5678.ijkl1234-mnop5678' \
>   --header 'content-type: application/json' \
>   --data '{
>   "id": "x330932146"
> }'

Note that inside the containers everything works as expected and the containers can communicate with each other. In the example above, I connected to my Django container from within the Elasticsearch container.
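That behaviour is consistent with the pod sharing one network namespace ("sharesNet": true in the inspect output), so containers always reach each other on localhost; only the host-side forwarding is broken. A minimal host-side probe (a sketch; 9200 is the Elasticsearch port from this setup):

```shell
# From the host, only the forwarded host ports apply. Any HTTP status
# (even 401) proves forwarding works; a timeout/refusal means it doesn't.
if command -v curl >/dev/null 2>&1; then
    curl -s -o /dev/null -w '%{http_code}\n' --max-time 2 http://127.0.0.1:9200/ \
        || echo "unreachable from the host"
else
    echo "curl not available"
fi
```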

@AkihiroSuda (Collaborator)

This needs podman --log-level debug run -it -p ****:**** ..., not podman exec.

@AkihiroSuda (Collaborator)

Also, ps auxw on the host, if possible.
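What to look for in that listing (a sketch; these are the usual rootless helper process names):

```shell
# The rootless networking helpers run as ordinary host processes; if none
# of them show up, unreachable published ports would be no surprise.
ps auxw | grep -E 'slirp4netns|rootlessport|conmon' | grep -v grep \
    || echo "no rootless networking helpers found"
```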

@barseghyanartur (Author)

@AkihiroSuda:

$ ps auxw
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.2  0.0 171856 11772 ?        Ss   08:55   0:07 /usr/lib/systemd/systemd --switched-root --system --deserialize 30
root           2  0.0  0.0      0     0 ?        S    08:55   0:00 [kthreadd]
root           3  0.0  0.0      0     0 ?        I<   08:55   0:00 [rcu_gp]
root           4  0.0  0.0      0     0 ?        I<   08:55   0:00 [rcu_par_gp]
root           9  0.0  0.0      0     0 ?        I<   08:55   0:00 [mm_percpu_wq]
root          10  0.0  0.0      0     0 ?        S    08:55   0:00 [ksoftirqd/0]
root          11  0.1  0.0      0     0 ?        I    08:55   0:04 [rcu_sched]
root          12  0.0  0.0      0     0 ?        S    08:55   0:00 [migration/0]
root          13  0.0  0.0      0     0 ?        S    08:55   0:00 [cpuhp/0]
root          14  0.0  0.0      0     0 ?        S    08:55   0:00 [cpuhp/1]
root          15  0.0  0.0      0     0 ?        S    08:55   0:00 [migration/1]
root          16  0.0  0.0      0     0 ?        S    08:55   0:00 [ksoftirqd/1]
root          18  0.0  0.0      0     0 ?        I<   08:55   0:00 [kworker/1:0H-kblockd]
root          19  0.0  0.0      0     0 ?        S    08:55   0:00 [cpuhp/2]
root          20  0.0  0.0      0     0 ?        S    08:55   0:00 [migration/2]
root          21  0.0  0.0      0     0 ?        S    08:55   0:01 [ksoftirqd/2]
root          24  0.0  0.0      0     0 ?        S    08:55   0:00 [cpuhp/3]
root          25  0.0  0.0      0     0 ?        S    08:55   0:00 [migration/3]
root          26  0.0  0.0      0     0 ?        S    08:55   0:00 [ksoftirqd/3]
root          28  0.0  0.0      0     0 ?        I<   08:55   0:00 [kworker/3:0H-events_highpri]
root          29  0.0  0.0      0     0 ?        S    08:55   0:00 [cpuhp/4]
root          30  0.0  0.0      0     0 ?        S    08:55   0:00 [migration/4]
root          31  0.0  0.0      0     0 ?        S    08:55   0:00 [ksoftirqd/4]
root          34  0.0  0.0      0     0 ?        S    08:55   0:00 [cpuhp/5]
root          35  0.0  0.0      0     0 ?        S    08:55   0:00 [migration/5]
root          36  0.0  0.0      0     0 ?        S    08:55   0:00 [ksoftirqd/5]
root          38  0.0  0.0      0     0 ?        I<   08:55   0:00 [kworker/5:0H-kblockd]
root          39  0.0  0.0      0     0 ?        S    08:55   0:00 [cpuhp/6]
root          40  0.0  0.0      0     0 ?        S    08:55   0:00 [migration/6]
root          41  0.0  0.0      0     0 ?        S    08:55   0:00 [ksoftirqd/6]
root          43  0.0  0.0      0     0 ?        I<   08:55   0:00 [kworker/6:0H-events_highpri]
root          44  0.0  0.0      0     0 ?        S    08:55   0:00 [cpuhp/7]
root          45  0.0  0.0      0     0 ?        S    08:55   0:00 [migration/7]
root          46  0.0  0.0      0     0 ?        S    08:55   0:00 [ksoftirqd/7]
root          49  0.0  0.0      0     0 ?        S    08:55   0:00 [kdevtmpfs]
root          50  0.0  0.0      0     0 ?        I<   08:55   0:00 [netns]
root          51  0.0  0.0      0     0 ?        S    08:55   0:00 [rcu_tasks_kthre]
root          52  0.0  0.0      0     0 ?        S    08:55   0:00 [kauditd]
root          53  0.0  0.0      0     0 ?        S    08:55   0:00 [oom_reaper]
root          54  0.0  0.0      0     0 ?        I<   08:55   0:00 [writeback]
root          55  0.0  0.0      0     0 ?        S    08:55   0:00 [kcompactd0]
root          56  0.0  0.0      0     0 ?        SN   08:55   0:00 [ksmd]
root          57  0.0  0.0      0     0 ?        SN   08:55   0:00 [khugepaged]
root          85  0.0  0.0      0     0 ?        I<   08:55   0:00 [cryptd]
root         168  0.0  0.0      0     0 ?        I<   08:55   0:00 [kintegrityd]
root         169  0.0  0.0      0     0 ?        I<   08:55   0:00 [kblockd]
root         170  0.0  0.0      0     0 ?        I<   08:55   0:00 [blkcg_punt_bio]
root         172  0.0  0.0      0     0 ?        I<   08:55   0:00 [tpm_dev_wq]
root         173  0.0  0.0      0     0 ?        I<   08:55   0:00 [ata_sff]
root         174  0.0  0.0      0     0 ?        I<   08:55   0:00 [md]
root         175  0.0  0.0      0     0 ?        I<   08:55   0:00 [edac-poller]
root         176  0.0  0.0      0     0 ?        I<   08:55   0:00 [devfreq_wq]
root         178  0.0  0.0      0     0 ?        S    08:55   0:00 [watchdogd]
root         179  0.5  0.0      0     0 ?        S    08:55   0:21 [kswapd0]
root         182  0.0  0.0      0     0 ?        I<   08:55   0:00 [kthrotld]
root         183  0.0  0.0      0     0 ?        I<   08:55   0:00 [acpi_thermal_pm]
root         184  0.0  0.0      0     0 ?        S    08:55   0:00 [scsi_eh_0]
root         185  0.0  0.0      0     0 ?        I<   08:55   0:00 [scsi_tmf_0]
root         186  0.0  0.0      0     0 ?        S    08:55   0:00 [scsi_eh_1]
root         187  0.0  0.0      0     0 ?        I<   08:55   0:00 [scsi_tmf_1]
root         190  0.0  0.0      0     0 ?        I<   08:55   0:00 [dm_bufio_cache]
root         192  0.0  0.0      0     0 ?        I<   08:55   0:00 [ipv6_addrconf]
root         200  0.0  0.0      0     0 ?        I<   08:55   0:00 [kstrp]
root         262  0.0  0.0      0     0 ?        I<   08:55   0:00 [kworker/u17:0-hci0]
root         278  0.0  0.0      0     0 ?        I<   08:55   0:00 [kworker/1:1H-events_highpri]
root         492  0.0  0.0      0     0 ?        S    08:55   0:00 [irq/51-DELL0808]
root         527  0.0  0.0      0     0 ?        I<   08:55   0:00 [bcache]
root         528  0.0  0.0      0     0 ?        I<   08:55   0:00 [bch_journal]
root         538  0.0  0.0      0     0 ?        I<   08:55   0:00 [bioset]
root         542  0.0  0.0      0     0 ?        S    08:55   0:00 [bcache_status_u]
root         750  0.0  0.0      0     0 ?        I<   08:55   0:00 [kdmflush]
root         755  0.0  0.0      0     0 ?        I<   08:55   0:00 [kcryptd_io/253:]
root         756  0.0  0.0      0     0 ?        I<   08:55   0:00 [kcryptd/253:0]
root         757  0.1  0.0      0     0 ?        S    08:55   0:04 [dmcrypt_write/2]
root         827  0.0  0.0      0     0 ?        I<   08:55   0:00 [kdmflush]
root         835  0.0  0.0      0     0 ?        I<   08:55   0:00 [kdmflush]
root         851  0.2  0.0      0     0 ?        I    08:55   0:11 [kworker/u16:5-kcryptd/253:0]
root         852  0.2  0.0      0     0 ?        I    08:55   0:11 [kworker/u16:6-events_unbound]
root         853  0.0  0.0      0     0 ?        I<   08:55   0:00 [kworker/6:1H-events_highpri]
root         854  0.0  0.0      0     0 ?        S    08:55   0:01 [jbd2/dm-1-8]
root         855  0.0  0.0      0     0 ?        I<   08:55   0:00 [ext4-rsv-conver]
root         917  0.2  0.0      0     0 ?        I    08:55   0:11 [kworker/u16:7-flush-253:1]
root         919  0.0  0.0      0     0 ?        I<   08:55   0:00 [kworker/4:1H-events_highpri]
root         921  0.0  0.0      0     0 ?        I<   08:55   0:00 [kworker/7:1H-events_highpri]
root         961  0.0  0.2 119820 44308 ?        Ss   08:55   0:01 /usr/lib/systemd/systemd-journald
root         988  0.0  0.0  35412 11128 ?        Ss   08:55   0:01 /usr/lib/systemd/systemd-udevd
root        1106  0.0  0.0      0     0 ?        S    08:55   0:00 [irq/16-mei_me]
root        1128  0.0  0.0      0     0 ?        I<   08:55   0:00 [cfg80211]
root        1136  0.0  0.0      0     0 ?        I<   08:55   0:00 [kworker/u17:2-hci0]
root        1159  0.0  0.0      0     0 ?        I<   08:55   0:00 [ath10k_wq]
root        1160  0.0  0.0      0     0 ?        I<   08:55   0:00 [ath10k_aux_wq]
root        1206  0.0  0.0      0     0 ?        S<   08:55   0:00 [loop0]
root        1213  0.0  0.0      0     0 ?        S<   08:55   0:00 [loop1]
root        1220  0.0  0.0      0     0 ?        S    08:55   0:00 [jbd2/sda1-8]
root        1221  0.0  0.0      0     0 ?        I<   08:55   0:00 [ext4-rsv-conver]
root        1224  0.0  0.0      0     0 ?        S<   08:55   0:00 [loop2]
root        1227  0.0  0.0      0     0 ?        S<   08:55   0:00 [loop3]
root        1228  0.0  0.0      0     0 ?        S<   08:55   0:00 [loop4]
root        1229  0.0  0.0      0     0 ?        S<   08:55   0:00 [loop5]
root        1230  0.0  0.0      0     0 ?        S<   08:55   0:00 [loop6]
root        1231  0.0  0.0      0     0 ?        S<   08:55   0:00 [loop7]
root        1232  0.0  0.0      0     0 ?        S<   08:55   0:00 [loop8]
root        1233  0.0  0.0      0     0 ?        S<   08:55   0:00 [loop9]
root        1240  0.0  0.0  90448  1948 ?        S<sl 08:55   0:00 /sbin/auditd
root        1242  0.0  0.0   6552  3048 ?        S<   08:55   0:00 /usr/sbin/sedispatch
root        1253  0.0  0.0      0     0 ?        I<   08:55   0:00 [rpciod]
root        1254  0.0  0.0      0     0 ?        I<   08:55   0:00 [xprtiod]
root        1263  0.0  0.0 313480  9044 ?        Ssl  08:55   0:00 /usr/sbin/ModemManager
root        1264  0.0  0.0 445740  6912 ?        Ssl  08:55   0:00 /usr/libexec/accounts-daemon
avahi       1265  0.0  0.0   6876  4044 ?        Ss   08:55   0:03 avahi-daemon: running [linux-4.local]
root        1266  0.0  0.0   8404  4788 ?        Ss   08:55   0:00 /usr/libexec/bluetooth/bluetoothd
root        1269  0.0  0.2 335284 34300 ?        Ssl  08:55   0:01 /usr/bin/python3 /usr/sbin/firewalld --nofork --nopid
root        1272  0.0  0.0 231788  6292 ?        Ssl  08:55   0:00 /usr/sbin/iio-sensor-proxy
root        1275  0.0  0.0   3032  1912 ?        Ss   08:55   0:00 /usr/sbin/mcelog --ignorenodev --daemon --foreground
root        1277  0.4  0.0 312032  5992 ?        Ssl  08:55   0:17 /sbin/rngd -f
rtkit       1280  0.0  0.0 152920  3168 ?        SNsl 08:55   0:00 /usr/libexec/rtkit-daemon
root        1284  0.0  0.0 442856  5508 ?        Ssl  08:55   0:00 /usr/libexec/switcheroo-control
chrony      1289  0.0  0.0  78780  2700 ?        S    08:55   0:00 /usr/sbin/chronyd
root        1293  0.0  0.0  17480  5624 ?        Ss   08:55   0:00 /usr/lib/systemd/systemd-machined
dbus        1303  0.0  0.0 268880  3312 ?        Ss   08:55   0:00 /usr/bin/dbus-broker-launch --scope system --audit
root        1334  0.0  0.0 465636 11628 ?        Ssl  08:55   0:00 /usr/sbin/abrtd -d -s
root        1338  0.0  0.0 263832  2996 ?        Ssl  08:55   0:00 /usr/sbin/gssproxy -D
dbus        1353  0.0  0.0  10012  7216 ?        S    08:55   0:01 dbus-broker --log 4 --controller 9 --machine-id abcd1234 --max-bytes 536870912 --max-fds 4096 --max-matches 131072 --audi
root        1372  0.0  0.0   5152  2672 ?        SNs  08:55   0:00 /usr/sbin/alsactl -s -n 19 -c -E ALSA_CONFIG_PATH=/etc/alsa/alsactl.conf --initfile=/lib/alsa/init/00main rdaemon
avahi       1374  0.0  0.0   6428   200 ?        S    08:55   0:00 avahi-daemon: chroot helper
root        1377  0.0  0.0 948992 13560 ?        Ss   08:55   0:00 /usr/bin/abrt-dump-journal-core -D -T -f -e
root        1378  0.0  0.1 989548 16292 ?        Ss   08:55   0:00 /usr/bin/abrt-dump-journal-oops -fxtD
root        1382  0.0  0.1 1063680 17808 ?       Ss   08:55   0:00 /usr/bin/abrt-dump-journal-xorg -fxtD
polkitd     1384  0.0  0.1 1929524 18364 ?       Ssl  08:55   0:01 /usr/lib/polkit-1/polkitd --no-debug
root        1425  0.0  0.0  19668  8292 ?        Ss   08:55   0:00 /usr/lib/systemd/systemd-logind
root        1464  0.0  0.1 949660 17208 ?        Ssl  08:55   0:03 /usr/sbin/NetworkManager --no-daemon
root        1496  0.0  0.0 239384  9568 ?        Ss   08:55   0:00 /usr/sbin/cupsd -l
root        1497  0.0  0.0 925676  5252 ?        Ssl  08:55   0:00 /usr/libexec/docker/docker-containerd-current --listen unix:///run/containerd.sock --shim /usr/libexec/docker/docker-containerd-shim-current --st
root        1515  0.0  0.0   3680  2092 ?        Ss   08:55   0:00 /usr/sbin/atd -f
root        1520  0.0  0.0 217468  2768 ?        Ss   08:55   0:00 /usr/sbin/crond -n
root        1522  0.0  0.0 446432  6932 ?        Ssl  08:55   0:00 /usr/sbin/gdm
root        1549  0.0  0.0 378440  8712 ?        Sl   08:55   0:00 gdm-session-worker [pam/gdm-launch-environment]
colord      1556  0.0  0.0 450776 10484 ?        Ssl  08:55   0:00 /usr/libexec/colord
user.l+    1559  0.0  0.0  20504 10052 ?        Ss   08:55   0:00 /usr/lib/systemd/systemd --user
user.l+    1602  0.0  0.0  27380  3220 ?        S    08:55   0:00 (sd-pam)
gdm         1634  0.0  0.0  20248  9696 ?        Ss   08:55   0:00 /usr/lib/systemd/systemd --user
gdm         1648  0.0  0.0  27380  2964 ?        S    08:55   0:00 (sd-pam)
gdm         1750  0.0  0.0 371244  5584 tty1     Ssl+ 08:55   0:00 /usr/libexec/gdm-x-session gnome-session --autostart /usr/share/gdm/greeter/autostart
gdm         1755  0.0  0.1 278800 20692 tty1     Sl+  08:55   0:00 /usr/libexec/Xorg vt1 -displayfd 3 -auth /run/user/42/gdm/Xauthority -background none -noreset -keeptty -verbose 3
root        1852  0.0  0.0  12608  7980 ?        Ss   08:55   0:00 /usr/sbin/wpa_supplicant -c /etc/wpa_supplicant/wpa_supplicant.conf -u -s
dnsmasq     2033  0.0  0.0   8496  1588 ?        S    08:55   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
root        2034  0.0  0.0   8392   324 ?        S    08:55   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
gdm         2120  0.0  0.0 268456  3336 ?        Ss   08:55   0:00 /usr/bin/dbus-broker-launch --scope user
gdm         2121  0.0  0.0   5836  3264 ?        S    08:55   0:00 dbus-broker --log 4 --controller 10 --machine-id abcd1234 --max-bytes 100000000000000 --max-fds 25000000000000 --max-matc
gdm         2123  0.0  0.0 466992 11848 tty1     Sl+  08:55   0:00 /usr/libexec/gnome-session-binary --autostart /usr/share/gdm/greeter/autostart
gdm         2138  0.0  0.0 305320  6056 ?        Ssl  08:55   0:00 /usr/libexec/at-spi-bus-launcher
gdm         2144  0.0  0.0 268376  3372 ?        S    08:55   0:00 /usr/bin/dbus-broker-launch --config-file=/usr/share/defaults/at-spi2/accessibility.conf --scope user
gdm         2147  0.0  0.0   5096  2900 ?        S    08:55   0:00 dbus-broker --log 4 --controller 9 --machine-id abcd1234 --max-bytes 100000000000000 --max-fds 6400000 --max-matches 5000
gdm         2158  0.0  0.0 297288  3836 ?        Ssl  08:55   0:00 /usr/libexec/gnome-session-ctl --monitor
gdm         2159  0.0  0.0 968260  9696 ?        S<sl 08:55   0:00 /usr/bin/pulseaudio --daemonize=no
gdm         2161  0.0  0.0 688452 12200 ?        Ssl  08:55   0:00 /usr/libexec/gnome-session-binary --systemd-service --session=gnome-login
gdm         2168  0.0  0.9 2879504 157640 ?      Ssl  08:55   0:03 /usr/bin/gnome-shell
gdm         2265  0.0  0.0 442848  4292 ?        Ssl  08:55   0:00 /usr/libexec/xdg-permission-store
root        2270  0.0  0.0 455964  8032 ?        Ssl  08:55   0:00 /usr/libexec/upowerd
gdm         2286  0.0  0.0 592576  8328 ?        Sl   08:55   0:00 ibus-daemon --panel disable -r --xim
gdm         2291  0.0  0.0 443816  6304 ?        Sl   08:55   0:00 /usr/libexec/ibus-dconf
gdm         2293  0.0  0.1 396060 18656 ?        Sl   08:55   0:00 /usr/libexec/ibus-x11 --kill-daemon
gdm         2299  0.0  0.0 443552  5388 ?        Ssl  08:55   0:00 /usr/libexec/ibus-portal
gdm         2309  0.0  0.0 156036  5420 ?        Ssl  08:55   0:00 /usr/libexec/dconf-service
gdm         2310  0.0  0.0 160784  6168 ?        Ssl  08:55   0:00 /usr/libexec/at-spi2-registryd --use-gnome-session
gdm         2343  0.0  0.0 518296  6032 ?        Ssl  08:55   0:00 /usr/libexec/gsd-a11y-settings
gdm         2344  0.0  0.1 786620 19076 ?        Ssl  08:55   0:00 /usr/libexec/gsd-color
gdm         2345  0.0  0.1 543432 16644 ?        Ssl  08:55   0:00 /usr/libexec/gsd-keyboard
gdm         2347  0.0  0.1 1032320 19476 ?       Ssl  08:55   0:00 /usr/libexec/gsd-media-keys
gdm         2349  0.0  0.1 618504 18172 ?        Ssl  08:55   0:00 /usr/libexec/gsd-power
gdm         2351  0.0  0.0 458588 11068 ?        Ssl  08:55   0:00 /usr/libexec/gsd-print-notifications
gdm         2352  0.0  0.0 665660  5804 ?        Ssl  08:55   0:00 /usr/libexec/gsd-rfkill
gdm         2357  0.0  0.0 597952  9984 ?        Ssl  08:55   0:00 /usr/libexec/gsd-smartcard
gdm         2359  0.0  0.0 526152  8296 ?        Ssl  08:55   0:00 /usr/libexec/gsd-sound
gdm         2366  0.0  0.1 543576 16852 ?        Ssl  08:55   0:00 /usr/libexec/gsd-wacom
gdm         2375  0.0  0.0 522648  7060 ?        Ssl  08:55   0:00 /usr/libexec/gsd-wwan
gdm         2376  0.0  0.1 544728 18116 ?        Ssl  08:55   0:00 /usr/libexec/gsd-xsettings
gdm         2431  0.0  0.0 369876  6232 ?        Sl   08:55   0:00 /usr/libexec/ibus-engine-simple
gdm         2462  0.0  0.0 549532 13996 ?        Sl   08:55   0:00 /usr/libexec/gsd-printer
root        2509  0.0  0.0 526420 10552 ?        Sl   08:55   0:00 gdm-session-worker [pam/gdm-password]
user.l+    2521  0.0  0.0 594896  6404 ?        Sl   08:55   0:00 /usr/bin/gnome-keyring-daemon --daemonize --login
user.l+    2539  0.0  0.0 371244  5984 tty2     Ssl+ 08:55   0:00 /usr/libexec/gdm-x-session --run-script /usr/bin/gnome-session
user.l+    2541 19.2  1.0 481876 174400 tty2    Rl+  08:55  12:11 /usr/libexec/Xorg vt2 -displayfd 3 -auth /run/user/1001/gdm/Xauthority -background none -noreset -keeptty -verbose 3
user.l+    2546  0.0  0.0 268588  3632 ?        Ss   08:55   0:00 /usr/bin/dbus-broker-launch --scope user
user.l+    2547  0.0  0.0   7328  4888 ?        S    08:55   0:00 dbus-broker --log 4 --controller 10 --machine-id abcd1234 --max-bytes 100000000000000 --max-fds 25000000000000 --max-matc
user.l+    2549  0.0  0.0 467024 12052 tty2     Sl+  08:55   0:00 /usr/libexec/gnome-session-binary
user.l+    2565  0.0  0.0   7148   528 ?        Ss   08:55   0:00 /usr/bin/ssh-agent /bin/sh -c exec -l /bin/bash -c "/usr/bin/gnome-session"
user.l+    2637  0.0  0.0 305340  5980 ?        Ssl  08:55   0:00 /usr/libexec/at-spi-bus-launcher
user.l+    2642  0.0  0.0 268376  3244 ?        S    08:55   0:00 /usr/bin/dbus-broker-launch --config-file=/usr/share/defaults/at-spi2/accessibility.conf --scope user
user.l+    2643  0.0  0.0   5332  2924 ?        S    08:55   0:00 dbus-broker --log 4 --controller 9 --machine-id abcd1234 --max-bytes 100000000000000 --max-fds 6400000 --max-matches 5000
user.l+    2649  0.0  0.0 297288  3808 ?        Ssl  08:55   0:00 /usr/libexec/gnome-session-ctl --monitor
user.l+    2650  0.0  0.0 2279892 11592 ?       S<sl 08:55   0:00 /usr/bin/pulseaudio --daemonize=no
user.l+    2652  0.0  0.0 893820 12760 ?        Ssl  08:55   0:00 /usr/libexec/gnome-session-binary --systemd-service --session=gnome
user.l+    2666  5.4  1.0 3561588 171120 ?      Ssl  08:55   3:26 /usr/bin/gnome-shell
root        2725  0.0  0.0      0     0 ?        S<   08:55   0:00 [krfcommd]
user.l+    2734  0.0  0.0 446868  6684 ?        Ssl  08:55   0:00 /usr/libexec/gvfsd
user.l+    2739  0.0  0.0 379976  5668 ?        Sl   08:55   0:00 /usr/libexec/gvfsd-fuse /run/user/1001/gvfs -f
user.l+    2760  0.0  0.0 160784  6220 ?        Ssl  08:55   0:01 /usr/libexec/at-spi2-registryd --use-gnome-session
user.l+    2763  0.0  0.0 442852  4308 ?        Ssl  08:55   0:00 /usr/libexec/xdg-permission-store
user.l+    2767  0.0  0.1 625284 27340 ?        Ssl  08:55   0:00 /usr/libexec/gnome-shell-calendar-server
user.l+    2771  0.0  0.2 1375340 36736 ?       Ssl  08:55   0:00 /usr/libexec/evolution-source-registry
user.l+    2782  0.0  0.0 156428  5776 ?        Ssl  08:55   0:00 /usr/libexec/dconf-service
user.l+    2785  0.0  0.0 525884  9372 ?        Ssl  08:55   0:00 /usr/libexec/gvfs-udisks2-volume-monitor
root        2788  0.0  0.0 393608 14036 ?        Ssl  08:55   0:00 /usr/libexec/udisks2/udisksd
user.l+    2803  0.0  0.0 520244  6980 ?        Ssl  08:55   0:00 /usr/libexec/gvfs-afc-volume-monitor
user.l+    2808  0.0  0.0 442952  5192 ?        Ssl  08:55   0:00 /usr/libexec/gvfs-mtp-volume-monitor
user.l+    2813  0.0  0.1 892184 28988 ?        SLsl 08:55   0:00 /usr/libexec/goa-daemon
user.l+    2814  0.0  0.0 445348  5960 ?        Ssl  08:55   0:00 /usr/libexec/gvfs-gphoto2-volume-monitor
user.l+    2818  0.0  0.0 443656  6700 ?        Ssl  08:55   0:00 /usr/libexec/gvfs-goa-volume-monitor
user.l+    2828  0.0  0.0 369708  6088 ?        Ssl  08:55   0:00 /usr/libexec/gvfsd-metadata
user.l+    2829  0.0  0.0 524236  6800 ?        Ssl  08:55   0:00 /usr/libexec/goa-identity-service
user.l+    2840  0.3  0.0 519124  8500 ?        Sl   08:55   0:12 ibus-daemon --panel disable -r --xim
user.l+    2845  0.0  0.0 443820  6464 ?        Sl   08:55   0:00 /usr/libexec/ibus-dconf
user.l+    2846  0.0  0.1 474444 23448 ?        Sl   08:55   0:03 /usr/libexec/ibus-extension-gtk3
user.l+    2848  0.0  0.1 396156 18448 ?        Sl   08:55   0:00 /usr/libexec/ibus-x11 --kill-daemon
user.l+    2849  0.0  0.0 443552  6372 ?        Ssl  08:55   0:00 /usr/libexec/ibus-portal
user.l+    2871  0.0  0.0 518308  6080 ?        Ssl  08:55   0:00 /usr/libexec/gsd-a11y-settings
user.l+    2872  0.0  0.1 786672 19848 ?        Ssl  08:55   0:00 /usr/libexec/gsd-color
user.l+    2873  0.0  0.0 549396 12596 ?        Ssl  08:55   0:00 /usr/libexec/gsd-datetime
user.l+    2875  0.0  0.0 519268  6992 ?        Ssl  08:55   0:00 /usr/libexec/gsd-housekeeping
user.l+    2877  0.0  0.1 543544 17088 ?        Ssl  08:55   0:00 /usr/libexec/gsd-keyboard
user.l+    2879  0.0  0.1 1097884 20032 ?       Ssl  08:55   0:00 /usr/libexec/gsd-media-keys
user.l+    2885  0.0  0.1 618576 18376 ?        Ssl  08:55   0:00 /usr/libexec/gsd-power
user.l+    2886  0.0  0.0 458588 10668 ?        Ssl  08:55   0:00 /usr/libexec/gsd-print-notifications
user.l+    2889  0.0  0.0 665660  5796 ?        Ssl  08:55   0:00 /usr/libexec/gsd-rfkill
user.l+    2891  0.0  0.0 444068  5556 ?        Ssl  08:55   0:00 /usr/libexec/gsd-screensaver-proxy
user.l+    2892  0.0  0.0 673728  9868 ?        Ssl  08:55   0:00 /usr/libexec/gsd-sharing
user.l+    2894  0.0  0.0 598428 10512 ?        Ssl  08:55   0:00 /usr/libexec/gsd-smartcard
user.l+    2898  0.0  0.0 526164  8208 ?        Ssl  08:55   0:00 /usr/libexec/gsd-sound
user.l+    2900  0.0  0.1 469876 16872 ?        Ssl  08:55   0:00 /usr/libexec/gsd-wacom
user.l+    2905  0.0  0.0 522660  6900 ?        Ssl  08:55   0:00 /usr/libexec/gsd-wwan
user.l+    2908  0.0  0.1 544784 17896 ?        Ssl  08:55   0:00 /usr/libexec/gsd-xsettings
user.l+    2915  0.0  0.0 231836  5576 ?        Sl   08:55   0:00 /usr/libexec/gsd-disk-utility-notify
user.l+    2920  0.0  0.1 940144 23824 ?        SNl  08:55   0:00 /usr/libexec/tracker-miner-fs
user.l+    2926  0.1  0.5 1486596 82952 ?       SLl  08:55   0:05 /usr/bin/nextcloud
user.l+    2928  0.0  0.2 1907264 37488 ?       Ssl  08:55   0:00 /usr/libexec/evolution-calendar-factory
user.l+    2932  0.0  0.0 605404 10124 ?        Sl   08:55   0:00 /usr/libexec/deja-dup/deja-dup-monitor
user.l+    2938  0.0  0.2 1129232 33556 ?       Sl   08:55   0:00 /usr/libexec/evolution-data-server/evolution-alarm-notify
user.l+    2956  0.3  0.6 1135152 99780 ?       Sl   08:55   0:11 /usr/libexec/mattermost-desktop-4.3.2/mattermost-desktop --hidden
user.l+    2958  0.0  0.1 410612 16336 ?        Ssl  08:55   0:00 /usr/bin/abrt-applet --gapplication-service
user.l+    2968  0.3  1.0 1194596 165316 ?      Sl   08:55   0:15 /usr/bin/gnome-software --gapplication-service
user.l+    3032  0.0  0.0 369876  6344 ?        Sl   08:55   0:03 /usr/libexec/ibus-engine-simple
user.l+    3072  0.0  0.0 549532 14120 ?        Sl   08:55   0:00 /usr/libexec/gsd-printer
user.l+    3090  0.0  0.2 1079428 35688 ?       Ssl  08:55   0:00 /usr/libexec/evolution-addressbook-factory
root        3198  0.0  0.0 466328 13276 ?        Ssl  08:55   0:00 /usr/sbin/abrt-dbus -t133
user.l+    3242  0.0  0.1 401192 21380 ?        S    08:55   0:00 /usr/libexec/mattermost-desktop-4.3.2/mattermost-desktop --type=zygote
user.l+    3250  0.0  0.0 401192  6088 ?        S    08:55   0:00 /usr/libexec/mattermost-desktop-4.3.2/mattermost-desktop --type=zygote
user.l+    3371  1.3  0.2 502432 41724 ?        Sl   08:55   0:51 /usr/libexec/mattermost-desktop-4.3.2/mattermost-desktop --type=gpu-process --disable-features=SpareRendererForSitePerProcess --gpu-preferences=K
user.l+    3380  0.0  0.3 732708 59272 ?        Sl   08:55   0:01 /usr/libexec/mattermost-desktop-4.3.2/mattermost-desktop --type=renderer --disable-features=SpareRendererForSitePerProcess --service-pipe-token=8
user.l+    3426  3.3  1.7 3736304 275108 ?      Sl   08:55   2:06 /usr/lib64/firefox/firefox --new-window
user.l+    3451  5.8  1.0 872940 173872 ?       Sl   08:55   3:40 /usr/libexec/mattermost-desktop-4.3.2/mattermost-desktop --type=renderer --disable-features=SpareRendererForSitePerProcess --service-pipe-token=1
root        3479  0.0  0.1 555940 17800 ?        Ssl  08:55   0:00 /usr/libexec/fwupd/fwupd
user.l+    3619  0.1  0.6 2653392 97388 ?       Sl   08:55   0:06 /usr/lib64/firefox/firefox -contentproc -childID 1 -isForBrowser -prefsLen 1 -prefMapSize 214788 -parentBuildID 20200205113926 -greomni /usr/lib6
user.l+    3667  3.8  4.2 2028296 674476 ?      SLl  08:55   2:26 /usr/lib64/chromium-browser/chromium-browser --enable-plugins --enable-extensions --enable-user-scripts --enable-printing --enable-gpu-rasterizat
user.l+    3694  0.6  1.3 2837564 210280 ?      Sl   08:55   0:25 /usr/lib64/firefox/firefox -contentproc -childID 2 -isForBrowser -prefsLen 241 -prefMapSize 214788 -parentBuildID 20200205113926 -greomni /usr/li
user.l+    3701  0.1  0.8 2712544 133892 ?      Sl   08:55   0:07 /usr/lib64/firefox/firefox -contentproc -childID 3 -isForBrowser -prefsLen 241 -prefMapSize 214788 -parentBuildID 20200205113926 -greomni /usr/li
user.l+    3707  0.3  1.2 2817884 204240 ?      Sl   08:55   0:14 /usr/lib64/firefox/firefox -contentproc -childID 4 -isForBrowser -prefsLen 241 -prefMapSize 214788 -parentBuildID 20200205113926 -greomni /usr/li
user.l+    3729  1.3  0.7 2729416 124868 ?      Sl   08:55   0:51 /usr/lib64/firefox/firefox -contentproc -childID 5 -isForBrowser -prefsLen 241 -prefMapSize 214788 -parentBuildID 20200205113926 -greomni /usr/li
user.l+    3842  0.3  0.9 2785420 145504 ?      Sl   08:55   0:13 /usr/lib64/firefox/firefox -contentproc -childID 6 -isForBrowser -prefsLen 6202 -prefMapSize 214788 -parentBuildID 20200205113926 -greomni /usr/l
user.l+    3896  0.0  0.4 859168 70012 ?        Sl   08:55   0:03 krusader -qwindowtitle Krusader
user.l+    3949  0.0  0.4 572780 64812 ?        S    08:55   0:01 /usr/lib64/chromium-browser/chromium-browser --type=zygote
user.l+    3997  0.0  0.1 572780 19312 ?        S    08:55   0:00 /usr/lib64/chromium-browser/chromium-browser --type=zygote
user.l+    4040  1.3  0.9 756180 155008 ?       Sl   08:55   0:51 /usr/lib64/chromium-browser/chromium-browser --type=gpu-process --field-trial-handle=1234567890,15684897155523680431,131072 --enable-gpu
user.l+    4042  0.0  0.0 280048  5512 ?        Ss   08:55   0:00 kdeinit5: Running...
user.l+    4043  0.0  0.1 690756 27252 ?        Sl   08:55   0:00 /usr/libexec/kf5/klauncher --fd=8
user.l+    4046  0.6  0.6 716008 103164 ?       SLl  08:55   0:24 /usr/lib64/chromium-browser/chromium-browser --type=utility --field-trial-handle=1234567890,15684897155523680431,131072 --lang=en-US --s
user.l+    4185  0.3  1.1 987520 177212 ?       Sl   08:56   0:11 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+    4187  0.0  0.7 865080 114008 ?       Sl   08:56   0:00 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+    4217  0.1  0.8 952796 138824 ?       Sl   08:56   0:04 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+    4232  0.0  0.6 863624 110740 ?       Sl   08:56   0:00 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+    4240  0.0  0.8 892504 129576 ?       Sl   08:56   0:02 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+    4260  0.1  0.8 1065088 139920 ?      Sl   08:56   0:04 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+    4261  0.0  0.6 864060 110824 ?       Sl   08:56   0:00 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+    4274  0.1  0.9 930540 150648 ?       Sl   08:56   0:04 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+    4287  0.1  0.8 899024 133336 ?       Sl   08:56   0:04 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+    4288  0.0  0.8 885060 134364 ?       Sl   08:56   0:03 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+    4311  0.0  0.6 858168 107100 ?       Sl   08:56   0:00 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+    4391  0.0  0.7 879992 121872 ?       Sl   08:56   0:01 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+    4589  1.4  1.1 964852 184488 ?       Sl   08:56   0:55 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+    4602  3.4  2.4 1231760 388452 ?      Sl   08:56   2:09 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+    4823  1.3  0.2 793116 40612 ?        Rsl  08:56   0:50 /usr/libexec/gnome-terminal-server
user.l+    4833  0.0  0.0 221872  3928 pts/0    Ss   08:56   0:00 bash
user.l+    4940  0.0  0.0 216640  2864 ?        S    08:56   0:00 /bin/sh /home/user.local/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/193.6494.30/bin/pycharm.sh
user.l+    4997  7.2  6.3 8199092 1021736 ?     Sl   08:56   4:28 /home/user.local/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/193.6494.30/jbr/bin/java -classpath /home/user.local/.local/share/JetBrains
user.l+    5050  0.4  1.1 1323264 188444 ?      Ssl  08:56   0:15 /opt/sublime_text/sublime_text
user.l+    5064  0.0  0.1 432956 29724 ?        Sl   08:56   0:00 /opt/sublime_text/plugin_host 5050 --auto-shell-env
root        5751  0.2  0.0      0     0 ?        I    09:01   0:09 [kworker/u16:0-events_unbound]
root        5755  0.0  0.0      0     0 ?        I    09:01   0:00 [kworker/3:0-events]
user.l+    6145  0.0  0.0   6556  2948 pts/0    S    09:03   0:00 /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 -c -e 3 -r 4 --netns-type=path /run/user/1001/netns/abc-26a02334-mymy-f948-5530-86e9cf8c
user.l+    6149  0.0  0.0   4096  1336 ?        Ss   09:03   0:00 /usr/bin/fuse-overlayfs -o lowerdir=/home/user.local/.local/share/containers/storage/overlay/l/6HASLV2HGWNFZQAN6TOZKNDD5S,upperdir=/home/user.l
user.l+    6178  0.0  0.0  77912  1776 ?        Ssl  09:03   0:00 /usr/bin/conmon --api-version 1 -c hijk5678 -u lmno1234
user.l+    6182  0.0  0.0   1020     4 ?        S    09:03   0:00 /pause
root        6311  0.0  0.0      0     0 ?        I<   09:03   0:00 [kworker/2:2H-kblockd]
root        6477  0.0  0.0      0     0 ?        I<   09:03   0:00 [dio/dm-1]
user.l+    7253  0.0  0.0 221872  8892 pts/1    Ss   09:04   0:00 bash
root       10324  0.2  0.0      0     0 ?        I    09:05   0:06 [kworker/u16:11-kcryptd/253:0]
user.l+   11565  0.0  0.1 706816 28976 ?        SLsl 09:06   0:00 /usr/bin/seahorse --gapplication-service
user.l+   11604  0.0  0.0 521392  7708 ?        Sl   09:06   0:00 /usr/libexec/gvfsd-trash --spawner :1.23 /org/gtk/gvfs/exec_spaw/1
user.l+   11637  0.0  0.0 438436  1072 ?        Ss   09:06   0:00 gpg-agent --homedir /home/user.local/.gnupg --use-standard-socket --daemon
user.l+   11737  0.0  0.0   7148  3676 ?        S    09:06   0:00 /usr/bin/ssh-agent -D -a /run/user/1001/keyring/.ssh
user.l+   12328  6.5  0.2 643796 39068 ?        Sl   09:07   3:25 gnome-system-monitor
root       12425  0.0  0.0      0     0 ?        I    09:07   0:00 [kworker/4:2-events]
user.l+   12533  0.0  0.0 222136  9236 pts/4    Ss+  09:07   0:00 bash
user.l+   15969  0.1  0.6 2638084 101620 ?      Sl   09:11   0:02 /usr/lib64/firefox/firefox -contentproc -childID 10 -isForBrowser -prefsLen 7309 -prefMapSize 214788 -parentBuildID 20200205113926 -greomni /usr/
user.l+   18374  0.0  0.2 2572160 47120 ?       Sl   09:14   0:00 /usr/lib64/firefox/firefox -contentproc -childID 11 -isForBrowser -prefsLen 7309 -prefMapSize 214788 -parentBuildID 20200205113926 -greomni /usr/
root       18818  0.0  0.0      0     0 ?        I<   09:15   0:00 [kworker/0:2H-kblockd]
user.l+   18997  0.0  0.2 748584 44624 ?        Sl   09:15   0:00 /usr/lib64/chromium-browser/chromium-browser --type=utility --field-trial-handle=1234567890,15684897155523680431,131072 --lang=en-US --s
root       21605  0.0  0.0      0     0 ?        I    09:18   0:00 [kworker/1:1-events]
root       26425  0.1  0.0      0     0 ?        I    09:23   0:02 [kworker/u16:2-kcryptd/253:0]
user.l+   26714  0.5  0.7 1880768 115244 ?      Sl   09:23   0:11 /snap/insomnia/55/insomnia
user.l+   26730  0.0  0.0 664676  6500 ?        Ssl  09:23   0:00 /usr/libexec/xdg-document-portal
user.l+   26782  0.0  0.1 363440 24408 ?        S    09:23   0:00 /snap/insomnia/55/insomnia --type=zygote --no-sandbox
user.l+   26802  0.0  0.0 637580 10652 ?        Ssl  09:23   0:00 /usr/bin/snap userd
root       26828  0.0  0.0      0     0 ?        I    09:23   0:01 [kworker/u16:12-kcryptd/253:0]
user.l+   26832  0.0  0.3 510508 49172 ?        Sl   09:23   0:01 /snap/insomnia/55/insomnia --type=gpu-process --no-sandbox --gpu-preferences=ABCD1234 --s
user.l+   26856  0.5  1.3 1902244 219176 ?      Sl   09:23   0:11 /snap/insomnia/55/insomnia --type=renderer --enable-experimental-web-platform-features --no-sandbox --service-pipe-token=HIJK1234
user.l+   27178  0.0  0.7 862520 111948 ?       Sl   09:24   0:00 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
root       27674  0.0  0.0      0     0 ?        I    09:26   0:00 [kworker/1:2-events]
root       27835  0.0  0.0      0     0 ?        I    09:26   0:00 [kworker/5:2-events]
root       30301  0.1  0.0      0     0 ?        I    09:35   0:01 [kworker/u16:1-kcryptd/253:0]
root       30576  0.0  0.0      0     0 ?        I    09:36   0:00 [kworker/4:0-events]
root       30870  0.0  0.0      0     0 ?        I    09:37   0:00 [kworker/7:2-events]
root       30902  0.0  0.0      0     0 ?        I<   09:37   0:00 [kworker/2:1H-events_highpri]
root       31517  0.0  0.0      0     0 ?        I    09:40   0:00 [kworker/6:0-events]
root       34035  0.0  0.0      0     0 ?        I<   09:41   0:00 [kworker/0:0H-events_highpri]
root       34241  0.1  0.0      0     0 ?        I    09:41   0:01 [kworker/u16:4-kcryptd/253:0]
root       34242  0.0  0.0      0     0 ?        I<   09:41   0:00 [kworker/4:2H-kblockd]
root       34274  0.0  0.0      0     0 ?        I<   09:41   0:00 [kworker/3:2H-kblockd]
root       34275  0.0  0.0      0     0 ?        I<   09:41   0:00 [kworker/5:2H-events_highpri]
root       34377  0.0  0.0      0     0 ?        I<   09:41   0:00 [kworker/7:2H-kblockd]
root       35767  0.0  0.0      0     0 ?        I    09:45   0:00 [kworker/0:1-events]
root       35983  0.0  0.0      0     0 ?        I    09:46   0:00 [kworker/2:2-events]
root       36342  0.0  0.0      0     0 ?        I    09:46   0:00 [kworker/5:1-events]
user.l+   37167  2.7  1.2 981996 201116 ?       Sl   09:48   0:17 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+   37186  0.0  0.7 867700 121412 ?       Sl   09:48   0:00 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+   37234  0.1  0.7 871928 126892 ?       Sl   09:48   0:00 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+   37269  0.0  0.1 605948 31004 ?        Sl   09:48   0:00 /usr/lib64/chromium-browser/chromium-browser --type=utility --field-trial-handle=1234567890,15684897155523680431,131072 --lang=en-US --s
user.l+   37865  0.0  0.4 843832 76552 ?        Sl   09:49   0:00 /usr/lib64/chromium-browser/chromium-browser --type=renderer --disable-webrtc-apm-in-audio-service --field-trial-handle=1234567890,15684
user.l+   38086  0.0  0.0   3972   324 ?        Ss   09:50   0:00 /usr/bin/fuse-overlayfs -o lowerdir=/home/user.local/.local/share/containers/storage/overlay/l/6WXCE7N3V6R4XUBKZU5Y7MUEZQ:/home/user.local/.loc
user.l+   38090  0.0  0.3 1124476 54236 ?       Sl   09:50   0:00 containers-rootlessport
user.l+   38103  0.0  0.0      0     0 ?        Z    09:50   0:00 [exe] <defunct>
root       38685  0.0  0.0      0     0 ?        I    09:52   0:00 [kworker/3:1-rcu_gp]
root       38706  0.0  0.0      0     0 ?        I    09:52   0:00 [kworker/6:1-events]
root       38839  0.0  0.0      0     0 ?        I    09:52   0:00 [kworker/2:1-events]
root       38987  0.0  0.0      0     0 ?        I    09:52   0:00 [kworker/0:2-events]
root       38988  0.0  0.0      0     0 ?        I    09:52   0:00 [kworker/7:1]
root       39275  0.0  0.0      0     0 ?        I    09:54   0:00 [kworker/5:0-events]
root       39448  0.0  0.0      0     0 ?        I    09:55   0:00 [kworker/4:1-events]
root       39888  0.1  0.0      0     0 ?        I    09:56   0:00 [kworker/u16:3-kcryptd/253:0]
root       39938  0.0  0.0      0     0 ?        I    09:56   0:00 [kworker/4:3]
user.l+   39968  0.0  0.2  62260 32996 ?        S    09:56   0:00 podman
user.l+   40284  0.2  0.1 827048 21920 pts/0    Sl+  09:57   0:00 /usr/bin/python3 /home/user.local/.local/bin/podman-compose -f podman-compose.yml -f podman-compose.elk.yml up
user.l+   40512  0.0  0.3 1272364 57224 pts/0   Sl+  09:57   0:00 podman start -a my_project_api_mariadb
user.l+   40531  0.1  0.0   6276  2916 ?        Ss   09:57   0:00 /usr/bin/fuse-overlayfs -o lowerdir=/home/user.local/.local/share/containers/storage/overlay/l/RREBLYX2CCW2X2RDDNMX62NSIF:/home/user.local/.loc
user.l+   40534  0.0  0.0  77912  1852 ?        Ssl  09:57   0:00 /usr/bin/conmon --api-version 1 -c zykx1234 -u zykx1235
2329222    40538  0.8  1.0 987732 170936 pts/0   Ssl+ 09:57   0:00 mysqld
user.l+   40558  0.0  0.3 1198376 57448 pts/0   Sl+  09:57   0:00 podman start -a my_project_elasticsearch_1
user.l+   40589  0.7  0.0   7176  4028 ?        Ss   09:57   0:00 /usr/bin/fuse-overlayfs -o lowerdir=/home/user.local/.local/share/containers/storage/overlay/l/6WXCE7N3V6R4XUBKZU5Y7MUEZQ:/home/user.local/.loc
user.l+   40591  0.0  0.0  77912  1804 ?        Ssl  09:57   0:00 /usr/bin/conmon --api-version 1 -c zykx1238 -u zykx1239
2329223    40595 77.7  9.1 6966804 1455912 ?     Sl   09:57   1:23 /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djav
user.l+   40683  0.1  0.3 1198376 58240 pts/0   Sl+  09:57   0:00 podman start -a my_project_api_web
user.l+   40707  3.6  0.0  16996 15116 ?        Ss   09:57   0:03 /usr/bin/fuse-overlayfs -o lowerdir=/home/user.local/.local/share/containers/storage/overlay/l/XDXFL5T4GT2PC4T6WOSZNMJHXF:/home/user.local/.loc
user.l+   40709  0.0  0.0  77912  1896 ?        Ssl  09:57   0:00 /usr/bin/conmon --api-version 1 -c lmno1234 -u lmno1234
user.l+   40713  0.0  0.0   5616  3316 pts/0    Ss+  09:57   0:00 /bin/bash /entrypoint/entrypoint.sh
user.l+   40852  0.1  0.3 1198888 57636 pts/0   Sl+  09:57   0:00 podman start -a my_project_logstash_1
user.l+   40870  1.0  0.0  11332 10212 ?        Ss   09:57   0:01 /usr/bin/fuse-overlayfs -o lowerdir=/home/user.local/.local/share/containers/storage/overlay/l/CS4IKYY7IUAWLKBU4MCX5YJO2P:/home/user.local/.loc
user.l+   40872  0.0  0.0  77912  1764 ?        Ssl  09:57   0:00 /usr/bin/conmon --api-version 1 -c lmno1234 -u lmno1234
2329223    40876  176  6.4 7070980 1034308 ?     Sl   09:57   3:03 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true
user.l+   41003  0.1  0.3 1124900 59320 pts/0   Sl+  09:57   0:00 podman start -a my_project_api_webhook_consumer
user.l+   41019  3.6  0.0  14936 12960 ?        Rs   09:57   0:03 /usr/bin/fuse-overlayfs -o lowerdir=/home/user.local/.local/share/containers/storage/overlay/l/XDXFL5T4GT2PC4T6WOSZNMJHXF:/home/user.local/.loc
user.l+   41021  0.0  0.0  77912  1940 ?        Ssl  09:57   0:00 /usr/bin/conmon --api-version 1 -c lmno1234 -u lmno1234
user.l+   41025  0.0  0.0   5616  3456 pts/0    Ss+  09:57   0:00 /bin/bash /entrypoint/entrypoint_podman.sh
user.l+   41054  0.1  0.3 977436 58900 pts/0    Sl+  09:57   0:00 podman start -a my_project_api_classifier
user.l+   41069  1.1  0.0  16384 14176 ?        Ss   09:57   0:01 /usr/bin/fuse-overlayfs -o lowerdir=/home/user.local/.local/share/containers/storage/overlay/l/XDXFL5T4GT2PC4T6WOSZNMJHXF:/home/user.local/.loc
user.l+   41071  0.0  0.0  77912  1940 ?        Ssl  09:57   0:00 /usr/bin/conmon --api-version 1 -c lmno1234 -u lmno1234
user.l+   41076  0.0  0.0   5616  3392 pts/0    Ss+  09:57   0:00 /bin/bash /entrypoint/entrypoint.sh
user.l+   41089  0.1  0.3 1050656 58628 pts/0   Sl+  09:57   0:00 podman start -a my_project_kibana_1
2329223    41094  0.0  0.0  70452  8472 ?        Sl   09:57   0:00 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
user.l+   41115  3.7  0.1  33492 31460 ?        Ss   09:57   0:03 /usr/bin/fuse-overlayfs -o lowerdir=/home/user.local/.local/share/containers/storage/overlay/l/F5KMLSLQLCT6KYW2GM74IN24KL:/home/user.local/.loc
user.l+   41117  0.0  0.0  77912  1916 ?        Ssl  09:57   0:00 /usr/bin/conmon --api-version 1 -c lmno1234 -u lmno1234
2329223    41121  0.0  0.0    208     4 ?        S    09:57   0:00 /usr/local/bin/dumb-init -- /usr/local/bin/kibana-docker
2329223    41137 66.2  2.2 1721012 355468 ?      Ssl  09:57   1:06 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli --cpu.cgroup.path.override=/ --cpuacct.cgroup.path.override=/
user.l+   41329  0.1  0.3 1124900 57732 pts/0   Sl+  09:57   0:00 podman start -a my_project_filebeat_1
user.l+   41603  0.0  0.0   7584  4216 ?        Ss   09:57   0:00 /usr/bin/fuse-overlayfs -o lowerdir=/home/user.local/.local/share/containers/storage/overlay/l/PVV2ZEQUUOQYNSUWNT7D74DVRI:/home/user.local/.loc
user.l+   41617  0.0  0.0  77912  1916 ?        Ssl  09:57   0:00 /usr/bin/conmon --api-version 1 -c lmno1234 -u lmno1234
user.l+   41630  0.0  0.0  11696  2564 ?        S    09:57   0:00 bash -c cp -fR /usr/share/filebeat/filebeat.yml.orig /usr/share/filebeat/filebeat.yml && chmod go-w /usr/share/filebeat/filebeat.yml && filebeat 
user.l+   41713  0.1  0.2 573384 47404 ?        Sl   09:57   0:00 filebeat -e -E -strict.perms=false
user.l+   41746 20.4 17.4 3848968 2794220 pts/0 S+   09:57   0:19 python /code/manage.py my_command --settings=project.settings.docker --clf-batch-size 1000 --clf-dir /models/ --delay 10
user.l+   41747  0.0  0.0   4076   676 pts/0    S+   09:57   0:00 tail -f /dev/null
user.l+   41749  3.8  0.8 683356 137700 pts/0   S+   09:57   0:03 python /code/manage.py shell_plus --notebook --settings=project.settings.docker
user.l+   41750  3.0  0.7 573236 120980 pts/0   S+   09:57   0:02 python /code/manage.py runserver 0.0.0.0:8000 --settings=project.settings.docker
user.l+   41803  0.0  0.0  14456 10732 pts/0    S+   09:57   0:00 /usr/local/bin/python -c from multiprocessing.semaphore_tracker import main;main(11)
user.l+   41804 15.5  0.8 720000 136016 pts/0   Sl+  09:57   0:14 /usr/local/bin/python /code/manage.py runserver 0.0.0.0:8000 --settings=project.settings.docker
user.l+   41959  0.3  0.0 221872  8952 pts/2    Ss   09:58   0:00 bash
root       42111  0.4  0.0      0     0 ?        I    09:58   0:00 [kworker/u16:8-kcryptd/253:0]
root       42208  0.2  0.0      0     0 ?        I    09:58   0:00 [kworker/u16:9-events_unbound]
user.l+   42345  0.6  0.3 1346096 55896 pts/2   Sl+  09:58   0:00 podman exec -it --log-level=debug my_project_elasticsearch_1 /bin/bash
user.l+   42360  0.0  0.0      0     0 pts/2    Z    09:58   0:00 [conmon] <defunct>
user.l+   42361  0.0  0.0  77912  2268 ?        Ssl  09:58   0:00 /usr/bin/conmon --api-version 1 -c zykx1238 -u lmno1234
user.l+   42369  0.1  0.0  11836  3052 pts/0    Ss+  09:58   0:00 /bin/bash
root       42392  0.0  0.0      0     0 ?        I    09:58   0:00 [kworker/0:0]
user.l+   42474  190  0.7 551224 113292 pts/0   R+   09:58   0:01 python /code/manage.py webhook --settings=project.settings.docker
user.l+   42485  0.0  0.0 219144  3804 pts/1    R+   09:59   0:00 ps auxw
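Editor's note, not part of the original commands in this thread: since the core symptom is that published ports (8000 for the API, 9200 for Elasticsearch, per the report) do not answer from the host even though `containers-rootlessport` and `slirp4netns` are visible in the `ps` output above, a quick generic TCP probe can confirm whether a given host port is actually reachable. This is a minimal sketch; the port numbers are only the ones mentioned in this issue and should be adjusted to your own mappings.

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection raises OSError (refused, timed out, unreachable)
        # when nothing is listening on the target port.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Ports taken from this issue report; adjust as needed.
    for port in (8000, 9200):
        print(port, "reachable:", port_open("127.0.0.1", port))
```

The same check can of course be done with `curl` or `ss -ntlp`; the point is to test from the host network namespace, not from inside another container, since containers on the same network can reach each other even when the host-side forward is broken.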

@barseghyanartur

@AkihiroSuda:

$ podman run -p 9200:9200 -it --log-level=debug -e "discovery.type=single-node" -e "xpack.security.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:7.5.2
DEBU[0000] Reading configuration file "/home/user.local/.config/containers/libpod.conf" 
DEBU[0000] Merged system config "/home/user.local/.config/containers/libpod.conf": &{{false false false true true true} 0 {   [] [] []} /home/user.local/.local/share/containers/storage/volumes docker://  /usr/bin/crun map[runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc]] [] [] [] [/usr/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] cgroupfs /usr/libexec/podman/catatonit /home/user.local/.local/share/containers/storage/libpod /run/user/1001/libpod/tmp -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin]  []   k8s.gcr.io/pause:3.1 /pause true true  2048  journald  ctrl-p,ctrl-q false false} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/user.local/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/user.local/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1001                
DEBU[0000] Using static dir /home/user.local/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1001/libpod/tmp      
DEBU[0000] Using volume path /home/user.local/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] No store required. Not opening container store. 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
INFO[0000] running as rootless                          
DEBU[0000] Reading configuration file "/home/user.local/.config/containers/libpod.conf" 
DEBU[0000] Merged system config "/home/user.local/.config/containers/libpod.conf": &{{false false false true true true} 0 {   [] [] []} /home/user.local/.local/share/containers/storage/volumes docker://  /usr/bin/crun map[runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc]] [] [] [] [/usr/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] cgroupfs /usr/libexec/podman/catatonit /home/user.local/.local/share/containers/storage/libpod /run/user/1001/libpod/tmp -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin]  []   k8s.gcr.io/pause:3.1 /pause true true  2048  journald  ctrl-p,ctrl-q false false} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/user.local/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/user.local/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1001                
DEBU[0000] Using static dir /home/user.local/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1001/libpod/tmp      
DEBU[0000] Using volume path /home/user.local/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] parsed reference into "[overlay@/home/user.local/.local/share/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs]docker.elastic.co/elasticsearch/elasticsearch:7.5.2" 
DEBU[0000] parsed reference into "[overlay@/home/user.local/.local/share/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs]@929d271f17988709f8e34bc2e907265f6dc9fc5742326349e0ad808bb213f97a" 
DEBU[0000] exporting opaque data as blob "sha256:929d271f17988709f8e34bc2e907265f6dc9fc5742326349e0ad808bb213f97a" 
DEBU[0000] Using slirp4netns netmode                    
DEBU[0000] No hostname set; container's hostname will default to runtime default 
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json" 
DEBU[0000] created OCI spec and options for new container 
DEBU[0000] Allocated lock 53 for container aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677 
DEBU[0000] parsed reference into "[overlay@/home/user.local/.local/share/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs]@929d271f17988709f8e34bc2e907265f6dc9fc5742326349e0ad808bb213f97a" 
DEBU[0000] exporting opaque data as blob "sha256:929d271f17988709f8e34bc2e907265f6dc9fc5742326349e0ad808bb213f97a" 
DEBU[0000] created container "aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677" 
DEBU[0000] container "aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677" has work directory "/home/user.local/.local/share/containers/storage/overlay-containers/aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677/userdata" 
DEBU[0000] container "aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677" has run directory "/run/user/1001/overlay-containers/aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677/userdata" 
DEBU[0000] New container created "aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677" 
DEBU[0000] container "aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677" has CgroupParent "/libpod_parent/libpod-aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677" 
DEBU[0000] Handling terminal attach                     
DEBU[0000] Made network namespace at /run/user/1001/netns/cni-d8e27b8a-26b8-0487-6be3-23b84a1e97ae for container aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677 
DEBU[0000] overlay: mount_data=lowerdir=/home/user.local/.local/share/containers/storage/overlay/l/6WXCE7N3V6R4XUBKZU5Y7MUEZQ:/home/user.local/.local/share/containers/storage/overlay/l/JGQGLDAJC7LXVDMF3IIJKE34FF:/home/user.local/.local/share/containers/storage/overlay/l/H5HDE23Z4C3LKVUKSH6DEBXXWR:/home/user.local/.local/share/containers/storage/overlay/l/O4YHP2WGSUTHX5GL6WFA6BUDZA:/home/user.local/.local/share/containers/storage/overlay/l/HQR3MBLLETTJAYQJEBEEVGFRRX:/home/user.local/.local/share/containers/storage/overlay/l/ZLABIKWEGXVR77A7Q6LBKFLY7A:/home/user.local/.local/share/containers/storage/overlay/l/PY7YLJRFTPCJPD7B2HQ5K3SYPF,upperdir=/home/user.local/.local/share/containers/storage/overlay/cadc9cd9177faf0b272720fcc72052c96370524808444d73d6b0fdf7d38b8a19/diff,workdir=/home/user.local/.local/share/containers/storage/overlay/cadc9cd9177faf0b272720fcc72052c96370524808444d73d6b0fdf7d38b8a19/work 
DEBU[0000] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 -c -e 3 -r 4 --netns-type=path /run/user/1001/netns/cni-d8e27b8a-26b8-0487-6be3-23b84a1e97ae tap0 
DEBU[0000] mounted container "aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677" at "/home/user.local/.local/share/containers/storage/overlay/cadc9cd9177faf0b272720fcc72052c96370524808444d73d6b0fdf7d38b8a19/merged" 
DEBU[0000] rootlessport: time="2020-02-12T10:10:48+01:00" level=info msg="starting parent driver"
                                                                                                 time="2020-02-12T10:10:48+01:00" level=info msg="opaque=map[builtin.readypipepath:/run/user/1001/libpod/tmp/rootlessport758295676/.bp-ready.pipe builtin.socketpath:/run/user/1001/libpod/tmp/rootlessport758295676/.bp.sock]" 
DEBU[0000] rootlessport: time="2020-02-12T10:10:48+01:00" level=info msg="starting child driver in child netns (\"/proc/45630/exe\" [containers-rootlessport-child])" 
DEBU[0000] rootlessport: time="2020-02-12T10:10:48+01:00" level=info msg="waiting for initComplete" 
DEBU[0000] rootlessport: time="2020-02-12T10:10:48+01:00" level=info msg="initComplete is closed; parent and child established the communication channel" 
DEBU[0000] rootlessport: time="2020-02-12T10:10:48+01:00" level=info msg="exposing ports [{9200 9200 tcp }]" 
DEBU[0000] rootlessport: time="2020-02-12T10:10:48+01:00" level=info msg=ready 
DEBU[0000] rootlessport: time="2020-02-12T10:10:48+01:00" level=info msg="waiting for exitfd to be closed" 
DEBU[0000] rootlessport is ready                        
DEBU[0000] Created root filesystem for container aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677 at /home/user.local/.local/share/containers/storage/overlay/cadc9cd9177faf0b272720fcc72052c96370524808444d73d6b0fdf7d38b8a19/merged 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0000] Setting CGroup path for container aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677 to /libpod_parent/libpod-aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677 
DEBU[0000] Created OCI spec for container aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677 at /home/user.local/.local/share/containers/storage/overlay-containers/aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677 -u aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677 -r /usr/bin/crun -b /home/user.local/.local/share/containers/storage/overlay-containers/aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677/userdata -p /run/user/1001/overlay-containers/aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677/userdata/pidfile -l k8s-file:/home/user.local/.local/share/containers/storage/overlay-containers/aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677/userdata/ctr.log --exit-dir /run/user/1001/libpod/tmp/exits --socket-dir-path /run/user/1001/libpod/tmp/socket --log-level debug --syslog -t --conmon-pidfile /run/user/1001/overlay-containers/aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/user.local/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1001 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1001/libpod/tmp --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677]"
DEBU[0000] Received: 45663                              
INFO[0000] Got Conmon PID as 45659                      
DEBU[0000] Created container aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677 in OCI runtime 
DEBU[0000] Attaching to container aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677 
DEBU[0000] connecting to socket /run/user/1001/libpod/tmp/socket/aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677/attach 
DEBU[0000] Received a resize event: {Width:212 Height:44} 
DEBU[0000] Starting container aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677 with command [/usr/local/bin/docker-entrypoint.sh eswrapper] 
DEBU[0000] Started container aefdc6bf01a5669da005ac7687fd23a964d98421a2f9c7545173a16b65b31677 
DEBU[0000] Enabling signal proxying                     
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
{"type": "server", "timestamp": "2020-02-12T09:10:54,383Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "using [1] data paths, mounts [[/ (fuse-overlayfs)]], net usable_space [355.8gb], net total_space [467.4gb], types [fuse.fuse-overlayfs]" }
{"type": "server", "timestamp": "2020-02-12T09:10:54,395Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "heap size [989.8mb], compressed ordinary object pointers [true]" }
{"type": "server", "timestamp": "2020-02-12T09:10:54,400Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "node name [aefdc6bf01a5], node ID [xEq3hFWyTHudzYOR2VKVrg], cluster name [docker-cluster]" }
{"type": "server", "timestamp": "2020-02-12T09:10:54,401Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "version[7.5.2], pid[1], build[default/docker/8bec50e1e0ad29dad5653712cf3bb580cd1afcdf/2020-01-15T12:11:52.313576Z], OS[Linux/5.4.17-200.fc31.x86_64/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/13.0.1/13.0.1+9]" }
{"type": "server", "timestamp": "2020-02-12T09:10:54,401Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "JVM home [/usr/share/elasticsearch/jdk]" }
{"type": "server", "timestamp": "2020-02-12T09:10:54,402Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=COMPAT, -Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-15643426398289029317, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -XX:MaxDirectMemorySize=536870912, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,124Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [aggs-matrix-stats]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,124Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [analysis-common]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,125Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [flattened]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,125Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [frozen-indices]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,125Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [ingest-common]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,125Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [ingest-geoip]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,125Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [ingest-user-agent]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,126Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [lang-expression]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,126Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [lang-mustache]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,126Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [lang-painless]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,126Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [mapper-extras]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,126Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [parent-join]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,126Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [percolator]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,127Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [rank-eval]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,127Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [reindex]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,127Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [repository-url]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,127Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [search-business-rules]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,127Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [spatial]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,128Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [transform]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,128Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [transport-netty4]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,128Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [vectors]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,128Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-analytics]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,128Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-ccr]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,128Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-core]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,129Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-deprecation]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,129Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-enrich]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,129Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-graph]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,129Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-ilm]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,129Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-logstash]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,129Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-ml]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,130Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-monitoring]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,130Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-rollup]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,130Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-security]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,130Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-sql]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,130Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-voting-only-node]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,131Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "loaded module [x-pack-watcher]" }
{"type": "server", "timestamp": "2020-02-12T09:10:57,131Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "no plugins loaded" }
{"type": "server", "timestamp": "2020-02-12T09:11:01,263Z", "level": "INFO", "component": "o.e.x.m.p.l.CppLogMessageHandler", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "[controller/102] [Main.cc@110] controller (64 bit): Version 7.5.2 (Build 68f6981dfb8e2d) Copyright (c) 2020 Elasticsearch BV" }
{"type": "server", "timestamp": "2020-02-12T09:11:02,017Z", "level": "INFO", "component": "o.e.d.DiscoveryModule", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "using discovery type [single-node] and seed hosts providers [settings]" }
{"type": "server", "timestamp": "2020-02-12T09:11:03,122Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "initialized" }
{"type": "server", "timestamp": "2020-02-12T09:11:03,122Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "starting ..." }
{"type": "server", "timestamp": "2020-02-12T09:11:03,352Z", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "publish_address {10.0.2.100:9300}, bound_addresses {[::]:9300}" }
{"type": "server", "timestamp": "2020-02-12T09:11:03,548Z", "level": "WARN", "component": "o.e.b.BootstrapChecks", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "max file descriptors [1024] for elasticsearch process is too low, increase to at least [65535]" }
{"type": "server", "timestamp": "2020-02-12T09:11:03,549Z", "level": "WARN", "component": "o.e.b.BootstrapChecks", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]" }
{"type": "server", "timestamp": "2020-02-12T09:11:03,557Z", "level": "INFO", "component": "o.e.c.c.Coordinator", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "setting initial configuration to VotingConfiguration{xEq3hFWyTHudzYOR2VKVrg}" }
{"type": "server", "timestamp": "2020-02-12T09:11:03,743Z", "level": "INFO", "component": "o.e.c.s.MasterService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "elected-as-master ([1] nodes joined)[{aefdc6bf01a5}{xEq3hFWyTHudzYOR2VKVrg}{e-JeaoruRMuSd-4EMuNvjA}{10.0.2.100}{10.0.2.100:9300}{dilm}{ml.machine_memory=16372674560, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, delta: master node changed {previous [], current [{aefdc6bf01a5}{xEq3hFWyTHudzYOR2VKVrg}{e-JeaoruRMuSd-4EMuNvjA}{10.0.2.100}{10.0.2.100:9300}{dilm}{ml.machine_memory=16372674560, xpack.installed=true, ml.max_open_jobs=20}]}" }
{"type": "server", "timestamp": "2020-02-12T09:11:03,788Z", "level": "INFO", "component": "o.e.c.c.CoordinationState", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "cluster UUID set to [kBgxSZAdRWiq2O7D5oMq_Q]" }
{"type": "server", "timestamp": "2020-02-12T09:11:03,817Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "master node changed {previous [], current [{aefdc6bf01a5}{xEq3hFWyTHudzYOR2VKVrg}{e-JeaoruRMuSd-4EMuNvjA}{10.0.2.100}{10.0.2.100:9300}{dilm}{ml.machine_memory=16372674560, xpack.installed=true, ml.max_open_jobs=20}]}, term: 1, version: 1, reason: Publication{term=1, version=1}" }
{"type": "server", "timestamp": "2020-02-12T09:11:03,937Z", "level": "INFO", "component": "o.e.h.AbstractHttpServerTransport", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "publish_address {10.0.2.100:9200}, bound_addresses {[::]:9200}", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }
{"type": "server", "timestamp": "2020-02-12T09:11:03,937Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "started", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }
{"type": "server", "timestamp": "2020-02-12T09:11:04,006Z", "level": "INFO", "component": "o.e.g.GatewayService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "recovered [0] indices into cluster_state", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }
{"type": "server", "timestamp": "2020-02-12T09:11:04,162Z", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "adding template [.watches] for index patterns [.watches*]", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }
{"type": "server", "timestamp": "2020-02-12T09:11:04,255Z", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "adding template [.watch-history-10] for index patterns [.watcher-history-10*]", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }
{"type": "server", "timestamp": "2020-02-12T09:11:04,306Z", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "adding template [.triggered_watches] for index patterns [.triggered_watches*]", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }
{"type": "server", "timestamp": "2020-02-12T09:11:04,347Z", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "adding template [.slm-history] for index patterns [.slm-history-1*]", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }
{"type": "server", "timestamp": "2020-02-12T09:11:04,397Z", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-7-*]", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }
{"type": "server", "timestamp": "2020-02-12T09:11:04,484Z", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "adding template [.monitoring-es] for index patterns [.monitoring-es-7-*]", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }
{"type": "server", "timestamp": "2020-02-12T09:11:04,562Z", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "adding template [.monitoring-beats] for index patterns [.monitoring-beats-7-*]", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }
{"type": "server", "timestamp": "2020-02-12T09:11:04,636Z", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "adding template [.monitoring-alerts-7] for index patterns [.monitoring-alerts-7]", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }
{"type": "server", "timestamp": "2020-02-12T09:11:04,706Z", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-7-*]", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }
{"type": "server", "timestamp": "2020-02-12T09:11:04,762Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "adding index lifecycle policy [watch-history-ilm-policy]", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }
{"type": "server", "timestamp": "2020-02-12T09:11:04,819Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "adding index lifecycle policy [slm-history-ilm-policy]", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }
{"type": "server", "timestamp": "2020-02-12T09:11:05,031Z", "level": "INFO", "component": "o.e.l.LicenseService", "cluster.name": "docker-cluster", "node.name": "aefdc6bf01a5", "message": "license [388a9b89-5592-45a5-9a81-c79c15635214] mode [basic] - valid", "cluster.uuid": "kBgxSZAdRWiq2O7D5oMq_Q", "node.id": "xEq3hFWyTHudzYOR2VKVrg"  }

This one runs properly (I can connect to it from outside the container).

@AkihiroSuda
Collaborator

So, the issue is specific to podman-compose?

@AkihiroSuda AkihiroSuda changed the title /kind bug Unable to access application publically/outside after exposing port with podman [podman-compose specific?] /kind bug Unable to access application publically/outside after exposing port with podman Feb 12, 2020
@barseghyanartur
Author

@AkihiroSuda:

It might be, but it worked fine before the Podman update (from 1.6.0 to 1.8.0). Do you have any idea what the cause could be?

@AkihiroSuda
Collaborator

can't reproduce

version: '3'
services:
  nginx:
    image: "docker.io/library/nginx:alpine"
    ports:
      - "8080:80"

@cbz

cbz commented Feb 12, 2020

My impression (I'm experiencing the same issue under podman-compose) is that it varies from container to container and is sometimes intermittent. See containers/podman-compose#107 (comment)

@barseghyanartur
Author

@AkihiroSuda:

Perhaps test with multiple services?

@barseghyanartur
Author

This is what podman-compose up does (under the hood):

podman pod create --name=my_project --share net -p 8888:8888 -p 5044:5044 -p 8080:8080 -p 9200:9200 -p 8000:8000 -p 3306:3306 -p 5601:5601 -p 9300:9300
0
podman create --name=my_project_api_mariadb --pod=my_project --label io.podman.compose.config-hash=123 --label io.podman.compose.project=my_project --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=mariadb -e MYSQL_DATABASE=my_project_api -e MYSQL_ROOT_PASSWORD=Fr3nsp@ss -e MYSQL_USER=my_project_api -e MYSQL_PASSWORD=Fr3nsp@ss --mount type=bind,source=/home/user.local/repos/my_project/./docker/mysql/data,destination=/var/lib/mysql --mount type=bind,source=/home/user.local/repos/my_project/./docker/mysql/my.cnf.d,destination=/etc/mysql/conf.d,ro --add-host web:127.0.0.1 --add-host my_project_api_web:127.0.0.1 --add-host webhookconsumer:127.0.0.1 --add-host my_project_api_webhook_consumer:127.0.0.1 --add-host classifier:127.0.0.1 --add-host my_project_api_classifier:127.0.0.1 --add-host mariadb:127.0.0.1 --add-host my_project_api_mariadb:127.0.0.1 --add-host mailprocessor:127.0.0.1 --add-host my_project_api_mail:127.0.0.1 --add-host elasticsearch:127.0.0.1 --add-host my_project_elasticsearch_1:127.0.0.1 --add-host logstash:127.0.0.1 --add-host my_project_logstash_1:127.0.0.1 --add-host kibana:127.0.0.1 --add-host my_project_kibana_1:127.0.0.1 --add-host filebeat:127.0.0.1 --add-host my_project_filebeat_1:127.0.0.1 -i --tty mariadb:10.1.43
0bc3f5c1d399d0697d6b0dfeb5900b3c776c8174f676e7c9115e64be5cf34134
0
podman create --name=my_project_elasticsearch_1 --pod=my_project --label io.podman.compose.config-hash=123 --label io.podman.compose.project=my_project --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=elasticsearch -e discovery.type=single-node -e xpack.security.enabled=true -e ELASTIC_PASSWORD=changeme --add-host web:127.0.0.1 --add-host my_project_api_web:127.0.0.1 --add-host webhookconsumer:127.0.0.1 --add-host my_project_api_webhook_consumer:127.0.0.1 --add-host classifier:127.0.0.1 --add-host my_project_api_classifier:127.0.0.1 --add-host mariadb:127.0.0.1 --add-host my_project_api_mariadb:127.0.0.1 --add-host mailprocessor:127.0.0.1 --add-host my_project_api_mail:127.0.0.1 --add-host elasticsearch:127.0.0.1 --add-host my_project_elasticsearch_1:127.0.0.1 --add-host logstash:127.0.0.1 --add-host my_project_logstash_1:127.0.0.1 --add-host kibana:127.0.0.1 --add-host my_project_kibana_1:127.0.0.1 --add-host filebeat:127.0.0.1 --add-host my_project_filebeat_1:127.0.0.1 docker.elastic.co/elasticsearch/elasticsearch:7.5.2
5e53c4d16e497884b15288048c856495724a2f5e1a2f9818ce7c881becf18642
0
podman create --name=my_project_api_web --pod=my_project --label io.podman.compose.config-hash=123 --label io.podman.compose.project=my_project --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=web -e MYSQL_HOST=mariadb -e MYSQL_DATABASE=my_project_api -e MYSQL_ROOT_PASSWORD=Fr3nsp@ss -e MYSQL_USER=my_project_api -e MYSQL_PASSWORD=Fr3nsp@ss -e DJANGO_SETTINGS_MODULE=project.settings.docker -e CQLENG_ALLOW_SCHEMA_MANAGEMENT='True' -e KAFKA_HOST=kafka -e KAFKA_PORT=9092 --mount type=bind,source=/home/user.local/repos/my_project/./docker/web,destination=/entrypoint --mount type=bind,source=/home/user.local/repos/my_project/./docker/web/jupyter,destination=/root/.jupyter/ --mount type=bind,source=/home/user.local/repos/my_project/./web,destination=/code --mount type=bind,source=/home/user.local/repos/my_project/./web/static,destination=/static --mount type=bind,source=/home/user.local/repos/my_project/./web/media,destination=/media --mount type=bind,source=/home/user.local/repos/my_project/./docker/logs,destination=/logs --mount type=bind,source=/home/user.local/repos/my_project/./build,destination=/build --add-host web:127.0.0.1 --add-host my_project_api_web:127.0.0.1 --add-host webhookconsumer:127.0.0.1 --add-host my_project_api_webhook_consumer:127.0.0.1 --add-host classifier:127.0.0.1 --add-host my_project_api_classifier:127.0.0.1 --add-host mariadb:127.0.0.1 --add-host my_project_api_mariadb:127.0.0.1 --add-host mailprocessor:127.0.0.1 --add-host my_project_api_mail:127.0.0.1 --add-host elasticsearch:127.0.0.1 --add-host my_project_elasticsearch_1:127.0.0.1 --add-host logstash:127.0.0.1 --add-host my_project_logstash_1:127.0.0.1 --add-host kibana:127.0.0.1 --add-host my_project_kibana_1:127.0.0.1 --add-host filebeat:127.0.0.1 --add-host my_project_filebeat_1:127.0.0.1 -i --tty --entrypoint /entrypoint/entrypoint.sh my_project_web
0e33301d30c0ad27c9a7f39c51a7c6f3a17a27a4ef63d39e0a1dd36ee9a3ec2c
0
podman create --name=my_project_api_mail --pod=my_project --label io.podman.compose.config-hash=123 --label io.podman.compose.project=my_project --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=mailprocessor -e MYSQL_HOST=mariadb -e MYSQL_DATABASE=my_project_api -e MYSQL_ROOT_PASSWORD=Fr3nsp@ss -e MYSQL_USER=my_project_api -e MYSQL_PASSWORD=Fr3nsp@ss -e DJANGO_SETTINGS_MODULE=project.settings.docker -e CQLENG_ALLOW_SCHEMA_MANAGEMENT='True' -e KAFKA_HOST=kafka -e KAFKA_PORT=9092 --mount type=bind,source=/home/user.local/repos/my_project/./docker/process_mail,destination=/entrypoint --mount type=bind,source=/home/user.local/repos/my_project/./docker/web/jupyter,destination=/root/.jupyter/ --mount type=bind,source=/home/user.local/repos/my_project/./docker/models,destination=/models --mount type=bind,source=/home/user.local/repos/my_project/./web,destination=/code --mount type=bind,source=/home/user.local/repos/my_project/./web/static,destination=/static --mount type=bind,source=/home/user.local/repos/my_project/./web/media,destination=/media --mount type=bind,source=/home/user.local/repos/my_project/./docker/logs,destination=/logs --mount type=bind,source=/home/user.local/repos/my_project/./docker/mail,destination=/mail --add-host web:127.0.0.1 --add-host my_project_api_web:127.0.0.1 --add-host webhookconsumer:127.0.0.1 --add-host my_project_api_webhook_consumer:127.0.0.1 --add-host classifier:127.0.0.1 --add-host my_project_api_classifier:127.0.0.1 --add-host mariadb:127.0.0.1 --add-host my_project_api_mariadb:127.0.0.1 --add-host mailprocessor:127.0.0.1 --add-host my_project_api_mail:127.0.0.1 --add-host elasticsearch:127.0.0.1 --add-host my_project_elasticsearch_1:127.0.0.1 --add-host logstash:127.0.0.1 --add-host my_project_logstash_1:127.0.0.1 --add-host kibana:127.0.0.1 --add-host my_project_kibana_1:127.0.0.1 --add-host filebeat:127.0.0.1 --add-host my_project_filebeat_1:127.0.0.1 -i --tty --entrypoint /entrypoint/entrypoint.sh my_project_mailprocessor
0
podman create --name=my_project_logstash_1 --pod=my_project --label io.podman.compose.config-hash=123 --label io.podman.compose.project=my_project --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=logstash -e http.host="0.0.0.0" -e xpack.monitoring.elasticsearch.hosts=["http://elasticsearch:9200"] -e xpack.monitoring.enabled=false -e discovery.type=single-node -e xpack.security.enabled=false --mount type=bind,source=/home/user.local/repos/my_project/./docker/logstash/logstash.conf,destination=/usr/share/logstash/pipeline/logstash.conf,ro --add-host web:127.0.0.1 --add-host my_project_api_web:127.0.0.1 --add-host webhookconsumer:127.0.0.1 --add-host my_project_api_webhook_consumer:127.0.0.1 --add-host classifier:127.0.0.1 --add-host my_project_api_classifier:127.0.0.1 --add-host mariadb:127.0.0.1 --add-host my_project_api_mariadb:127.0.0.1 --add-host mailprocessor:127.0.0.1 --add-host my_project_api_mail:127.0.0.1 --add-host elasticsearch:127.0.0.1 --add-host my_project_elasticsearch_1:127.0.0.1 --add-host logstash:127.0.0.1 --add-host my_project_logstash_1:127.0.0.1 --add-host kibana:127.0.0.1 --add-host my_project_kibana_1:127.0.0.1 --add-host filebeat:127.0.0.1 --add-host my_project_filebeat_1:127.0.0.1 docker.elastic.co/logstash/logstash:7.5.2 logstash -f /usr/share/logstash/pipeline/logstash.conf --config.reload.automatic
6622332b35a9a467d693e86a87d792613d6a752621a983e1fbf855aa9e57b2a3
0
podman create --name=my_project_api_webhook_consumer --pod=my_project --label io.podman.compose.config-hash=123 --label io.podman.compose.project=my_project --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=webhookconsumer -e MYSQL_HOST=mariadb -e MYSQL_DATABASE=my_project_api -e MYSQL_ROOT_PASSWORD=Fr3nsp@ss -e MYSQL_USER=my_project_api -e MYSQL_PASSWORD=Fr3nsp@ss -e DJANGO_SETTINGS_MODULE=project.settings.docker -e CQLENG_ALLOW_SCHEMA_MANAGEMENT='True' -e KAFKA_HOST=kafka -e KAFKA_PORT=9092 --mount type=bind,source=/home/user.local/repos/my_project/./docker/webhook_consumer,destination=/entrypoint --mount type=bind,source=/home/user.local/repos/my_project/./web,destination=/code --mount type=bind,source=/home/user.local/repos/my_project/./web/static,destination=/static --mount type=bind,source=/home/user.local/repos/my_project/./web/media,destination=/media --mount type=bind,source=/home/user.local/repos/my_project/./docker/logs,destination=/logs --add-host web:127.0.0.1 --add-host my_project_api_web:127.0.0.1 --add-host webhookconsumer:127.0.0.1 --add-host my_project_api_webhook_consumer:127.0.0.1 --add-host classifier:127.0.0.1 --add-host my_project_api_classifier:127.0.0.1 --add-host mariadb:127.0.0.1 --add-host my_project_api_mariadb:127.0.0.1 --add-host mailprocessor:127.0.0.1 --add-host my_project_api_mail:127.0.0.1 --add-host elasticsearch:127.0.0.1 --add-host my_project_elasticsearch_1:127.0.0.1 --add-host logstash:127.0.0.1 --add-host my_project_logstash_1:127.0.0.1 --add-host kibana:127.0.0.1 --add-host my_project_kibana_1:127.0.0.1 --add-host filebeat:127.0.0.1 --add-host my_project_filebeat_1:127.0.0.1 -i --tty --entrypoint /entrypoint/entrypoint_podman.sh my_project_webhookconsumer
159dcfc59b26c48ae0277c64744fa32b610178b68a5b93f678965fbe55018b7b
0
podman create --name=my_project_api_classifier --pod=my_project --label io.podman.compose.config-hash=123 --label io.podman.compose.project=my_project --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=classifier -e MYSQL_HOST=mariadb -e MYSQL_DATABASE=my_project_api -e MYSQL_ROOT_PASSWORD=Fr3nsp@ss -e MYSQL_USER=my_project_api -e MYSQL_PASSWORD=Fr3nsp@ss -e DJANGO_SETTINGS_MODULE=project.settings.docker -e CQLENG_ALLOW_SCHEMA_MANAGEMENT='True' -e KAFKA_HOST=kafka -e KAFKA_PORT=9092 -e CLF_MODELS_PATH=/models/ --mount type=bind,source=/home/user.local/repos/my_project/./docker/classifier,destination=/entrypoint --mount type=bind,source=/home/user.local/repos/my_project/./web,destination=/code --mount type=bind,source=/home/user.local/repos/my_project/./web/static,destination=/static --mount type=bind,source=/home/user.local/repos/my_project/./web/media,destination=/media --mount type=bind,source=/home/user.local/repos/my_project/./docker/models,destination=/models --mount type=bind,source=/home/user.local/repos/my_project/./docker/resources,destination=/resources --mount type=bind,source=/home/user.local/repos/my_project/./docker/logs,destination=/logs --add-host web:127.0.0.1 --add-host my_project_api_web:127.0.0.1 --add-host webhookconsumer:127.0.0.1 --add-host my_project_api_webhook_consumer:127.0.0.1 --add-host classifier:127.0.0.1 --add-host my_project_api_classifier:127.0.0.1 --add-host mariadb:127.0.0.1 --add-host my_project_api_mariadb:127.0.0.1 --add-host mailprocessor:127.0.0.1 --add-host my_project_api_mail:127.0.0.1 --add-host elasticsearch:127.0.0.1 --add-host my_project_elasticsearch_1:127.0.0.1 --add-host logstash:127.0.0.1 --add-host my_project_logstash_1:127.0.0.1 --add-host kibana:127.0.0.1 --add-host my_project_kibana_1:127.0.0.1 --add-host filebeat:127.0.0.1 --add-host my_project_filebeat_1:127.0.0.1 -i --tty --entrypoint /entrypoint/entrypoint.sh my_project_classifier
a6cd8e006b227b89224ea971ff6c169c5796715f63a96b3507326c26b214bec8
0
podman create --name=my_project_kibana_1 --pod=my_project --label io.podman.compose.config-hash=123 --label io.podman.compose.project=my_project --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=kibana -e discovery.type=single-node -e xpack.security.enabled=true -e ELASTIC_PASSWORD=changeme --mount type=bind,source=/home/user.local/repos/my_project/./docker/kibana/kibana.yml,destination=/usr/share/kibana/config/kibana.yml --add-host web:127.0.0.1 --add-host my_project_api_web:127.0.0.1 --add-host webhookconsumer:127.0.0.1 --add-host my_project_api_webhook_consumer:127.0.0.1 --add-host classifier:127.0.0.1 --add-host my_project_api_classifier:127.0.0.1 --add-host mariadb:127.0.0.1 --add-host my_project_api_mariadb:127.0.0.1 --add-host mailprocessor:127.0.0.1 --add-host my_project_api_mail:127.0.0.1 --add-host elasticsearch:127.0.0.1 --add-host my_project_elasticsearch_1:127.0.0.1 --add-host logstash:127.0.0.1 --add-host my_project_logstash_1:127.0.0.1 --add-host kibana:127.0.0.1 --add-host my_project_kibana_1:127.0.0.1 --add-host filebeat:127.0.0.1 --add-host my_project_filebeat_1:127.0.0.1 docker.elastic.co/kibana/kibana:7.5.2
c7de766641de8c12ad1f66aa9d3742b27a5a3ee1dd539e2b5b8ce92355dd2c06
0
podman create --name=my_project_filebeat_1 --pod=my_project --label io.podman.compose.config-hash=123 --label io.podman.compose.project=my_project --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=filebeat --mount type=bind,source=/home/user.local/repos/my_project/./docker/filebeat/filebeat.yml,destination=/usr/share/filebeat/filebeat.yml.orig,ro --mount type=bind,source=/home/user.local/repos/my_project/./docker/logs,destination=/logs --mount type=bind,source=/home/user.local/repos/my_project/./docker/elk/filebeat,destination=/usr/share/filebeat/data/ --add-host web:127.0.0.1 --add-host my_project_api_web:127.0.0.1 --add-host webhookconsumer:127.0.0.1 --add-host my_project_api_webhook_consumer:127.0.0.1 --add-host classifier:127.0.0.1 --add-host my_project_api_classifier:127.0.0.1 --add-host mariadb:127.0.0.1 --add-host my_project_api_mariadb:127.0.0.1 --add-host mailprocessor:127.0.0.1 --add-host my_project_api_mail:127.0.0.1 --add-host elasticsearch:127.0.0.1 --add-host my_project_elasticsearch_1:127.0.0.1 --add-host logstash:127.0.0.1 --add-host my_project_logstash_1:127.0.0.1 --add-host kibana:127.0.0.1 --add-host my_project_kibana_1:127.0.0.1 --add-host filebeat:127.0.0.1 --add-host my_project_filebeat_1:127.0.0.1 -u root docker.elastic.co/beats/filebeat:7.5.2 bash -c cp -fR /usr/share/filebeat/filebeat.yml.orig /usr/share/filebeat/filebeat.yml && chmod go-w /usr/share/filebeat/filebeat.yml && filebeat -e -E -strict.perms=false --config.reload.automatic
5764b5c98f448d9e9c0a2b2dfca80c600cc2372ecbca7b80e8e0a70e7230ce15
0

Does this look correct to you, @AkihiroSuda?
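The pattern in that transcript can be reduced to a minimal sketch (hypothetical names, added here for illustration): every port is published once, on the pod's infra container, and each service container joins the pod with --pod, sharing its network namespace. That shared namespace is also why all the --add-host entries point at 127.0.0.1.

```shell
# Minimal version of the pod-join pattern podman-compose generates above.
pod_demo() {
  # all published ports live on the pod (its infra container)
  podman pod create --name demo --share net -p 8080:80
  # service containers join the pod and inherit its network namespace
  podman create --name demo_web --pod demo docker.io/library/nginx:alpine
  podman pod start demo
  # note: the 8080:80 mapping belongs to the infra container, not demo_web
  podman pod inspect demo
}
```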

@barseghyanartur
Author

barseghyanartur commented Feb 12, 2020

OK, the issue is solved locally, though with tricks that were not necessary in Podman 1.6.0.

I need to forcibly stop all containers (even if podman ps does not show anything running) with podman stop --all before running podman-compose up.

Thus:

podman-compose down
podman stop --all
podman-compose up

Then it works.

@mheon
Member

mheon commented Feb 12, 2020

What? That's... rather bizarre.

@cbz

cbz commented Feb 12, 2020

As above, I don't think this is necessarily a solution. The behaviour appears to be intermittent; I assume it happened to work in this case whereas in others it hasn't, and stopping the containers this way is simply irrelevant.

@barseghyanartur
Author

barseghyanartur commented Feb 12, 2020

I'm not saying the issue is globally solved or no longer relevant. I reported symptoms and a workaround to be picked up by someone who has deep(er) understanding of Podman internals.

@cbz

cbz commented Feb 12, 2020

The point is that I'm unconvinced that it is a workaround, as opposed to something in the middle of a set of intermittent failures. I have also managed to reproduce the problem with a fresh container:

√ podman run --name tbw -v /tmp/bw-data:/data -p 7080:80 bitwardenrs/server:alpine
✗ curl http://localhost:7080/
curl: (7) Failed to connect to localhost port 7080: Connection refused
√ podman ps 
CONTAINER ID  IMAGE                                      COMMAND        CREATED         STATUS                 PORTS                    NAMES
9e9e0dcbac9b  docker.io/bitwardenrs/server:alpine        /bitwarden_rs  34 seconds ago  Up 33 seconds ago      0.0.0.0:7080->80/tcp     tbw
√ podman exec -it 9e9 /bin/sh
/ # netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:7080            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1/bitwarden_rs
/ # curl http://localhost:7080/
<!DOCTYPE html>
<html>

<head>
...

So in this case the port mapping has again been created inside the container, rather than exposed on the host.
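Since the failure is intermittent, one way to quantify it is to start the same published-port container repeatedly and count how often the host-side forwarder is missing. This is an illustrative sketch, not from the thread; nginx:alpine stands in for the bitwarden image, and the 2-second sleep is an arbitrary startup grace period.

```shell
# Count how many of 5 identical runs leave the host port unreachable.
count_failures() {
  fails=0
  for i in 1 2 3 4 5; do
    cid=$(podman run -d -p 7080:80 docker.io/library/nginx:alpine)
    sleep 2  # give the container and forwarder a moment to come up
    curl -fsS -o /dev/null http://localhost:7080/ || fails=$((fails + 1))
    podman rm -f "$cid" >/dev/null
  done
  echo "$fails of 5 runs had an unreachable host port"
}
```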

@mheon
Member

mheon commented Feb 12, 2020

So you're seeing an open port inside the container, but not on the host, for 7080? That sounds like a bug with the port forwarder

@cbz

cbz commented Feb 12, 2020

So you're seeing an open port inside the container, but not on the host, for 7080? That sounds like a bug with the port forwarder

Yep, though it's intermittent: if I restart a few times, I get the port forwarder on the host some of the time.

@mheon
Member

mheon commented Feb 12, 2020

That definitely sounds like a RootlessKit bug. Can you provide more details about your environment - OS, Podman version? @AkihiroSuda is there any additional debugging info we can get for debugging port forwarding?

@barseghyanartur
Author

@cbz

FYI, that workaround worked well for me; at least for a couple of hours.

@cbz

cbz commented Feb 12, 2020

That definitely sounds like a RootlessKit bug. Can you provide more details about your environment - OS, Podman version? @AkihiroSuda is there any additional debugging info we can get for debugging port forwarding?

√ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.4 LTS
Release:        18.04
Codename:       bionic
√ podman info
host:
  BuildahVersion: 1.12.0
  CgroupVersion: v1
  Conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.10, commit: unknown'
  Distribution:
    distribution: ubuntu
    version: "18.04"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  MemFree: 23035904
  MemTotal: 504590336
  OCIRuntime:
    name: runc
    package: 'runc: /usr/sbin/runc'
    path: /usr/sbin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 448262144
  SwapTotal: 533721088
  arch: amd64
  cpus: 1
  eventlogger: journald
  hostname: XXX
  kernel: 4.15.0-76-generic
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: 'slirp4netns: /usr/bin/slirp4netns'
    Version: |-
      slirp4netns version 0.4.3
      commit: unknown
  uptime: 319h 41m 20.17s (Approximately 13.29 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  ConfigFile: /home/XXX/.config/containers/storage.conf
  ContainerStore:
    number: 5
  GraphDriverName: vfs
  GraphOptions: {}
  GraphRoot: /home/XXX/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 6
  RunRoot: /run/user/1000
  VolumePath: /home/XXX/.local/share/containers/storage/volumes

@pdfrod

pdfrod commented Feb 12, 2020

Since upgrading to Podman v1.8.0 I've also started having this issue on two different machines (both running Ubuntu 19.10), so I had to downgrade to v1.7.0.

I can consistently reproduce the issue like this:

> podman run --rm -d -ti -p 8000:8000 --userns=keep-id python:2.7-alpine python -m SimpleHTTPServer
2a8fc5c6b3917741e6d49207c2545cdc5d64773fa50c216642a356791852cc1e

> curl -I localhost:8000
curl: (7) Failed to connect to localhost port 8000: Connection refused

However, if I remove either the -d or the --userns=keep-id flag, it works:

> podman run --rm -d -ti -p 8000:8000 python:2.7-alpine python -m SimpleHTTPServer
ec84418f635af5a0f9b07156b8537cf8ec7f868b9017d02a08c1fad569df5f4d

> curl -I localhost:8000
HTTP/1.0 200 OK
Server: SimpleHTTP/0.6 Python/2.7.17
Date: Wed, 12 Feb 2020 22:44:47 GMT
Content-type: text/html; charset=UTF-8
Content-Length: 666
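That reproduction suggests an A/B probe: run the same server with and without --userns=keep-id and report whether the host port answers. A sketch added for illustration (the helper name and the 2-second grace period are not from the thread):

```shell
# Run the reproduction once with the given extra flags and report reachability.
probe() {
  desc=$1; shift
  cid=$(podman run --rm -d -p 8000:8000 "$@" python:2.7-alpine python -m SimpleHTTPServer)
  sleep 2  # let the server and forwarder start
  if curl -fsS -o /dev/null -I http://localhost:8000; then
    echo "$desc: reachable"
  else
    echo "$desc: NOT reachable"
  fi
  podman stop -t 0 "$cid" >/dev/null
}
```

Usage: `probe "default userns"` and then `probe "keep-id" --userns=keep-id`; with v1.8.0 the report above implies only the second would print NOT reachable.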

@AkihiroSuda
Collaborator

Thanks for the report; this looks like a LockOSThread issue: https://github.com/containers/libpod/blob/5ea6cad20c9659da9bae38a660da584ee2b58aec/pkg/rootlessport/rootlessport_linux.go#L157

@AkihiroSuda AkihiroSuda changed the title [podman-compose specific?] /kind bug Unable to access application publically/outside after exposing port with podman [v1.8] /kind bug Unable to access application publically/outside after exposing port with podman Feb 13, 2020
@AkihiroSuda
Collaborator

#5167 (comment) is reproducible for me; the exit FD seems to be getting closed immediately somehow.

https://github.com/containers/libpod/blob/2ced9094d4728dd09f60a177faa32339a8d0f721/pkg/rootlessport/rootlessport_linux.go#L196-L201
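The mechanism under discussion can be sketched with a plain FIFO (illustrative stand-in only, not Podman's actual code): the rootless port forwarder blocks reading an "exit fd" and shuts down on EOF, so whichever process holds the write end controls the forwarder's lifetime. If that write end is closed immediately, as described above, the forwarder exits right away and the host port never listens.

```shell
# Toy model of the exit-fd keep-alive pattern.
dir=$(mktemp -d)
mkfifo "$dir/exitfd"

# "port forwarder": waits for EOF on the exit fd, then shuts down
( cat "$dir/exitfd" >/dev/null
  echo "forwarder: exit fd closed, shutting down" > "$dir/log" ) &
forwarder=$!

exec 3>"$dir/exitfd"          # conmon's role in the fix: hold the write end open
sleep 1
kill -0 "$forwarder" 2>/dev/null && alive=yes || alive=no
echo "forwarder alive while fd is held: $alive"

exec 3>&-                     # container exit: the write end is closed
wait "$forwarder"
msg=$(cat "$dir/log"); rm -rf "$dir"
echo "$msg"
```

The v1.8.0 bug behaves as if `exec 3>&-` ran immediately; the fix referenced below injects the write end into conmon so it stays open for the container's lifetime.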

@AkihiroSuda
Collaborator

@giuseppe Do you have an idea?

@giuseppe
Member

@giuseppe Do you have an idea?

the issue seems to happen only with -d.

Do we inject the other end of the exit FD into conmon? That would be the way to keep it alive.

giuseppe added a commit to giuseppe/libpod that referenced this issue Feb 18, 2020
when using -d and port mapping, make sure the correct fd is injected
into conmon.

Move the pipe creation earlier as the fd must be known at the time we
create the container through conmon.

Closes: containers#5167

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
@giuseppe
Member

PR here: #5245

giuseppe added a commit to giuseppe/libpod that referenced this issue Feb 18, 2020
when using -d and port mapping, make sure the correct fd is injected
into conmon.

Move the pipe creation earlier as the fd must be known at the time we
create the container through conmon.

Closes: containers#5167

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
@AkihiroSuda
Collaborator

The NetNS race seems to be a separate issue; opened a new one: #5249

snj33v pushed a commit to snj33v/libpod that referenced this issue May 31, 2020
when using -d and port mapping, make sure the correct fd is injected
into conmon.

Move the pipe creation earlier as the fd must be known at the time we
create the container through conmon.

Closes: containers#5167

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
@hxcan

hxcan commented Aug 18, 2021

podman stop --all

That solved my problem.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023