
error running container: error from /usr/bin/crun creating container for [/bin/sh -c pip install -r requirements.txt]: mount /proc to /proc: Operation not permitted #10864

Closed · sachinkaushik opened this issue Jul 6, 2021 · 75 comments
Labels: locked - please file new issue/PR · podman-in-container · stale-issue

@sachinkaushik

sachinkaushik commented Jul 6, 2021

Hi Team,

I have a rootless Podman container running in OpenShift, created from a Dockerfile. I followed the link below for creating rootless Podman without the privileged flag. I'm able to build a Java Spring application, but when I try to build a Python application whose Dockerfile runs pip install, I get the error below. Can you please let us know what additional configuration is required to resolve it?

https://www.redhat.com/sysadmin/podman-inside-kubernetes

error running container: error from /usr/bin/crun creating container for [/bin/sh -c pip install -r requirements.txt]: mount /proc to /proc: Operation not permitted

    • If the Dockerfile contains a "pip install" command, podman build fails with the error "mount /proc to /proc: Operation not permitted".
    • If the Dockerfile does not contain a "pip install" command, podman build creates the image successfully.

podman --version :: podman version 3.2.2

podman info ::

host:
  arch: amd64
  buildahVersion: 1.21.0
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.27-2.fc34.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.27, commit: '
  cpus: 12
  distribution:
    distribution: fedora
    version: "34"
  eventLogger: file
  hostname: cliservice-7dff79cbd7-n7krd
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 10000
      size: 5000
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 10000
      size: 5000
  kernel: 4.18.0-240.22.1.el8_3.x86_64
  linkmode: dynamic
  memFree: 55972347904
  memTotal: 67230187520
  ociRuntime:
    name: crun
    package: crun-0.20.1-1.fc34.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.20.1
      commit: 0d42f1109fd73548f44b01b3e84d04a279e99d2e
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /tmp/podman-run-1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.9-1.fc34.x86_64
    version: |-
      slirp4netns version 1.1.8+dev
      commit: 6dc0186e020232ae1a6fcc1f7afbc3ea02fd3876
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 0
  swapTotal: 0
  uptime: 21h 24m 42.97s (Approximately 0.88 days)
registries:
  default-route-openshift-image-registry.apps.cfa.devcloud.intel.com:
    Blocked: false
    Insecure: true
    Location: default-route-openshift-image-registry.apps.cfa.devcloud.intel.com
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: default-route-openshift-image-registry.apps.cfa.devcloud.intel.com
  quay.io:
    Blocked: false
    Insecure: true
    Location: quay.io
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: quay.io
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
  - quay.io
store:
  configFile: /home/podman/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.5.0-1.fc34.x86_64
      Version: |-
        fusermount3 version: 3.10.4
        fuse-overlayfs: version 1.5
        FUSE library version 3.10.4
        using FUSE kernel interface version 7.31
  graphRoot: /home/podman/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: overlayfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 5
  runRoot: /tmp/podman-run-1000/containers
  volumePath: /home/podman/.local/share/containers/storage/volumes
version:
  APIVersion: 3.2.2
  Built: 1624664959
  BuiltTime: Fri Jun 25 23:49:19 2021
  GitCommit: ""
  GoVersion: go1.16.4
  OsArch: linux/amd64
  Version: 3.2.2

------------------------------------------------------Dockerfile- Start-------------------------------------------

FROM quay.io/podman/stable:latest

RUN touch /etc/subgid /etc/subuid \
 && chmod g=u /etc/subgid /etc/subuid /etc/passwd \
 && echo podman:10000:5000 > /etc/subuid \
 && echo podman:10000:5000 > /etc/subgid

RUN yum install -y \
    python3-pip \
    python3 python3-wheel \
    git \
    java-11-openjdk.x86_64

RUN pip install jupyterlab

ARG MAVEN_VERSION=3.8.1
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries

RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
 && curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
 && tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
 && rm -f /tmp/apache-maven.tar.gz \
 && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn \
 && yum install wget -y \
 && yum install unzip -y \
 && wget -q https://services.gradle.org/distributions/gradle-3.3-bin.zip \
 && unzip gradle-3.3-bin.zip -d /opt \
 && rm gradle-3.3-bin.zip

ENV JAVA_HOME /usr/lib/jvm/jre-11-openjdk/
ENV MAVEN_HOME /usr/share/maven
ENV GRADLE_HOME /opt/gradle-3.3
ENV PATH $PATH:/opt/gradle-3.3/bin

COPY registries.conf /etc/containers/
COPY login-script.sh /etc/containers/
RUN chmod -R 777 /etc/containers/login-script.sh
USER podman

WORKDIR /data

ENTRYPOINT ["/etc/containers/login-script.sh"]

-------------------------------------------Dockerfile End-------------------------------------------

[screenshot: /proc mount error during podman build]

@sachinkaushik (Author)

Hi Team,

Any update on this...?

@baude (Member)

baude commented Jul 7, 2021

@rhatdan @umohnani8 ptal

@flouthoc (Collaborator)

flouthoc commented Jul 8, 2021

Hi @sachinkaushik, is this rootless container being invoked from another rootless/non-root container? Could you try adding --security-opt seccomp=unconfined --cap-add all to your podman command?

@flouthoc (Collaborator)

flouthoc commented Jul 8, 2021

Also, AFAIK the parent container has to be privileged and must mount parts of /proc with the relevant uid/gid for a nested rootless container to be able to mount procfs; I'm not sure about that, though. @sachinkaushik could you please try with privileged: true if the suggestions above don't work?

@sachinkaushik (Author)

sachinkaushik commented Jul 8, 2021

Hi @flouthoc,

Thank you for the response!

We created a container image using the Dockerfile below with the docker build -t . command, and we deployed it to OpenShift, where it runs as a rootless container (we have the podman user in the Dockerfile).

Inside that rootless container we try to build a Python application; only when its Dockerfile contains a pip install command do we get the error mentioned in the issue subject.

------------------------------------Dockerfile start------------------------------------------------------

FROM quay.io/podman/stable:latest

RUN touch /etc/subgid /etc/subuid \
 && chmod g=u /etc/subgid /etc/subuid /etc/passwd \
 && echo podman:10000:5000 > /etc/subuid \
 && echo podman:10000:5000 > /etc/subgid

RUN yum install -y \
    python3-pip \
    python3 python3-wheel

RUN pip install jupyterlab

ENV PATH $PATH:/opt/gradle-3.3/bin

COPY registries.conf /etc/containers/
COPY login-script.sh /etc/containers/
RUN chmod -R 777 /etc/containers/login-script.sh
USER podman

WORKDIR /data

ENTRYPOINT ["/etc/containers/login-script.sh"]

----------------------------------------------Dockerfile end------------------------------------------------------

Note: we have to give the user the least privilege possible.

@flouthoc (Collaborator)

flouthoc commented Jul 8, 2021

@sachinkaushik oh, it's fine if you don't want to try privileged: true, but could you try this: podman build --security-opt seccomp=unconfined --cap-add all -t <image-name> . and tell me the output?

@sachinkaushik (Author)

sachinkaushik commented Jul 8, 2021

Hi @flouthoc,

I just tried it and I'm getting the same error.

podman build --security-opt seccomp=unconfined --cap-add all -t python-image .

STEP 5: RUN pip install -r requirements.txt
error running container: error from /usr/bin/crun creating container for [/bin/sh -c pip install -r requirements.txt]: mount /proc to /proc: Operation not permitted

[screenshot: podman build error output]

I followed the "Rootless Podman without the privileged flag" section of the article below.

https://www.redhat.com/sysadmin/podman-inside-kubernetes

@flouthoc (Collaborator)

flouthoc commented Jul 8, 2021

@sachinkaushik and, just to try it, what happens when you set privileged: true in the pod config?

@sachinkaushik (Author)

@flouthoc We have created an SCC, and in it we have allowPrivilegedContainer: false. Do you want us to set allowPrivilegedContainer to true?

@sachinkaushik (Author)

@flouthoc We tried setting allowPrivilegedContainer to true, but still no luck.

STEP 5: RUN pip install -r requirements.txt
error running container: error from /usr/bin/crun creating container for [/bin/sh -c pip install -r requirements.txt]: mount /proc to /proc: Operation not permitted

@sachinkaushik (Author)

Hi @rhatdan, @flouthoc,

Any update on this?

@flouthoc (Collaborator)

flouthoc commented Jul 9, 2021

@sachinkaushik I was not able to spend time on this yesterday; I will probably re-create this on my end and try a few things. BTW, when you tried allowPrivilegedContainer: true, did you update your defined SCC as well?

@sachinkaushik (Author)

Hi @flouthoc,

Yes, we updated it.

We are using a service account that is bound to a role, and that role has the SCC below.

------------------------------------------------------------SCC Start-------------------------------------
allowHostPorts: false
priority: 10
requiredDropCapabilities:
- MKNOD
- KILL
- DAC_OVERRIDE
- NET_RAW
- SETPCAP
- SETFCAP
- FSETID
- SYS_CHROOT
allowPrivilegedContainer: true
runAsUser:
  type: RunAsAny
users: []
allowHostDirVolumePlugin: true
allowHostIPC: false
seLinuxContext:
  type: MustRunAs
readOnlyRootFilesystem: false
metadata:
  annotations:
    include.release.openshift.io/self-managed-high-availability: 'true'
  creationTimestamp: '2021-05-31T05:35:10Z'
  generation: 4
  name: intel-devcloud-privileged-scc
fsGroup:
  type: MustRunAs
groups:
- 'system:cluster-admins'
kind: SecurityContextConstraints
defaultAddCapabilities: null
supplementalGroups:
  type: RunAsAny
volumes:
- awsElasticBlockStore
- azureDisk
- azureFile
- cephFS
- cinder
- configMap
- csi
- downwardAPI
- emptyDir
- ephemeral
- fc
- flexVolume
- flocker
- gcePersistentDisk
- gitRepo
- glusterfs
- hostPath
- iscsi
- nfs
- persistentVolumeClaim
- photonPersistentDisk
- portworxVolume
- projected
- quobyte
- rbd
- scaleIO
- secret
- storageOS
- vsphere
allowHostPID: false
allowHostNetwork: false
allowPrivilegeEscalation: true
apiVersion: security.openshift.io/v1
allowedCapabilities: null
---------------------------------------------------SCC End--------------------------------------------------------------

@rhatdan (Member)

rhatdan commented Jul 9, 2021

The issue is that the outer container has set up /proc with certain read-only mounts and has mounted over parts of /proc. When Podman runs a container inside it, it tries to modify the /proc mount, and the kernel does not allow this. So you can either use --unmask=/proc/* or --unmask=all on the outside container, or volume mount -v /proc:/proc on the inside container. (I believe.)
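For illustration, a sketch of the two suggestions (the --unmask flag needs a reasonably recent Podman; the image and commands here are illustrative):

# on the outer container: leave /proc unmasked so nested Podman can mount procfs
podman run --unmask=all quay.io/podman/stable sleep infinity

# or on the inner container: reuse the outer /proc instead of mounting a fresh one
podman run -v /proc:/proc quay.io/podman/stable echo ok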

@giuseppe WDYT?

@sachinkaushik (Author)

sachinkaushik commented Jul 12, 2021

Hi @rhatdan / @flouthoc,

I tried the above, but still no luck...

error running container: error from /usr/bin/crun creating container for [/bin/sh -c pip install -r requirements.txt]: mount /proc to /proc: Operation not permitted

I think the problem is with crun: it doesn't have permission to mount /proc.

We have the Dockerfile below and are trying to build a container image from it, but step 5 (RUN pip install -r requirements.txt) gives the error.

-----------------------Dockerfile--------------------

FROM python:3-alpine
MAINTAINER Sachin Sharma
WORKDIR /service
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . ./
EXPOSE 8080
ENTRYPOINT ["python3", "app.py",]

@flouthoc (Collaborator)

flouthoc commented Jul 12, 2021

@sachinkaushik I tried recreating your use case (podman build -t img-python . with one of your repos, https://github.com/sachinkaushik/hello-world-python.git) inside a rootful privileged Podman container started using sudo podman run --privileged quay.io/podman/stable sleep 100000000, but everything worked fine for me. Sharing the complete output of the build inside the container: https://paste.ubuntu.com/p/c4Mh99dScd/

Here is what I did, by the way (consolidated into a single script after the list):

  • Start a privileged container using sudo podman run --privileged quay.io/podman/stable sleep 100000000
  • Exec into container sudo podman exec -it <name> bash
  • Install git sudo dnf install git-all
  • Clone the repo git clone https://github.com/sachinkaushik/hello-world-python.git
  • Build image podman build -t img-python .

Podman version: 3.3.0-dev
Crun version: 0.20.1.17-0b0b
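
The same steps as a single script (a sketch; the -d flag and the container name "outer" are my additions):

sudo podman run -d --name outer --privileged quay.io/podman/stable sleep 100000000
sudo podman exec -it outer bash
# inside the container:
dnf install -y git-all
git clone https://github.com/sachinkaushik/hello-world-python.git
cd hello-world-python
podman build -t img-python .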

@flouthoc (Collaborator)

flouthoc commented Jul 12, 2021

The same happens for a rootless privileged container started using podman run --privileged quay.io/podman/stable sleep 100000000. I am unable to reproduce this; everything works fine for me.

Steps I did:

  • Start a privileged container using podman run --privileged quay.io/podman/stable sleep 100000000
  • Exec into container podman exec -it <name> bash
  • Install git sudo dnf install git-all
  • Clone the repo git clone https://github.com/sachinkaushik/hello-world-python.git
  • Build image podman build -t img-python .

Podman version: 3.3.0-dev
Crun version: 0.20.1.17-0b0b

@sachinkaushik (Author)

sachinkaushik commented Jul 12, 2021

Hi @flouthoc / @rhatdan,

This works as a rootful container on our end as well. But when I run it as a rootless container I get the error below, which is new now. We also have privileged: true in the pod YAML file. Please help me figure out what other configuration I'm missing.

securityContext:
  privileged: true

Error :

STEP 5: RUN pip install -r requirements.txt
error running container: error from /usr/bin/crun creating container for [/bin/sh -c pip install -r requirements.txt]: newgidmap: gid range [0-4294967295) -> [0-4294967295) not allowed
writing file /proc/248/gid_map: Invalid argument

I'm creating the container using the Dockerfile below.


FROM quay.io/podman/stable:latest

RUN yum install -y \
    python3-pip \
    python3 python3-wheel \
    git \
    java-11-openjdk.x86_64

RUN pip install jupyterlab

USER podman

WORKDIR /data

ENTRYPOINT ["jupyter", "lab", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root"]


It works with the root user.

[screenshot: podman build succeeding as root]

@umohnani8 (Member)

@sachinkaushik have you tried the build with the --isolation=chroot flag, as the article says? That should fix the permission denied error you are getting when mounting /proc. The chroot isolation helps bypass some of the mounts that the kernel denies in the default isolation setting. So far, builds in a rootless unprivileged container only work with --isolation=chroot.

Another option would be to run your pod in a user namespace using the CRI-O runtime class method. The steps for doing this are in the article as well.

Article: https://www.redhat.com/sysadmin/podman-inside-kubernetes
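
For reference, a sketch of the two ways to request chroot isolation (the image name is illustrative; BUILDAH_ISOLATION is the environment variable podman build honors via Buildah):

podman build --isolation=chroot -t myimage .

# or set it once for the session
export BUILDAH_ISOLATION=chroot
podman build -t myimage .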

@sachinkaushik (Author)

sachinkaushik commented Jul 12, 2021

@umohnani8 Now I'm not getting the /proc mount error. Instead there is the error below when I run as a rootless container. I have added subuid and subgid entries in the Dockerfile as well, as mentioned in the article.

error running container: error from /usr/bin/crun creating container for [/bin/sh -c pip install -r requirements.txt]: newgidmap: gid range [0-4294967295) -> [0-4294967295) not allowed
writing file /proc/234/gid_map: Invalid argument
: exit status 1


And if I try with the --isolation=chroot flag, using the command below, I get another error.

podman build --isolation chroot -t demo

STEP 5: RUN pip install -r requirements.txt
error setting capabilities for process: error reading capabilities of current process: open /proc/self/status: permission denied
subprocess exited with status 1

Dockerfile :

FROM quay.io/podman/stable:latest

RUN echo umohnani:100000:65536 > /etc/subuid; \
    echo containers:200000:268435456 >> /etc/subuid; \
    echo umohnani:100000:65536 > /etc/subgid; \
    echo containers:200000:268435456 >> /etc/subgid;

RUN yum install -y \
    python3-pip \
    python3 python3-wheel \
    git \
    java-11-openjdk.x86_64

RUN pip install jupyterlab

COPY login-script.sh /etc/containers/
RUN chmod -R 777 /etc/containers/login-script.sh

USER podman

WORKDIR /data

ENTRYPOINT ["jupyter", "lab", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root"]



@flouthoc (Collaborator)

@sachinkaushik Could you please remove these lines from your Containerfile/Dockerfile:

RUN echo umohnani:100000:65536 > /etc/subuid; \
    echo containers:200000:268435456 >> /etc/subuid; \
    echo umohnani:100000:65536 > /etc/subgid; \
    echo containers:200000:268435456 >> /etc/subgid;

and try

@sachinkaushik (Author)

@flouthoc I first tried without the above lines in the Dockerfile for the rootless container and got the error below. Then I tried with the lines, but the error stayed the same.

error running container: error from /usr/bin/crun creating container for [/bin/sh -c pip install -r requirements.txt]: newgidmap: gid range [0-4294967295) -> [0-4294967295) not allowed
writing file /proc/234/gid_map: Invalid argument

@flouthoc (Collaborator)

Could you please share the idmappings of your Podman? You can get them from podman info.

@sachinkaushik (Author)

sachinkaushik commented Jul 13, 2021

@flouthoc Please find the output of podman info below.

podman info ::

host:
  arch: amd64
  buildahVersion: 1.21.0
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.27-2.fc34.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.27, commit: '
  cpus: 12
  distribution:
    distribution: fedora
    version: "34"
  eventLogger: file
  hostname: cliservice-874f8bb78-rt7wt
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 0
      size: 4294967295
    uidmap:
    - container_id: 0
      host_id: 0
      size: 4294967295
  kernel: 4.18.0-240.22.1.el8_3.x86_64
  linkmode: dynamic
  memFree: 52187860992
  memTotal: 67230187520
  ociRuntime:
    name: crun
    package: crun-0.20.1-1.fc34.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.20.1
      commit: 0d42f1109fd73548f44b01b3e84d04a279e99d2e
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.9-1.fc34.x86_64
    version: |-
      slirp4netns version 1.1.8+dev
      commit: 6dc0186e020232ae1a6fcc1f7afbc3ea02fd3876
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 0
  swapTotal: 0
  uptime: 23h 53m 25.03s (Approximately 0.96 days)
registries:
  default-route-openshift-image-registry.apps.cfa.devcloud.intel.com:
    Blocked: false
    Insecure: true
    Location: default-route-openshift-image-registry.apps.cfa.devcloud.intel.com
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: default-route-openshift-image-registry.apps.cfa.devcloud.intel.com
  quay.io:
    Blocked: false
    Insecure: true
    Location: quay.io
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: quay.io
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
  - quay.io
store:
  configFile: /home/podman/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.5.0-1.fc34.x86_64
      Version: |-
        fusermount3 version: 3.10.4
        fuse-overlayfs: version 1.5
        FUSE library version 3.10.4
        using FUSE kernel interface version 7.31
  graphRoot: /home/podman/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: overlayfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 4
  runRoot: /run/user/1000/containers
  volumePath: /home/podman/.local/share/containers/storage/volumes
version:
  APIVersion: 3.2.2
  Built: 1624664959
  BuiltTime: Fri Jun 25 23:49:19 2021
  GitCommit: ""
  GoVersion: go1.16.4
  OsArch: linux/amd64
  Version: 3.2.2

I created the container using the Dockerfile below.

FROM quay.io/podman/stable:latest

RUN yum install -y \
    python3-pip \
    python3 python3-wheel \
    git \
    java-11-openjdk.x86_64

RUN pip install jupyterlab

USER podman

WORKDIR /data

ENTRYPOINT ["jupyter", "lab", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root"]


@flouthoc (Collaborator)

@sachinkaushik try a smaller range, and also add entries for root, by adding this to the Containerfile:

# note: the later entries append (>>); a plain > would overwrite the earlier ones
RUN touch /etc/subgid /etc/subuid \
 && chmod g=u /etc/subgid /etc/subuid /etc/passwd \
 && echo root:165536:65536 > /etc/subuid \
 && echo root:165536:65536 > /etc/subgid \
 && echo containers:165536:65536 >> /etc/subgid \
 && echo containers:165536:65536 >> /etc/subuid \
 && echo podman:10000:5000 >> /etc/subuid \
 && echo podman:10000:5000 >> /etc/subgid

@sachinkaushik (Author)

@flouthoc I tried after adding the above lines to the Dockerfile, but I still get the same error.


FROM quay.io/podman/stable:latest

RUN touch /etc/subgid /etc/subuid \
 && chmod g=u /etc/subgid /etc/subuid /etc/passwd \
 && echo root:165536:65536 > /etc/subuid \
 && echo root:165536:65536 > /etc/subgid \
 && echo containers:165536:65536 >> /etc/subgid \
 && echo containers:165536:65536 >> /etc/subuid \
 && echo podman:10000:5000 >> /etc/subuid \
 && echo podman:10000:5000 >> /etc/subgid

RUN yum install -y \
    python3-pip \
    python3 python3-wheel \
    git \
    java-11-openjdk.x86_64

RUN pip install jupyterlab

USER podman

WORKDIR /data

ENTRYPOINT ["jupyter", "lab", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root"]

podman info :

[podman@cliservice-874f8bb78-xhfds hello-world-python]$ podman info
host:
  arch: amd64
  buildahVersion: 1.21.0
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.27-2.fc34.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.27, commit: '
  cpus: 12
  distribution:
    distribution: fedora
    version: "34"
  eventLogger: file
  hostname: cliservice-874f8bb78-xhfds
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 0
      size: 4294967295
    uidmap:
    - container_id: 0
      host_id: 0
      size: 4294967295
  kernel: 4.18.0-240.22.1.el8_3.x86_64
  linkmode: dynamic
  memFree: 50346074112
  memTotal: 67230187520
  ociRuntime:
    name: crun
    package: crun-0.20.1-1.fc34.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.20.1
      commit: 0d42f1109fd73548f44b01b3e84d04a279e99d2e
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.9-1.fc34.x86_64
    version: |-
      slirp4netns version 1.1.8+dev
      commit: 6dc0186e020232ae1a6fcc1f7afbc3ea02fd3876
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 0
  swapTotal: 0
  uptime: 24h 36m 38.11s (Approximately 1.00 days)
registries:
  default-route-openshift-image-registry.apps.cfa.devcloud.intel.com:
    Blocked: false
    Insecure: true
    Location: default-route-openshift-image-registry.apps.cfa.devcloud.intel.com
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: default-route-openshift-image-registry.apps.cfa.devcloud.intel.com
  quay.io:
    Blocked: false
    Insecure: true
    Location: quay.io
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: quay.io
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
  - quay.io
store:
  configFile: /home/podman/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.5.0-1.fc34.x86_64
      Version: |-
        fusermount3 version: 3.10.4
        fuse-overlayfs: version 1.5
        FUSE library version 3.10.4
        using FUSE kernel interface version 7.31
  graphRoot: /home/podman/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: overlayfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 4
  runRoot: /run/user/1000/containers
  volumePath: /home/podman/.local/share/containers/storage/volumes
version:
  APIVersion: 3.2.2
  Built: 1624664959
  BuiltTime: Fri Jun 25 23:49:19 2021
  GitCommit: ""
  GoVersion: go1.16.4
  OsArch: linux/amd64
  Version: 3.2.2

@sachinkaushik (Author)

@flouthoc @umohnani8 any update on this?

@umohnani8 (Member)

@sachinkaushik catching up on the conversation here: are you running your pod with privileged set to true or false? The initial issue report says you were trying to run builds in an unprivileged pod, but in #10864 (comment) you are setting privileged to true, hence running the pod as privileged.

Can you please share your pod yaml? It will help me understand what scenario you are trying to run.

@sachinkaushik (Author)

sachinkaushik commented Jul 15, 2021

@umohnani8 Please find the pod and deployment YAML files attached below. I have also attached the Containerfile/Dockerfile.

We want to run a rootless container without the privileged flag.

Deployment.YAML :

DEPLOYMENT.txt

Pod.YAML:

POD.txt

Dockerfile :

Dockerfile.txt

@sachinkaushik (Author)

Hi @flouthoc / @umohnani8,

Is there any way to run a rootless container without the config below in the deployment YAML?

securityContext:
  privileged: true

We don't want to run the container with the privileged flag. If we remove this from the deployment YAML, we get the error below.

Error: invalid configuration: the specified mapping 1000:1 in "/etc/subuid" includes the user UID
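
(A hedged aside: Podman rejects a subordinate range that contains the invoking user's own UID, and podman:1000:1 is exactly UID 1000. A sketch of an entry that avoids this, with an arbitrary high range:)

# /etc/subuid and /etc/subgid: start the subordinate range above the user's own UID (1000)
podman:100000:65536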

Thank you in advance.

@sachinkaushik (Author)

Hi @flouthoc, @umohnani8, @rhatdan,

Any update on this..?

@sachinkaushik (Author)

Hi @flouthoc / @umohnani8,

I tried removing privileged: true from the deployment YAML file, and I also removed the subuid and subgid entries from the Containerfile; then I get the error below.

Error: cannot setup namespace using newuidmap: exit status 1

----------------------------------Containerfile-----------------------------------------

FROM quay.io/podman/stable:latest

RUN echo "export isolation=chroot" >> /home/podman/.bashrc

COPY login-script.sh /etc/containers/
RUN chmod -R 777 /etc/containers/login-script.sh

USER podman

WORKDIR /data

ENTRYPOINT ["/etc/containers/login-script.sh"]
-------------------------------------------------------------Containerfile end--------------------

I removed the mapping below from the Containerfile.

RUN touch /etc/subgid /etc/subuid \
 && chmod g=u /etc/subgid /etc/subuid /etc/passwd \
 && echo podman:1000:1 > /etc/subuid \
 && echo podman:1000:5000 > /etc/subgid

I have already followed the "Rootless Podman without the privileged flag" section of https://www.redhat.com/sysadmin/podman-inside-kubernetes, but nothing is working.

@flouthoc (Collaborator)

@sachinkaushik What is the error after these steps? And does it work as soon as you add privileged: true? I guess we last left off here: #10864 (comment)

@sachinkaushik (Author)

Hi @flouthoc

We reviewed that implementation with the security team, and per them we cannot give privileged: true to the container, as a security breach could happen.

After removing privileged: true, we get the error below, even though we use the same id mapping in the Containerfile.

mount /proc to /proc: Operation not permitted

RUN touch /etc/subgid /etc/subuid \
 && chmod g=u /etc/subgid /etc/subuid /etc/passwd \
 && echo podman:1000:1 > /etc/subuid \
 && echo podman:1000:5000 > /etc/subgid

@umohnani8 (Member)

@sachinkaushik I looked at the pod yaml and Dockerfile you shared and tried out your use case on an OpenShift 4.8 cluster.

First I built a new image with the python libraries installed (I simplified it a bit):

FROM quay.io/podman/stable:latest
RUN yum install -y python3-pip python3 python3-wheel
RUN pip install jupyterlab
ENV PATH $PATH:/opt/gradle-3.3/bin

There shouldn't be a need to modify the /etc/subuid and /etc/subgid files; those should already exist with the correct ranges.
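
(A quick way to verify this from inside the container, as a sketch; podman unshare prints the mapping the rootless user actually ends up with:)

cat /etc/subuid /etc/subgid
podman unshare cat /proc/self/uid_map /proc/self/gid_map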

Then I created a rootless unprivileged pod with the image I built from the above Dockerfile:

apiVersion: v1
kind: Pod
metadata:
  name: no-priv-1
spec:
  containers:
    - name: no-priv
      image: quay.io/umohnani8/mypodman
      args:
        - sleep
        - "1000000"
      securityContext:
        runAsUser: 1000
      resources:
        limits:
          github.com/fuse: 1

I exec'd into the pod above and built a Dockerfile similar to what you shared:

FROM python:3-alpine
WORKDIR /service
RUN pip install flask
EXPOSE 8080

sh-5.1$ podman build --isolation chroot -t test .
STEP 1: FROM python:3-alpine
STEP 2: WORKDIR /service
--> d71fda62a86
STEP 3: RUN pip install flask
Collecting flask
  Downloading Flask-2.0.1-py3-none-any.whl (94 kB)
...
--> 4de146f216b
STEP 4: EXPOSE 8080
STEP 5: COMMIT test
--> cc489e4b05a
Successfully tagged localhost/test:latest
cc489e4b05ad3f13181f09bba745e42a21a90945fb3d7978694e9ebbf166da68

sh-5.1$ cat /etc/subuid
podman:10000:5000
sh-5.1$ cat /etc/subgid
podman:10000:5000

Can you please give me a little more information about the platform you are running this on (OpenShift version, and anything specific about the SCCs and policies being used)? Can you also try something simple like what I shared above and see if that works for you? I looked at the pod YAML you shared and nothing looks wrong with it. When using a rootless unprivileged container, you need to use --isolation chroot when doing builds, as the kernel blocks a lot of the permissions builds require in the default isolation setting.

@giuseppe any idea what could be causing the /proc permission denied error? I am running a similar build within a podman container and I am not seeing that issue.

@sachinkaushik (Author)

sachinkaushik commented Aug 3, 2021

Hi @umohnani8,

I tried with your container image (quay.io/umohnani8/mypodman) and also added runAsUser: 1000 to the deployment YAML file, but we are still getting the same /proc error.

We are using OpenShift version 4.7.16.

We are using only the SCC that I have already shared with you. I have also added --isolation chroot in the Containerfile, so we don't need to pass it explicitly:

RUN echo "export isolation=chroot" >> /home/podman/.bashrc

@umohnani8 (Member)

umohnani8 commented Aug 3, 2021

@sachinkaushik I think I know what your issue is; it looks like you are setting SELinux labels in your pod:

securityContext:
    seLinuxOptions:
      level: 's0:c40,c35'
    fsGroup: 1001630000

You need to disable SELinux for Podman inside a k8s pod to work. Please remove the SELinux label and try again. The two things you need in order to run Podman inside a container in unprivileged mode are the /dev/fuse device and SELinux disabled.

@sachinkaushik (Author)

@umohnani8

We have changed the seLinuxContext and fsGroup types from MustRunAs to RunAsAny. After making that change there are no seLinuxOptions in the pod YAML file. We have also checked that SELinux is already disabled, and we have created the fuse-device DaemonSet as mentioned in the link below.

https://www.redhat.com/sysadmin/podman-inside-kubernetes (Rootless Podman without the privileged flag)

Attaching Pod and SCC yaml file.

Pod Yaml :

CLI-POD.txt

SCC Yaml :

privileged-scc.txt

Please find a screenshot from the host machine below.

[screenshot: SELinux status on the host]

Please guide us on what else we need to debug here.

Thank you for the response.

@sachinkaushik (Author)

Hi @umohnani8 / @flouthoc,

Any update on this..?

@giuseppe (Member)

Is there a /proc mount fully visible in the container?

The kernel doesn't allow mounting a new /proc file system in a user namespace unless there is already a fully visible /proc mounted.

Can you show the output of cat /proc/self/mountinfo?
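
(A quick check, as a sketch; any extra mounts at or below /proc besides /proc itself mean procfs is not fully visible:)

findmnt -R /proc    # lists /proc and everything mounted on top of it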

@sachinkaushik (Author)

sachinkaushik commented Aug 18, 2021

@giuseppe Please find the output of cat /proc/self/mountinfo below. This is the output when we have privileged: true added.

33571 25829 0:2565 / / rw,relatime - overlay overlay rw,context="system_u:object_r:container_file_t:s0:c474,c971",lowerdir=/var/lib/containers/storage/overlay/l/4HFDQQUSKVBDJDYHIV5VDR76ND:/var/lib/containers/storage/overlay/l/WNM7J3ERZOVHKOSPZZEXRUX2L5:/var/lib/containers/storage/overlay/l/4C3MPPI6SMJBWYV5IORJK5PZDE:/var/lib/containers/storage/overlay/l/WFVHUYAFWCMYGCJTVC7RVZMFIJ:/var/lib/containers/storage/overlay/l/TT55GN6RYIL4DEFMYSN2UVCES4:/var/lib/containers/storage/overlay/l/U5MHN3OELOHMCD2MEMXFFXBQMM:/var/lib/containers/storage/overlay/l/I2XFSBQQC7SWRQIA7JB5RX22YK:/var/lib/containers/storage/overlay/l/CV7IOHBSUJQ2YKSW5FICJO4DRB:/var/lib/containers/storage/overlay/l/CRH7GKRPPHMFWJIE2SDYOSPP4O:/var/lib/containers/storage/overlay/l/WB2HS5WGPDZ3O7U5TVJ32KXUVD:/var/lib/containers/storage/overlay/l/SIEFPXEP7QSUP4SVTY77M3NLNK:/var/lib/containers/storage/overlay/l/64PYQNEZ6H6DMIAKHN5MY2OUMX:/var/lib/containers/storage/overlay/l/KPLFEVBXFSTZ77A7QPNA7XFREQ:/var/lib/containers/storage/overlay/l/ZVP5XBA4YQL6J6V2RPY5VOOSZS:/var/lib/containers/storage/overlay/l/HAYNKPRFEJAOTRIW35HX644ADL:/var/lib/containers/storage/overlay/l/TTFVIV3P5EA4OEQTH62IBFFOZX:/var/lib/containers/storage/overlay/l/CKTMNM3NOTSJX7ASDPYZHF4YOZ:/var/lib/containers/storage/overlay/l/TNA227WBQUZOH2WWP4SHIWMVX2:/var/lib/containers/storage/overlay/l/MI7A27HZOKCWRRDNUZJ4SDHHAX:/var/lib/containers/storage/overlay/l/U5Z36ICBD6DQ36HPVYFX4FOSMV:/var/lib/containers/storage/overlay/l/DYJEAVVWK3CAHDDSWETHAFCJJB:/var/lib/containers/storage/overlay/l/3ZCHJAZSPUU6DRAJHMRCXMNZNB:/var/lib/containers/storage/overlay/l/4ATHCTWKQVDBK7FPTBVX3ONUNM:/var/lib/containers/storage/overlay/l/DDML7PV4HWPYHRNGV37WP55D5C:/var/lib/containers/storage/overlay/l/U5CI32VJIZCCZO2XAPU32IHYS4:/var/lib/containers/storage/overlay/l/4FSVVT4DKEWLTTNFKNRTMJHWBF:/var/lib/containers/storage/overlay/l/SRJXPWUOCAZ7RJZOQ7RL6T3ELP:/var/lib/containers/storage/overlay/l/NMOWU6W4XMDOLGLRJ5JCUORECW:/var/lib/containers/storage/overlay/l/BKE3KVO56COK6LZEJSKK3Q7MJU,upperdir=/var/lib/containers/storage/overlay/8b715d787d68cfb1f281abb4b95905396f5a167d387ea28a108881327642f820/diff,workdir=/var/lib/containers/storage/overlay/8b715d787d68cfb1f281abb4b95905396f5a167d387ea28a108881327642f820/work
33572 33571 0:2850 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw
33573 33571 0:2851 / /dev rw,nosuid - tmpfs tmpfs rw,context="system_u:object_r:container_file_t:s0:c474,c971",size=65536k,mode=755
33574 33573 0:2854 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,context="system_u:object_r:container_file_t:s0:c474,c971",gid=5,mode=620,ptmxmode=666
33575 33573 0:2236 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw,seclabel
33576 33571 0:2558 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs ro,seclabel
33577 33576 0:2855 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,context="system_u:object_r:container_file_t:s0:c474,c971",mode=755
33578 33577 0:26 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f8443c7_977a_4e62_b31c_d1f52d507136.slice/crio-8587d7cbea3f7bd3b57b9a7a722095e16e7d314f082b41c43001c3ff2b7df8fc.scope /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime master:8 - cgroup cgroup rw,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
33579 33577 0:30 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f8443c7_977a_4e62_b31c_d1f52d507136.slice/crio-8587d7cbea3f7bd3b57b9a7a722095e16e7d314f082b41c43001c3ff2b7df8fc.scope /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime master:9 - cgroup cgroup rw,seclabel,freezer
33580 33577 0:31 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f8443c7_977a_4e62_b31c_d1f52d507136.slice/crio-8587d7cbea3f7bd3b57b9a7a722095e16e7d314f082b41c43001c3ff2b7df8fc.scope /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime master:10 - cgroup cgroup rw,seclabel,net_cls,net_prio
33581 33577 0:32 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f8443c7_977a_4e62_b31c_d1f52d507136.slice/crio-8587d7cbea3f7bd3b57b9a7a722095e16e7d314f082b41c43001c3ff2b7df8fc.scope /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime master:11 - cgroup cgroup rw,seclabel,blkio
33582 33577 0:33 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f8443c7_977a_4e62_b31c_d1f52d507136.slice/crio-8587d7cbea3f7bd3b57b9a7a722095e16e7d314f082b41c43001c3ff2b7df8fc.scope /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime master:12 - cgroup cgroup rw,seclabel,devices
33583 33577 0:34 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f8443c7_977a_4e62_b31c_d1f52d507136.slice/crio-8587d7cbea3f7bd3b57b9a7a722095e16e7d314f082b41c43001c3ff2b7df8fc.scope /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime master:13 - cgroup cgroup rw,seclabel,hugetlb
33584 33577 0:35 / /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime master:14 - cgroup cgroup rw,seclabel,rdma
33585 33577 0:36 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f8443c7_977a_4e62_b31c_d1f52d507136.slice/crio-8587d7cbea3f7bd3b57b9a7a722095e16e7d314f082b41c43001c3ff2b7df8fc.scope /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime master:15 - cgroup cgroup rw,seclabel,cpu,cpuacct
33586 33577 0:37 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f8443c7_977a_4e62_b31c_d1f52d507136.slice/crio-8587d7cbea3f7bd3b57b9a7a722095e16e7d314f082b41c43001c3ff2b7df8fc.scope /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime master:16 - cgroup cgroup rw,seclabel,perf_event
33587 33577 0:38 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f8443c7_977a_4e62_b31c_d1f52d507136.slice/crio-8587d7cbea3f7bd3b57b9a7a722095e16e7d314f082b41c43001c3ff2b7df8fc.scope /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,seclabel,cpuset
33588 33577 0:39 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f8443c7_977a_4e62_b31c_d1f52d507136.slice/crio-8587d7cbea3f7bd3b57b9a7a722095e16e7d314f082b41c43001c3ff2b7df8fc.scope /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,seclabel,pids
33589 33577 0:40 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f8443c7_977a_4e62_b31c_d1f52d507136.slice/crio-8587d7cbea3f7bd3b57b9a7a722095e16e7d314f082b41c43001c3ff2b7df8fc.scope /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime master:19 - cgroup cgroup rw,seclabel,memory
33590 33573 0:2227 / /dev/shm rw,nosuid,nodev,noexec,relatime master:5024 - tmpfs shm rw,context="system_u:object_r:container_file_t:s0:c474,c971",size=65536k
33591 33571 0:24 /containers/storage/overlay-containers/19a3e2aa91c835a1e3c902b0fc01705d100f6043d0243468f841a781d1d7079a/userdata/resolv.conf /etc/resolv.conf rw,nosuid,nodev,noexec master:29 - tmpfs tmpfs rw,seclabel,mode=755
33592 33571 0:24 /containers/storage/overlay-containers/19a3e2aa91c835a1e3c902b0fc01705d100f6043d0243468f841a781d1d7079a/userdata/hostname /etc/hostname rw,nosuid,nodev master:29 - tmpfs tmpfs rw,seclabel,mode=755
33593 33571 0:2222 / /data rw,relatime - ceph 172.30.5.86:6789,172.30.100.33:6789,172.30.197.35:6789:/volumes/csi/csi-vol-068b6001-fff3-11eb-b57e-0a580a850613/59d115ee-cbb1-4066-8593-ff5b2c3689d6 rw,seclabel,name=csi-cephfs-node,secret=,acl,mds_namespace=ocs-storagecluster-cephfilesystem
33594 33571 8:4 /ostree/deploy/rhcos/var/lib/kubelet/pods/4f8443c7-977a-4e62-b31c-d1f52d507136/etc-hosts /etc/hosts rw,relatime - xfs /dev/sda4 rw,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota
33595 33573 8:4 /ostree/deploy/rhcos/var/lib/kubelet/pods/4f8443c7-977a-4e62-b31c-d1f52d507136/containers/cliservice-devcloud-dev/554cbced /dev/termination-log rw,relatime - xfs /dev/sda4 rw,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota
33596 33571 0:24 /containers/storage/overlay-containers/8587d7cbea3f7bd3b57b9a7a722095e16e7d314f082b41c43001c3ff2b7df8fc/userdata/run/secrets /run/secrets rw,nosuid,nodev - tmpfs tmpfs rw,seclabel,mode=755
33597 33596 0:2162 / /run/secrets/kubernetes.io/serviceaccount ro,relatime - tmpfs tmpfs rw,seclabel

@giuseppe (Member)

thanks for sharing it! This is the configuration where you see the /proc mount error?

From what I can see the mount table looks fine.

@sachinkaushik (Author)

sachinkaushik commented Aug 18, 2021

Hi @giuseppe, below is the output of cat /proc/self/mountinfo when we remove privileged: true from the deployment YAML.

I just ran podman build -t img . and now we are getting the error below.

Error: invalid configuration: the specified mapping 1000:1 in "/etc/subuid" includes the user UID

33824 29520 0:3015 / / rw,relatime - overlay overlay rw,context="system_u:object_r:container_file_t:s0:c126,c1014",lowerdir=/var/lib/containers/storage/overlay/l/4HFDQQUSKVBDJDYHIV5VDR76ND:/var/lib/containers/storage/overlay/l/WNM7J3ERZOVHKOSPZZEXRUX2L5:/var/lib/containers/storage/overlay/l/4C3MPPI6SMJBWYV5IORJK5PZDE:/var/lib/containers/storage/overlay/l/WFVHUYAFWCMYGCJTVC7RVZMFIJ:/var/lib/containers/storage/overlay/l/TT55GN6RYIL4DEFMYSN2UVCES4:/var/lib/containers/storage/overlay/l/U5MHN3OELOHMCD2MEMXFFXBQMM:/var/lib/containers/storage/overlay/l/I2XFSBQQC7SWRQIA7JB5RX22YK:/var/lib/containers/storage/overlay/l/CV7IOHBSUJQ2YKSW5FICJO4DRB:/var/lib/containers/storage/overlay/l/CRH7GKRPPHMFWJIE2SDYOSPP4O:/var/lib/containers/storage/overlay/l/WB2HS5WGPDZ3O7U5TVJ32KXUVD:/var/lib/containers/storage/overlay/l/SIEFPXEP7QSUP4SVTY77M3NLNK:/var/lib/containers/storage/overlay/l/64PYQNEZ6H6DMIAKHN5MY2OUMX:/var/lib/containers/storage/overlay/l/KPLFEVBXFSTZ77A7QPNA7XFREQ:/var/lib/containers/storage/overlay/l/ZVP5XBA4YQL6J6V2RPY5VOOSZS:/var/lib/containers/storage/overlay/l/HAYNKPRFEJAOTRIW35HX644ADL:/var/lib/containers/storage/overlay/l/TTFVIV3P5EA4OEQTH62IBFFOZX:/var/lib/containers/storage/overlay/l/CKTMNM3NOTSJX7ASDPYZHF4YOZ:/var/lib/containers/storage/overlay/l/TNA227WBQUZOH2WWP4SHIWMVX2:/var/lib/containers/storage/overlay/l/MI7A27HZOKCWRRDNUZJ4SDHHAX:/var/lib/containers/storage/overlay/l/U5Z36ICBD6DQ36HPVYFX4FOSMV:/var/lib/containers/storage/overlay/l/DYJEAVVWK3CAHDDSWETHAFCJJB:/var/lib/containers/storage/overlay/l/3ZCHJAZSPUU6DRAJHMRCXMNZNB:/var/lib/containers/storage/overlay/l/4ATHCTWKQVDBK7FPTBVX3ONUNM:/var/lib/containers/storage/overlay/l/DDML7PV4HWPYHRNGV37WP55D5C:/var/lib/containers/storage/overlay/l/U5CI32VJIZCCZO2XAPU32IHYS4:/var/lib/containers/storage/overlay/l/4FSVVT4DKEWLTTNFKNRTMJHWBF:/var/lib/containers/storage/overlay/l/SRJXPWUOCAZ7RJZOQ7RL6T3ELP:/var/lib/containers/storage/overlay/l/NMOWU6W4XMDOLGLRJ5JCUORECW:/var/lib/containers/storage/overlay/l/BKE3KVO56COK6LZEJSKK3Q7MJU,upperdir=/var/lib/containers/storage/overlay/ab2b323a40fc413e9f8623f751e4402930da59aee723ef21d3138f584280c735/diff,workdir=/var/lib/containers/storage/overlay/ab2b323a40fc413e9f8623f751e4402930da59aee723ef21d3138f584280c735/work
33825 33824 0:3017 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw
33826 33824 0:3019 / /dev rw,nosuid - tmpfs tmpfs rw,context="system_u:object_r:container_file_t:s0:c126,c1014",size=65536k,mode=755
33827 33826 0:3020 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,context="system_u:object_r:container_file_t:s0:c126,c1014",gid=5,mode=620,ptmxmode=666
33829 33826 0:3008 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw,seclabel
33839 33824 0:3014 / /sys ro,nosuid,nodev,noexec,relatime - sysfs sysfs ro,seclabel
33840 33839 0:3021 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,context="system_u:object_r:container_file_t:s0:c126,c1014",mode=755
33842 33840 0:26 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae881ec1_defb_43f1_8dbe_91233c6f4335.slice/crio-45adb799be8497065799171d3f702fdf7a33a03106fa42b63a1f0b448c3f7706.scope /sys/fs/cgroup/systemd ro,nosuid,nodev,noexec,relatime master:8 - cgroup cgroup rw,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
33843 33840 0:30 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae881ec1_defb_43f1_8dbe_91233c6f4335.slice/crio-45adb799be8497065799171d3f702fdf7a33a03106fa42b63a1f0b448c3f7706.scope /sys/fs/cgroup/freezer ro,nosuid,nodev,noexec,relatime master:9 - cgroup cgroup rw,seclabel,freezer
33844 33840 0:31 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae881ec1_defb_43f1_8dbe_91233c6f4335.slice/crio-45adb799be8497065799171d3f702fdf7a33a03106fa42b63a1f0b448c3f7706.scope /sys/fs/cgroup/net_cls,net_prio ro,nosuid,nodev,noexec,relatime master:10 - cgroup cgroup rw,seclabel,net_cls,net_prio
33845 33840 0:32 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae881ec1_defb_43f1_8dbe_91233c6f4335.slice/crio-45adb799be8497065799171d3f702fdf7a33a03106fa42b63a1f0b448c3f7706.scope /sys/fs/cgroup/blkio ro,nosuid,nodev,noexec,relatime master:11 - cgroup cgroup rw,seclabel,blkio
33846 33840 0:33 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae881ec1_defb_43f1_8dbe_91233c6f4335.slice/crio-45adb799be8497065799171d3f702fdf7a33a03106fa42b63a1f0b448c3f7706.scope /sys/fs/cgroup/devices ro,nosuid,nodev,noexec,relatime master:12 - cgroup cgroup rw,seclabel,devices
33847 33840 0:34 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae881ec1_defb_43f1_8dbe_91233c6f4335.slice/crio-45adb799be8497065799171d3f702fdf7a33a03106fa42b63a1f0b448c3f7706.scope /sys/fs/cgroup/hugetlb ro,nosuid,nodev,noexec,relatime master:13 - cgroup cgroup rw,seclabel,hugetlb
33848 33840 0:35 / /sys/fs/cgroup/rdma ro,nosuid,nodev,noexec,relatime master:14 - cgroup cgroup rw,seclabel,rdma
33849 33840 0:36 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae881ec1_defb_43f1_8dbe_91233c6f4335.slice/crio-45adb799be8497065799171d3f702fdf7a33a03106fa42b63a1f0b448c3f7706.scope /sys/fs/cgroup/cpu,cpuacct ro,nosuid,nodev,noexec,relatime master:15 - cgroup cgroup rw,seclabel,cpu,cpuacct
33850 33840 0:37 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae881ec1_defb_43f1_8dbe_91233c6f4335.slice/crio-45adb799be8497065799171d3f702fdf7a33a03106fa42b63a1f0b448c3f7706.scope /sys/fs/cgroup/perf_event ro,nosuid,nodev,noexec,relatime master:16 - cgroup cgroup rw,seclabel,perf_event
33851 33840 0:38 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae881ec1_defb_43f1_8dbe_91233c6f4335.slice/crio-45adb799be8497065799171d3f702fdf7a33a03106fa42b63a1f0b448c3f7706.scope /sys/fs/cgroup/cpuset ro,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,seclabel,cpuset
33852 33840 0:39 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae881ec1_defb_43f1_8dbe_91233c6f4335.slice/crio-45adb799be8497065799171d3f702fdf7a33a03106fa42b63a1f0b448c3f7706.scope /sys/fs/cgroup/pids ro,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,seclabel,pids
33853 33840 0:40 /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae881ec1_defb_43f1_8dbe_91233c6f4335.slice/crio-45adb799be8497065799171d3f702fdf7a33a03106fa42b63a1f0b448c3f7706.scope /sys/fs/cgroup/memory ro,nosuid,nodev,noexec,relatime master:19 - cgroup cgroup rw,seclabel,memory
33854 33826 0:3007 / /dev/shm rw,nosuid,nodev,noexec,relatime master:7824 - tmpfs shm rw,context="system_u:object_r:container_file_t:s0:c126,c1014",size=65536k
33855 33824 0:24 /containers/storage/overlay-containers/2615507510fac63dc12657cc9f749dd5e0ed1e226b8ebe70810b93d3e020f1be/userdata/resolv.conf /etc/resolv.conf rw,nosuid,nodev,noexec master:29 - tmpfs tmpfs rw,seclabel,mode=755
33856 33824 0:24 /containers/storage/overlay-containers/2615507510fac63dc12657cc9f749dd5e0ed1e226b8ebe70810b93d3e020f1be/userdata/hostname /etc/hostname rw,nosuid,nodev master:29 - tmpfs tmpfs rw,seclabel,mode=755
33857 33824 0:2222 / /data rw,relatime - ceph 172.30.5.86:6789,172.30.100.33:6789,172.30.197.35:6789:/volumes/csi/csi-vol-068b6001-fff3-11eb-b57e-0a580a850613/59d115ee-cbb1-4066-8593-ff5b2c3689d6 rw,seclabel,name=csi-cephfs-node,secret=,acl,mds_namespace=ocs-storagecluster-cephfilesystem
33858 33824 8:4 /ostree/deploy/rhcos/var/lib/kubelet/pods/ae881ec1-defb-43f1-8dbe-91233c6f4335/etc-hosts /etc/hosts rw,relatime - xfs /dev/sda4 rw,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota
33859 33826 8:4 /ostree/deploy/rhcos/var/lib/kubelet/pods/ae881ec1-defb-43f1-8dbe-91233c6f4335/containers/cliservice-devcloud-dev/535c42b8 /dev/termination-log rw,relatime - xfs /dev/sda4 rw,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota
33860 33824 0:24 /containers/storage/overlay-containers/45adb799be8497065799171d3f702fdf7a33a03106fa42b63a1f0b448c3f7706/userdata/run/secrets /run/secrets rw,nosuid,nodev - tmpfs tmpfs rw,seclabel,mode=755
33861 33860 0:3006 / /run/secrets/kubernetes.io/serviceaccount ro,relatime - tmpfs tmpfs rw,seclabel
29521 33825 0:3017 /bus /proc/bus ro,relatime - proc proc rw
29522 33825 0:3017 /fs /proc/fs ro,relatime - proc proc rw
29817 33825 0:3017 /irq /proc/irq ro,relatime - proc proc rw
29818 33825 0:3017 /sys /proc/sys ro,relatime - proc proc rw
29819 33825 0:3017 /sysrq-trigger /proc/sysrq-trigger ro,relatime - proc proc rw
29820 33825 0:3022 / /proc/acpi ro,relatime - tmpfs tmpfs ro,context="system_u:object_r:container_file_t:s0:c126,c1014"
29821 33825 0:3019 /null /proc/kcore rw,nosuid - tmpfs tmpfs rw,context="system_u:object_r:container_file_t:s0:c126,c1014",size=65536k,mode=755
29822 33825 0:3019 /null /proc/keys rw,nosuid - tmpfs tmpfs rw,context="system_u:object_r:container_file_t:s0:c126,c1014",size=65536k,mode=755
29823 33825 0:3019 /null /proc/timer_list rw,nosuid - tmpfs tmpfs rw,context="system_u:object_r:container_file_t:s0:c126,c1014",size=65536k,mode=755
29824 33825 0:3019 /null /proc/sched_debug rw,nosuid - tmpfs tmpfs rw,context="system_u:object_r:container_file_t:s0:c126,c1014",size=65536k,mode=755
29825 33825 0:3023 / /proc/scsi ro,relatime - tmpfs tmpfs ro,context="system_u:object_r:container_file_t:s0:c126,c1014"
29826 33839 0:3024 / /sys/firmware ro,relatime - tmpfs tmpfs ro,context="system_u:object_r:container_file_t:s0:c126,c1014"

@giuseppe (Member)

there are a bunch of mounts that cover /proc:

29521 33825 0:3017 /bus /proc/bus ro,relatime - proc proc rw
29522 33825 0:3017 /fs /proc/fs ro,relatime - proc proc rw
29817 33825 0:3017 /irq /proc/irq ro,relatime - proc proc rw
29818 33825 0:3017 /sys /proc/sys ro,relatime - proc proc rw
29820 33825 0:3022 / /proc/acpi ro,relatime - tmpfs tmpfs ro,context="system_u:object_r:container_file_t:s0:c126,c1014"
29821 33825 0:3019 /null /proc/kcore rw,nosuid - tmpfs tmpfs rw,context="system_u:object_r:container_file_t:s0:c126,c1014",size=65536k,mode=755
29822 33825 0:3019 /null /proc/keys rw,nosuid - tmpfs tmpfs rw,context="system_u:object_r:container_file_t:s0:c126,c1014",size=65536k,mode=755
29823 33825 0:3019 /null /proc/timer_list rw,nosuid - tmpfs tmpfs rw,context="system_u:object_r:container_file_t:s0:c126,c1014",size=65536k,mode=755
29824 33825 0:3019 /null /proc/sched_debug rw,nosuid - tmpfs tmpfs rw,context="system_u:object_r:container_file_t:s0:c126,c1014",size=65536k,mode=755
29825 33825 0:3023 / /proc/scsi ro,relatime - tmpfs tmpfs ro,context="system_u:object_r:container_file_t:s0:c126,c1014"

Even if you manage to solve the newuidmap/newgidmap problem and avoid having the UID be part of the subuids, you still won't be able to run a container with a separate PID namespace, because the kernel won't allow it. There must be a fully visible procfs mount (i.e. no other mounts covering part of it) to be able to mount a new procfs from a user namespace.
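
This is straightforward to test without Podman (a sketch using util-linux unshare; inside a pod whose /proc has covered paths it fails with Operation not permitted, while on a normal host it succeeds):

unshare --user --map-root-user --mount --pid --fork --mount-proc true \
  && echo 'procfs mount allowed' || echo 'procfs mount blocked'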

@giuseppe (Member)

giuseppe commented Aug 18, 2021

in other words, you need to run the pod privileged.

Or if you use CRI-O, use the CRI-O annotations to create a user namespace: https://www.redhat.com/sysadmin/podman-inside-kubernetes (paragraph Podman in a locked-down container using user namespaces in Kubernetes)
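
From that article, the relevant piece is a pod annotation along these lines (a sketch; it requires a cluster whose CRI-O is configured to allow user-namespace annotations):

apiVersion: v1
kind: Pod
metadata:
  name: podman-userns
  annotations:
    io.kubernetes.cri-o.userns-mode: "auto:size=65536"
spec:
  containers:
  - name: podman
    image: quay.io/podman/stable
    args: ["sleep", "1000000"]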

@sachinkaushik (Author)

@giuseppe we don't want to run the container from the CLI; we just want to create the container image, which is our requirement. We have to push that container image to the OCR registry, but we don't allow users to run that container image.

@giuseppe (Member)

aren't you trying to build the image from a Kubernetes pod?

@sachinkaushik (Author)

sachinkaushik commented Aug 18, 2021

@giuseppe we have the Containerfile below; using it we create a container image, and we create a deployment from that image in OpenShift. So for this deployment we have a pod running in OpenShift, and we have exposed a route for the service to access JupyterLab.

FROM quay.io/podman/stable:latest

RUN touch /etc/subgid /etc/subuid \
 && chmod g=u /etc/subgid /etc/subuid /etc/passwd \
 && echo podman:1000:1 > /etc/subuid \
 && echo podman:1000:5000 > /etc/subgid

RUN echo "export isolation=chroot" >> /home/podman/.bashrc

RUN yum install -y \
    python3-pip \
    python3 python3-wheel \
    git \
    java-11-openjdk.x86_64

RUN pip install jupyterlab

USER podman
WORKDIR /home/podman/

ENTRYPOINT ["jupyter", "lab", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root"]

@giuseppe (Member)

You still need to be able to run a container in order to deal with the RUN statement in the Containerfile.

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@giuseppe (Member)

I am closing this issue since there was no feedback for more than a month. Please reopen if you have more comments.

@NewtonTrendy

NewtonTrendy commented Aug 17, 2023

Just wanted to point out that the "path does not exist" error is consistent with using command-line syntax for volumes in the Dockerfile, i.e. VOLUME /path:/path instead of VOLUME /path /path.
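
To illustrate (a sketch; the paths are illustrative): in a Dockerfile, VOLUME only declares container paths, so a colon form is treated as one literal path, while host-to-container mapping is runtime syntax:

VOLUME /path:/path    # wrong here: declares a volume at the literal path "/path:/path"
VOLUME /path          # right: one container path
VOLUME /data /logs    # right: two container paths

# mapping a host path belongs on the run command line instead:
# podman run -v /host/path:/path <image>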

@github-actions github-actions bot added the locked - please file new issue/PR label Nov 16, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 16, 2023