
Minio Client startup fails with permission error on OpenShift #2640

Closed
juv opened this issue Jan 6, 2019 · 14 comments

@juv

juv commented Jan 6, 2019

Expected behavior

Minio Client starts without errors on OpenShift

Actual behavior

Minio Client starts up and fails immediately with the error mc: <ERROR> Unable to save new mc config. mkdir /.mc: permission denied. This is related to helm/charts#4128 -- OpenShift expects containers not to run as root.

Steps to reproduce the behavior

mc version

Unknown (mc fails before the mc version command can be executed)

System information

OpenShift:

openshift v3.10.34
kubernetes v1.10.0+b81c8f8

Tried with mc docker tags: latest (minio/mc@sha256:7b27ff9a0b9bbc0622fe78b086ddf3a36fe50f8673d906de1f336ed3a5e249d9) and edge (minio/mc@sha256:091bce3f6f240c731e2c4f203f8a87905677a727cb5d09d04c73be92ab5788b6)
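For reference, mc derives its default config directory from $HOME, hence /.mc when HOME is unset or set to /. A minimal sketch of working around this outside OpenShift, assuming the image's entrypoint is the mc binary; the UID and paths below are illustrative:

# run with an arbitrary non-root UID in the root group, as OpenShift does, but point
# HOME at a writable location so mc can create its config directory there
docker run --rm --user 12345:0 -e HOME=/tmp minio/mc version

# alternatively, pass an explicit writable config directory via mc's global flag
docker run --rm --user 12345:0 minio/mc --config-dir /tmp/.mc version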

@kannappanr
Collaborator

@juv Thanks for filing this issue. We will try to resolve this soon.

@sinhaashish
Contributor

@juv Can you provide the YAML file you are using?

@juv
Author

juv commented Feb 13, 2019

@sinhaashish I am simply deploying the mc docker image in OpenShift.

apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/deployment-config.latest-version: '2'
    openshift.io/deployment-config.name: mc
    openshift.io/deployment.name: mc-2
    openshift.io/generated-by: OpenShiftWebConsole
    openshift.io/scc: restricted
  creationTimestamp: '2019-02-13T11:25:56Z'
  generateName: mc-2-
  labels:
    app: mc
    deployment: mc-2
    deploymentconfig: mc
  name: mc-2-qmgrc
  namespace: my-namespace
  ownerReferences:
    - apiVersion: v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicationController
      name: mc-2
      uid: 1dffba70-2f82-11e9-9ecc-001a4a160156
  resourceVersion: '44349118'
  selfLink: /api/v1/namespaces/my-namespace/pods/mc-2-qmgrc
  uid: 21bdff70-2f82-11e9-9ecc-001a4a160156
spec:
  containers:
    - image: >-
        minio/mc@sha256:1bda261836dc1d5cb4b9ffde780b18158f5569ebcfe32e3057d0e7f33035acef
      imagePullPolicy: Always
      name: mc
      resources: {}
      securityContext:
        capabilities:
          drop:
            - KILL
            - MKNOD
            - SETGID
            - SETUID
        runAsUser: 1012320000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: default-token-264g6
          readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
    - name: default-dockercfg-n6x4g
  nodeName: fs012.k8s
  nodeSelector:
    node-role.kubernetes.io/compute: 'true'
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1012320000
    seLinuxOptions:
      level: 's0:c111,c55'
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
    - name: default-token-264g6
      secret:
        defaultMode: 420
        secretName: default-token-264g6

Note that the Dockerfile should switch the user context; this is probably the root cause. See the section "Support Arbitrary User IDs" here:
https://docs.openshift.com/container-platform/3.10/creating_images/guidelines.html#openshift-specific-guidelines

The current Dockerfile accesses the path /.mc, which is owned by the root user and is not writable by the root group. The user assigned to the pod (automatically, by OpenShift) has a UID above 1001 and therefore cannot access /.mc. The link above explains how to change the directory permissions so that the root group (not the root user) can access the data; note the difference between the root user and the root group. The user assigned to the pod is a member of the root group.
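A minimal sketch of the permission change that guideline describes, assuming the image keeps /.mc as its default config location (this is not the official Dockerfile, just the chgrp/chmod pattern from the OpenShift docs):

# Dockerfile fragment: make the config directory group-owned by GID 0 and
# group-writable, so the arbitrary UID OpenShift assigns (a member of the
# root group) can create and update the mc config there
RUN mkdir -p /.mc && \
    chgrp -R 0 /.mc && \
    chmod -R g=u /.mc

The same pattern applies to any other path the process needs to write at runtime.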

@juv
Author

juv commented Mar 22, 2019

any update?

@sinhaashish
Contributor

any update?

PR 7569 can fix the problem.

@Exordian

Exordian commented Jun 11, 2019

Unfortunately it makes things worse; the MinIO server broke as well.

OpenShift spawns the process with a random user ID which is not present in the passwd file at that time. You can either add the passwd entry there, which is described as best practice when building OpenShift containers [1], or simply ignore the fact that you are running as some other user (which was fine up to now, since OpenShift automatically provisions the data volume with the same user ID the process is started with).

However, minio now tries to 'fix' the arbitrary-user situation by doing an su-exec, which requires more permissions to switch users (the PR most likely assumes root). OpenShift now fails to start the container with setgroups(0): Operation not permitted, since that syscall is not allowed. With the latest build, minio will only run on OpenShift with a custom docker build (either adding the passwd file entry or removing the last PR).

To mimic the behaviour, one could use docker run --user 12345 (where 12345 is an arbitrary valid unix user ID), which should do the same thing OpenShift does.

[1] https://docs.openshift.com/container-platform/3.9/creating_images/guidelines.html#openshift-specific-guidelines (Support Arbitrary User IDs)
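The passwd-entry best practice referenced above is usually implemented with a small entrypoint wrapper along these lines (a sketch of the pattern from the OpenShift guidelines, not MinIO's actual entrypoint; the user name and fallback home directory are illustrative, and /etc/passwd has to be made group-writable at image build time):

#!/bin/sh
# if the arbitrary UID has no passwd entry, append one that maps it to GID 0
if ! whoami > /dev/null 2>&1; then
  if [ -w /etc/passwd ]; then
    echo "${USER_NAME:-mc}:x:$(id -u):0:${USER_NAME:-mc} user:${HOME:-/tmp}:/sbin/nologin" >> /etc/passwd
  fi
fi
exec "$@"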

@harshavardhana
Member

harshavardhana commented Jun 11, 2019

We can fix this by adding a passwd entry. @Exordian, would you be able to send a fix?

@CermakM

CermakM commented Aug 2, 2019

Stumbled upon this problem as well on OpenShift 3.11 when installing via

helm install stable/minio \
  --name argo-artifacts \
  --set service.type=LoadBalancer \
  --set defaultBucket.enabled=true \
  --set defaultBucket.name=my-bucket \
  --set persistence.enabled=false \
  --set fullnameOverride=argo-artifacts

... any progress on this?

@IlonkaO

IlonkaO commented Oct 3, 2019

Hi guys... Do you plan a solution any time soon?
I also stumbled upon this problem, and it would be great to be able to use the whole Helm chart, including the job that creates the bucket after installation.
I'm using an OpenShift 3.11 cluster with Kubernetes 1.11.
Would be great to hear from you...

@Jaydee94

Jaydee94 commented Feb 14, 2020

Hi guys, I have mentioned a fix for OpenShift support in another issue for the official MinIO Helm chart.
helm/charts#20757

@harshavardhana
Member

Fixed by helm/charts#20766

@ryandawsonuk

ryandawsonuk commented Jan 15, 2021

I'm still seeing this issue after setting the directory using the Helm chart, and so are other users. I think there's been some confusion here. The PR helm/charts#20766 changes the Helm chart, but that's the server side; this issue is about how the client runs in a Pod. The PR #2734 does look like it would have addressed the client issue, but it was closed with the comment "Closing this PR as stale and not needed. Will be fixed in a separate PR."

@devtools-teamcity

This issue still persists.

@9x3l6

9x3l6 commented Dec 20, 2023

This was prematurely closed
