
Secrets with specific permissions (defaultMode or mode) not being applied in Kubernetes 1.4.0 #34982

Closed
ryan-loanpal opened this issue Oct 17, 2016 · 33 comments

@ryan-loanpal commented Oct 17, 2016

BUG REPORT

Kubernetes version:
client version: 1.4.0
server version 1.4.0

Environment:

  • Cloud Provider: GCP/GKE (Google Container Engine)
  • OS in container: CentOS 7
  • Kernel in container: 3.16.7-ckt25-2

What happened:
Attempting to use the "defaultMode" or "mode" permission options for Secrets, recently added in Kubernetes 1.4.0, does not actually apply the requested permissions to the files in the mount point. I followed the documentation here:
http://kubernetes.io/docs/user-guide/secrets/#

I've tried it two ways:

  • First, define the secret:
{
        "kind": "Secret",
        "apiVersion": "v1",
        "metadata": {
            "name": "db-secret",
            "namespace": "default"
        },
        "data": {
            "dbcert": "<base64 encoded - snip>",
            "dbkey": "<base64 encoded - snip>",
            "dbchain": "<base64 encoded - snip>",
            "dhparam": "<base64 encoded - snip>"
        }
}
  • Then, call the secret to be mounted either with defaultMode for the entire mount, or with mode for each 'file':
  1. Permissions specified per file:
        "volumes": [{
            "name": "secrets",
            "secret": {
                "secretName": "db-secret",
                "items": [{
                    "key": "dbcert",
                    "path": "dbcert",
                    "mode": 420
                },{
                    "key": "dbkey",
                    "path": "dbkey",
                    "mode": 256
                },{
                    "key": "dbchain",
                    "path": "dbchain",
                    "mode": 420
                }]
            }
        }]
  2. Permissions specified for the entire mounted db-secret volume:
            "volumes": [
<snip other volumes>
            {
                "name": "secrets",
                "secret": {
                    "secretName": "db-secret",
                    "defaultMode": 256
                }
            }
            ]
  • Results:
    Either way, the files remain with the default Secrets permissions of 644:
bash-4.2$ pwd
/etc/secrets/..data
bash-4.2$ ls -la
total 12
drwxr-xr-x 2 root root  100 Oct 17 20:48 .
drwxrwxrwt 3 root root  140 Oct 17 20:48 ..
-rw-r--r-- 1 root root 2533 Oct 17 20:48 dbcert
-rw-r--r-- 1 root root 2106 Oct 17 20:48 dbchain
-rw-r--r-- 1 root root 1708 Oct 17 20:48 dbkey

I see nothing in the release notes for any Kubernetes version after 1.4.0 that would indicate a discovered bug that has since been fixed, and I haven't been able to find any filed bugs about this. Am I doing something wrong? I've read the docs over and over; it seems pretty simple.

What you expected to happen:
When the container starts, the files in the Secrets (db-secret) volume mount should either all be chmod 400 (when using defaultMode with decimal 256), or at least the dbkey file should be chmod 400 (when using mode per secret value with decimal 256).

How to reproduce it
Create a secret bundle, upload to the GKE cluster, then define a Pod to mount that secret as a volume with defaultMode or Mode options specified to chmod the secret files to a more restrictive ACL.

Anything else we need to know:
I originally attempted to use this with client/server 1.3.7, not realizing the feature was added in 1.4.0, and was getting a deployment error that defaultMode was an invalid option. After finding that it was added in 1.4.0, I upgraded my GKE cluster and my local client and re-deployed. No errors now, but the requested permissions aren't actually applied.

@ryan-loanpal (Author) commented Oct 20, 2016

Update:
I have different GCP projects for different environments - dev, stage, prod. I was doing all of this testing in the dev project, where my GKE clusters had started at 1.3.7 and were upgraded to 1.4.0.

I just tried to repro my issue in my stage environment, where my GKE clusters were created later and started at 1.4.0. Guess what? The mode and defaultMode feature worked fine. Identical configs.

So, this is an upgrade issue with GKE on GCP. If you have a cluster that was upgraded (from at least 1.3.7) to 1.4.0, the new mode and defaultMode features for mounting secrets volumes won't apply, and there is no error or log entry that I can find explaining why.

Still a bug, but clearly an upgrade-related bug, not a fresh-deployment bug. As a reminder, the upgrade process for a GKE cluster is point-and-click in the GCP console - you just tell it to upgrade your cluster from 1.3.7 to 1.4.0 - so I don't know what happens behind the scenes during the upgrade, and I can't provide any diagnostic info that I know of.

@bgrant0607 (Member) commented Nov 17, 2016

cc @pmorie

@stp-ip (Member) commented Feb 7, 2017

@ryan-loanpal Do you have any issues with the recent 1.5.2 and 1.6.0-alpha1 releases?
Do defaultMode and mode on items work for you?

@wolfadactyl commented Mar 23, 2017

I have just run into this issue in the past few days. Here is an example that should repro:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: perms-ex
  name: perms-ex
spec:
  replicas: 1
  selector:
    matchLabels:
      run: perms-ex
  template:
    metadata:
      labels:
        run: perms-ex
    spec:
      containers:
      - command:
        - bash
        image: ubuntu
        name: perms-ex
        stdin: true
        tty: true
        volumeMounts:
        - mountPath: /etc/perms_ex
          name: perms-ex-secret
          readOnly: true
      volumes:
      - name: perms-ex-secret
        secret:
          defaultMode: 400
          secretName: perms-ex-secret
---
apiVersion: v1
data:
  some_secret: supersecretthing
kind: Secret
metadata:
  name: perms-ex-secret
type: Opaque

Opening a bash terminal and checking permissions:

/etc/perms_ex/..data

total 4
drwxr-xr-x 2 root root  60 Mar 23 17:30 .
drwxrwxrwt 3 root root 100 Mar 23 17:30 ..
-rw--w---- 1 root root  12 Mar 23 17:30 some_secret

Client version: 1.5.3
Server version: 1.5.3

@wolfadactyl commented Mar 23, 2017

Oh, also, I've tried setting mode on my deployments (and also tried using it together with defaultMode), and neither seemed to have any impact.

@stp-ip

@HerrmannHinz commented Apr 5, 2017

Just tried this too; it doesn't seem to take effect. From a traefik.yml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: traefik-ingress-controller
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  revisionHistoryLimit: 0
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      containers:
        - name: traefik-ingress-lb
          image: "traefik:1.2.0-alpine"
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: "/config"
              name: "config"
            - mountPath: "/acme/acme.json"
              name: "traefik-acme-json"
          ports:
            - containerPort: 80
            - containerPort: 443
            - containerPort: 8080
          args:
            - --configfile=/config/traefik.toml
      volumes:
        - name: config
          configMap:
            name: traefik-conf
        - name: acme
          secret:
            defaultMode: 600
            secretName: traefik-acme-json
---
apiVersion: v1
kind: Secret
metadata:
  name: traefik-acme-json
type: Opaque            

Now when I run the pod:

> kubectl logs traefik-ingress-controller-3312106953-1f52j
time="2017-04-05T10:00:02Z" level=info msg="Traefik version v1.2.0 built on 2017-03-21_09:50:01AM"
time="2017-04-05T10:00:02Z" level=info msg="Using TOML configuration file /config/traefik.toml"
time="2017-04-05T10:00:02Z" level=info msg="Preparing server http &{Network: Address::80 TLS:<nil> Redirect:0xc4202e0cc0 Auth:0xc4207aff40 Compress:false}"
time="2017-04-05T10:00:02Z" level=info msg="Preparing server https &{Network: Address::443 TLS:0xc420356c60 Redirect:<nil> Auth:<nil> Compress:false}"
time="2017-04-05T10:00:02Z" level=info msg="Starting server on :80"
time="2017-04-05T10:00:03Z" level=info msg="Loading ACME Account..."
time="2017-04-05T10:00:03Z" level=error msg="Error creating TLS config: permissions 777 for /acme/acme.json are too open, please use 600"
time="2017-04-05T10:00:03Z" level=fatal msg="Error preparing server: permissions 777 for /acme/acme.json are too open, please use 600"

Running on Kubernetes 1.5.4.

Any idea? The documentation is not very informative about this either.

@paultiplady commented Apr 5, 2017

@pmorie is this in your wheelhouse? We seem to be consistently getting invalid permission masks on secret volumes in 1.5.4. There's a clean repro for you here.

This makes any container that runs an ssh server fail out of the box, since OpenSSH polices permissions on its keys by default.

@rainder commented Apr 11, 2017

Same issue. I can't get MongoDB running on GKE when using the --keyFile arg. k8s version 1.6.

StatefulSet config:

...
          volumeMounts:
            - mountPath: /data/db
              name: storage
            - mountPath: /mongodb-keyfile
              name: mongodb-keyfile
...
      volumes:
        - name: mongodb-keyfile
          secret:
            defaultMode: 384
            items:
              - key: mongodb-keyfile
                mode: 384
                path: mongodb-keyfile
            secretName: mongodb
$ k --namespace=mongodb exec -it mongodb-0 -- ls -alih /mongodb-keyfile
total 4.0K
249964 drwxrwxrwt 3 root root  100 Apr 11 08:57 .
251508 drwxr-xr-x 1 root root 4.0K Apr 11 08:57 ..
249541 drwxr-xr-x 2 root root   60 Apr 11 08:57 ..4984_11_04_08_57_12.775473917
249544 lrwxrwxrwx 1 root root   31 Apr 11 08:57 ..data -> ..4984_11_04_08_57_12.775473917
249543 lrwxrwxrwx 1 root root   22 Apr 11 08:57 mongodb-keyfile -> ..data/mongodb-keyfile
@joshperry commented Apr 14, 2017

Same here, on a 1.6 GKE cluster upgraded from ~1.3 over the last year or so. It makes it inconvenient to even use an ssh client from a container. Using defaultMode does do something; without it the file perms are set to the default of 644.

We're working around this by copying and chmod'ing the key into the local app directory from the secret mount at startup.

volumes:
  - name: bolt-secrets
    secret:
      secretName: bolt-secrets        
      defaultMode: 400  
root@bolt-1390281348-9nck4:/usr/src/app# ls -la /var/secure/..data/
total 8
drwxr-xr-x 2 root root  80 Apr 14 13:37 .
drwxrwxrwt 3 root root 120 Apr 14 13:37 ..
-rw--w---- 1 root root 365 Apr 14 13:37 host.key
-rw--w---- 1 root root 411 Apr 14 13:37 mgmtid
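A rough sketch of that copy-and-chmod startup step (the script and target filenames here are hypothetical, only the mount paths are taken from the output above):

#!/bin/sh
# Entrypoint wrapper: copy the key out of the read-only secret mount into the
# app directory, tighten its permissions, then start the app.
cp /var/secure/host.key /usr/src/app/host.key
chmod 400 /usr/src/app/host.key
exec "$@"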
@schasse commented Apr 19, 2017

We're using a workaround as well:

containers:
- name: sftp
  image: atmoz/sftp:debian-jessie
  lifecycle:
    postStart:
      exec:
        command:
        - bash
        - -c
        - chmod 400 /etc/ssh/ssh_host_rsa_key /etc/ssh/ssh_host_ed25519_key
  volumeMounts:
  - name: keys
    mountPath: /etc/ssh/ssh_host_rsa_key
    subPath: ssh_host_rsa_key
  - name: keys
    mountPath: /etc/ssh/ssh_host_ed25519_key
    subPath: ssh_host_ed25519_key
volumes:
- name: keys
  secret:
    secretName: ssh-keys
@thockin (Member) commented Apr 23, 2017

@kubernetes/sig-storage-bugs because volume

@kubernetes/sig-node-bugs because kubelet

This is probably straightforward; it just needs someone to look at it.

@leopoldodonnell commented Apr 29, 2017

In my case the lifecycle approach ran into race-condition issues. If it doesn't work for you, consider using an init container. This solved my problem with a secret named git-secret and a key named ssh:

      # FIXME: Remove this when defaultMode is working
      initContainers:
      - name: fix-perms
        image: busybox
        command:
        - sh
        - -c
        - /bin/chmod 400 /etc/git-secret/ssh
        volumeMounts:
        - name: git-secret
          mountPath: /etc/git-secret/ssh
          subPath: ssh
        securityContext:
          runAsUser: 0
      volumes:
      - name: git-secret
        secret:
          secretName: the-secret-name
          defaultMode: 0400
@dankirkpatrickmp2 commented Jun 16, 2017

@joshperry: you're getting the files mounted with octal permissions 0620, and 0620 is 400 in decimal. The mode/defaultMode field is treating "400" as a decimal number, so you're getting the expected behavior for that value.
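To make the decimal/octal distinction concrete, here is a minimal sketch (the secret name is hypothetical). The first two volume definitions below should both yield 0400 (r--------), once in decimal and once as a YAML octal literal; the third shows the common mistake of writing 400:

volumes:
- name: key-decimal
  secret:
    secretName: example-secret   # hypothetical secret name
    defaultMode: 256             # decimal 256 == octal 0400 (r--------)
- name: key-octal
  secret:
    secretName: example-secret
    defaultMode: 0400            # YAML octal literal, parsed as the same 256
- name: key-wrong
  secret:
    secretName: example-secret
    defaultMode: 400             # decimal 400 == octal 0620 (rw--w----)

JSON manifests (like the one at the top of this issue) have no octal literals, so there the value has to be given in decimal, e.g. 256 for 0400.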

@joshperry commented Jul 7, 2017

Wow, @dankirkpatrickmp2. Thank you much, we flubbed that one alright.

@joshperry commented Jul 7, 2017

I've changed my config to use octal 0400 (I also tried 256 decimal) and it is now working properly. So perhaps not an upgrade issue from ~1.3 after all, at least for me.

@fejta-bot commented Dec 31, 2017

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@thockin (Member) commented Jan 2, 2018

This appears to be fixed (per my repro test).

@thockin closed this Jan 2, 2018

@richmondwang commented Feb 22, 2018

I am having this issue now.

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.2-gke.1", GitCommit:"4ce7af72d8d343ea2f7680348852db641ff573af", GitTreeState:"clean", BuildDate:"2018-01-31T22:30:55Z", GoVersion:"go1.9.2b4", Compiler:"gc", Platform:"linux/amd64"}

@MichaelScript commented Apr 15, 2018

@richmondwang @leopoldodonnell @schasse @rainder @thockin There was no bug. It seems like it was an issue with how the key was being generated by the post everyone is referencing.

https://github.com/MichaelScript/kubernetes-mongodb

If you run the example I created and then run kubectl logs mongod-0, you can see that the file now has the correct permissions and is mounted correctly.

@MichaelScript commented Apr 15, 2018

@thockin I think there might be an issue where, if the secret has no data, it gets mounted anyway (not sure if this is the specified behavior, but it might be okay).

If this is the specified behavior, then there is a bug where the file permissions of a mounted file that has no data aren't modified. This is what was causing everyone here issues, because it isn't obvious that the generated secret has no data without specifically checking for that (i.e. kubectl get secrets my-secret -o yaml).

For example, if you try to mount this valid secret:

apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: 2018-04-15T05:41:34Z
  name: my-secret
  namespace: default
  resourceVersion: "173565"
  selfLink: /api/v1/namespaces/default/secrets/shared-bootstrap-data
  uid: a8558142-406f-11e8-9f89-08002714eb9a
type: Opaque

and then modify its permissions with defaultMode (e.g. 400), the permissions will not match (I think they get set to the default).

This is different from the original issue and has more to do with the absence of data in the secret.

@stp-ip (Member) commented May 15, 2018

Reproduction, as far as I can see:

  • defaultMode: 256
  • secret not empty

Result: 777 /test/secret

apiVersion: v1
kind: Namespace
metadata:
  name: test-secret
---
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
  namespace: test-secret
  labels:
    project: test-secret
stringData:
  secret: "123"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-secret
  namespace: test-secret
spec:
  replicas: 1
  selector:
    matchLabels:
      project: test
  template:
    metadata:
      labels:
        project: test
    spec:
      containers:
      - name: test
        image: ubuntu
        command: [ "sh", "-c", "while true; do stat -c \"%a %n\" /test/secret; sleep 60; done" ]
        volumeMounts:
        - name: secret
          mountPath: /test
      volumes:
      - name: secret
        secret:
          defaultMode: 256
          secretName: test-secret
@lephix commented May 16, 2018

Permissions are a problem in our env. Following is the snippet. In the container, the /home/xxx/.ssh directory and the file authorized_keys both have 0777 permissions.

       volumeMounts:
       - name: host-keys
         mountPath: /home/xxx/.ssh/
     volumes:
     - name: host-keys
       secret:
         secretName: ssh-host-keys
         items:
         - key: authorized_keys
           path: authorized_keys
@tombh commented Jun 21, 2018

There appears to be a recent and more significant reason why you can't change the permissions of a secret mount: a recent CVE required that all secret, configMap, downwardAPI and projected volume mounts be read-only: #58720

So one approach to work around this is to introduce yet another intermediate volume. You can then use an initContainer to copy from the secret-mounted volume to the intermediate volume, and finally mount that intermediate volume, which is not restricted to read-only, on your application container.

Here is an example: https://github.com/kubernetes/charts/blob/37620d43064f7b24e6591c27fd4d110b30c83cce/stable/rabbitmq-ha/templates/statefulset.yaml#L30
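A minimal sketch of that pattern (container names, image, and paths below are made up, not taken from the linked chart):

spec:
  initContainers:
  - name: copy-secret
    image: busybox
    # Copy the secret out of the read-only mount, then tighten permissions
    # on the writable copy.
    command: ["sh", "-c", "cp /readonly-secret/* /writable-secret/ && chmod 600 /writable-secret/*"]
    volumeMounts:
    - name: secret-ro              # the read-only secret volume
      mountPath: /readonly-secret
    - name: secret-rw              # an emptyDir the app can write to
      mountPath: /writable-secret
  containers:
  - name: app
    image: my-app                  # hypothetical application image
    volumeMounts:
    - name: secret-rw
      mountPath: /etc/app-secrets
  volumes:
  - name: secret-ro
    secret:
      secretName: my-secret        # hypothetical secret name
  - name: secret-rw
    emptyDir: {}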

stephenmoloney added a commit to stephenmoloney/flux that referenced this issue Jun 23, 2018

Convert from octal numbers to decimal numbers
Why is this needed?

- From reading this issue, it seems that it would be needed
kubernetes/kubernetes#34982
@kilianc commented Nov 2, 2018

I have the same problem and I am not sure if the docs are wrong or this is still a bug. Is mode: 384 supposed to set a file pointing to a secret to 0600? The behavior I am seeing says no.

@thockin (Member) commented Dec 21, 2018

I flagged this for follow-up and finally had some time. My test shows it working -- someone tell me otherwise?

kubectl apply this file:

apiVersion: v1
kind: Secret
metadata:
  name: test-secret-perms
stringData:
  expect_0700: "should be 0700"
  expect_0400: "should be 0400"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-secret-perms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-secret-perms
  template:
    metadata:
      labels:
        app: test-secret-perms
    spec:
      containers:
      - name: test
        image: ubuntu
        command: [ "sh", "-c", "while true; do ls -ldH /test/*; sleep 60; done" ]                                                                                         
        volumeMounts:
        - name: secret
          mountPath: /test
      securityContext:
        fsGroup: 123 
      volumes:
      - name: secret
        secret:
          secretName: test-secret-perms
          defaultMode: 256
          items:
            - key: "expect_0700"
              path: "expect_0700"
              mode: 448
            - key: "expect_0400"
              path: "expect_0400"
              # no mode             

What I see in the logs is:

-r--r----- 1 root 123 14 Dec 21 22:44 /test/expect_0400
-rwxr----- 1 root 123 14 Dec 21 22:44 /test/expect_0700

0400 and 0700 are correct. The extra g+r is because I tested the fsGroup at the same time.

@luckymagic7 commented Jan 21, 2019

@thockin

How can I get the values below (key, path)?

items:
            - key: "expect_0700"
              path: "expect_0700"
              mode: 448
            - key: "expect_0400"
              path: "expect_0400"

my secret describe:

Name:         test
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
id_rsa:      3247 bytes
id_rsa.pub:  745 bytes

Are the keys id_rsa and id_rsa.pub?

@luckymagic7 commented Jan 21, 2019

@thockin
didn't work for me 😢

I applied your yaml file above.
Below is the output:

root@test-secret-perms-996fb7547-n7689:/test# ls -al
total 4
drwxrwsrwt 3 root  123  120 Jan 21 03:26 .
drwxr-xr-x 1 root root 4096 Jan 21 03:26 ..
drwxr-sr-x 2 root  123   80 Jan 21 03:26 ..2019_01_21_03_26_48.920981748
lrwxrwxrwx 1 root root   31 Jan 21 03:26 ..data -> ..2019_01_21_03_26_48.920981748
lrwxrwxrwx 1 root root   18 Jan 21 03:26 expect_0400 -> ..data/expect_0400
lrwxrwxrwx 1 root root   18 Jan 21 03:26 expect_0700 -> ..data/expect_0700

Which k8s version should I use?

my version:

kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
@DarrienG commented Feb 11, 2019

Setting the defaultMode didn't seem to work for me either. We needed a script to run at start, so I ended up copying the keys over to different dirs and setting permissions on them in that script.

Barbaric, but I guess it works 😬

@petrokashlikov commented Feb 16, 2019

(Quoting @luckymagic7's comment above: "didn't work for me ... Which k8s version should I use?")

I also faced a similar issue, and it seems to be an issue with representation; the actual permissions are fine.
I've set defaultMode: 256 for the volume with ssh keys. When I look at the /etc/ssh/keys folder which I mounted, the permissions don't look right:

lrwxrwxrwx 1 root root 27 Feb 15 23:29 ssh_host_rsa_key.pub -> ..data/ssh_host_rsa_key.pub
lrwxrwxrwx 1 root root 23 Feb 15 23:29 ssh_host_rsa_key -> ..data/ssh_host_rsa_key
lrwxrwxrwx 1 root root 31 Feb 15 23:29 ssh_host_ed25519_key.pub -> ..data/ssh_host_ed25519_key.pub
lrwxrwxrwx 1 root root 27 Feb 15 23:29 ssh_host_ed25519_key -> ..data/ssh_host_ed25519_key

but you can see that these are actually not files but links to the actual files; if you go to that directory, the permissions are read-only, and from the original /etc/ssh/keys dir you in fact can't delete the files, so there is no write permission as displayed:

root@sftp-5f5cb76dd4-mljww:/etc/ssh/keys/..data# ls -lrt
total 16
-r-------- 1 root root 754 Feb 15 23:29 ssh_host_rsa_key.pub
-r-------- 1 root root 3401 Feb 15 23:29 ssh_host_rsa_key
-r-------- 1 root root 110 Feb 15 23:29 ssh_host_ed25519_key.pub
-r-------- 1 root root 419 Feb 15 23:29 ssh_host_ed25519_key

@var23rav commented Feb 27, 2019

I was beating my head against this trying to understand what was happening. @thockin explained it correctly.
TL;DR: @petrokashlikov, if you want to set the permission on the configMap or secret file, make use of subPath, which will create the file with the permission specified in defaultMode.


Let's take a look.
E.g.:

.
.
volumes:
- name: test-vol
  secret:
    defaultMode: 0400
    secretName: test-secret
.
.
containers:
- name: test-cont
  volumeMounts:
  - mountPath: /root/non_exiting_folder/secret_key_1_new_name
    name: test-vol
    subPath: scret_key_1
  - mountPath: /root/non_exiting_folder/secret_key_2_new_name
    name: test-vol
    subPath: scret_key_2

Now if you get into the pod and check the permissions:

root@***:/root/non_exiting_folder/# ls -la
total 16
drwxr-xr-x    2 root     root          4096 Feb 27 07:22 .
drwx------    1 root     root          4096 Feb 27 07:23 ..
-r--------    1 root     root          3243 Feb 27 07:22 secret_key_1_new_name
-r--------    1 root     root           422 Feb 27 07:22 secret_key_2_new_name
  • drwxr-xr-x 2 root root 4096 Feb 27 07:22 .
    '.' is the current folder, which is the non_exiting_folder created by k8s; it has the default permission 0755.

  • -r-------- 1 root root 3243 Feb 27 07:22 secret_key_1_new_name
    -r-------- 1 root root 422 Feb 27 07:22 secret_key_2_new_name
    The secret files (renamed scret_key_1 -> secret_key_1_new_name, etc.) created by k8s have the mode specified in defaultMode.
    Note: if no defaultMode is specified, the secret key file permission will be 0644 by default.


Now coming back to @thockin's explanation.
You may have noticed that the ..data and ..2019_01_21_03_26_48.920981748 folders are missing when you use the subPath config.
If we remove the subPath config:

eg:

.
.
volumes:
- name: test-vol
  secret:
    defaultMode: 0400
    secretName: test-secret
.
.
containers:
- name: test-cont
  volumeMounts:
  - mountPath: /root/non_exiting_folder/child_folder
    name: test-vol

See the difference by getting into the pod:

root@***:/root/non_exiting_folder/# ls -la
total 8
drwxr-xr-x    3 root     root          4096 Feb 27 07:39 .
drwx------    1 root     root          4096 Feb 27 07:39 ..
drwxrwxrwt    3 root     root           120 Feb 27 07:38 child_folder
  • drwxr-xr-x 3 root root 4096 Feb 27 07:39 .
    '.' is the current folder, which is the non_exiting_folder created by k8s. Its permissions remain the default 0755.
  • drwxrwxrwt 3 root root 120 Feb 27 07:38 child_folder
    child_folder was created by k8s, but the folder permission is 0777, not the default 0755.
/root/non_exiting_folder/ # cd child_folder/
/root/non_exiting_folder/child_folder/ # ls -la
total 4
drwxrwxrwt    3 root     root           120 Feb 27 07:38 .
drwxr-xr-x    3 root     root          4096 Feb 27 07:39 ..
drwxr-xr-x    2 root     root            80 Feb 27 07:38 ..2019_02_27_07_38_59.023449857
lrwxrwxrwx    1 root     root            31 Feb 27 07:38 ..data -> ..2019_02_27_07_38_59.023449857
lrwxrwxrwx    1 root     root            13 Feb 27 07:38 scret_key_1-> ..data/scret_key_1
lrwxrwxrwx    1 root     root            18 Feb 27 07:38 scret_key_2-> ..data/scret_key_2
~/.ssh/test #

Notice that when we removed the subPath config from the yaml:

  • a new folder '..2019_02_27_07_38_59.023449857' is created
  • a new symlink '..data' pointing to that timestamp folder (..2019_02_27_07_38_59.023449857) is created
  • '..2019_02_27_07_38_59.023449857' holds the mounted data, and the files scret_key_1 and scret_key_2 are symlinked to the actual files through ..data (and thereby the timestamp folder)

If we check the file permissions:

  • lrwxrwxrwx 1 root root 31 Feb 27 07:38 ..data -> ..2019_02_27_07_38_59.023449857
    The ..data symlink has permission 0777 and points to the ..2019_02_27_07_38_59.023449857 folder.
  • drwxr-xr-x 2 root root 80 Feb 27 07:38 ..2019_02_27_07_38_59.023449857
    The ..2019_02_27_07_38_59.023449857 folder has the default permission 0755 (defaultMode is not applied to it), and it holds the actual data.
  • lrwxrwxrwx 1 root root 13 Feb 27 07:38 scret_key_1 -> ..data/scret_key_1
    scret_key_1, the secret key file, has permission 0777 and is a symlink to ..data/scret_key_1. As said earlier, '..data' is a symlink to the timestamp folder '..2019_02_27_07_38_59.023449857'.

Checking the permissions of the actual data:

~/.ssh/test # cd ..data/
~/.ssh/test/..2019_02_27_07_38_59.023449857 # ls -la
total 8
drwxr-xr-x    2 root     root            80 Feb 27 07:38 .
drwxrwxrwt    3 root     root           120 Feb 27 07:38 ..
-rw-r--r--    1 root     root          3243 Feb 27 07:38 scret_key_1
-rw-r--r--    1 root     root           422 Feb 27 07:38 scret_key_2
~/.ssh/test/..2019_02_27_07_38_59.023449857 #
  • -rw-r--r-- 1 root root 3243 Feb 27 07:38 scret_key_1
    The secret key file scret_key_1 has the permission specified in defaultMode (0400) and it holds the actual data.

So, as I understand it, if k8s creates a file or folder automatically (like non_exiting_folder in mountPath, not the end file), it uses the default permissions (folder 0755, file 0644).
The end file, i.e. the last part of mountPath, will get 0777 (it won't respect the defaultMode value unless the subPath config is specified), though I'm not sure why 0777.

@afirth commented May 28, 2019

Many others are referenced in a wall of text above. I think a common confusion is that the files are symlinks, so ls will show the link permissions by default.

Simply add -L (ls -laL /path/to/directory/) to dereference the links.
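For example, using the mount path from the original report:

# Without -L, ls shows the symlink entries themselves, which are always lrwxrwxrwx (777):
ls -la /etc/secrets/
# With -L, ls follows the symlinks and shows the real file modes set by
# defaultMode/mode, e.g. -r-------- for 0400:
ls -laL /etc/secrets/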
