
configMap and secrets volumeMount are always mounted readOnly in 1.9.6 #62099

Closed
primeroz opened this issue Apr 4, 2018 · 26 comments

@primeroz commented Apr 4, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
After upgrading from 1.9.4 to 1.9.6, configMap and secret volumes are always mounted read-only, even when the deployment spec doesn't set the option and "kubectl describe pod" shows the mount as rw.

The deployment spec and kubectl describe show RW:

$ kubectl get deployment -n infra-services ldaps-proxy -o yaml | egrep -A 6 "volumeMounts:"
        volumeMounts:
        - mountPath: /etc/openldap/slapd.conf
          name: config
          subPath: slapd.conf
        - mountPath: /etc/openldap/ssl
          name: secrets
      dnsPolicy: ClusterFirst


 $ kubectl describe pod -n infra-services ldaps-proxy-99b569895-f7xpm               
...
    Mounts:
      /etc/openldap/slapd.conf from config (rw)
      /etc/openldap/ssl from secrets (rw)

docker inspect and the actual mountpoint inside the running container show RO:

kubectl exec -t -i -n infra-services ldaps-proxy-99b569895-f7xpm sh
sh-4.2# mount | grep etc                                                                                                                  
...                                   
tmpfs on /etc/openldap/ssl type tmpfs (ro,relatime,seclabel)                                          
/dev/mapper/vg_core-kube on /etc/openldap/slapd.conf type xfs (ro,relatime,seclabel,attr2,inode64,noquota)

docker inspect
 "Binds": [
 "/var/lib/kubelet/pods/2383afb0-37df-11e8-b64a-525400d41f7e/volume-subpaths/config/slapd/0:/etc/openldap/slapd.conf:ro,Z,rslave",
 "/var/lib/kubelet/pods/2383afb0-37df-11e8-b64a-525400d41f7e/volumes/kubernetes.io~secret/secrets:/etc/openldap/ssl:ro,Z,rslave",
...
...
 ],

What you expected to happen:

Mountpoints should be RW, as they were at least up to 1.9.4 (I never ran 1.9.5).

How to reproduce it (as minimally and precisely as possible):

Have a 1.9.6 cluster, apply the following spec, and verify that the mounts are RO rather than RW:

apiVersion: v1
data:
  file.conf: "i am a file"
kind: ConfigMap
metadata:
  labels:
    app: test-mount-ro
  name: test-mount-ro
  namespace: kube-system
---
apiVersion: v1
data:
  secret.conf: VGVzdCBzZWNyZXQK
kind: Secret
metadata:
  labels:
    app: test-mount-ro
  name: test-mount-ro
  namespace: kube-system
type: Opaque
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: test-mount-ro
  name: test-mount-ro
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-mount-ro
  template:
    metadata:
      labels:
        app: test-mount-ro
    spec:
      containers:
      - args:
        - "3600"
        command:
        - sleep
        image: busybox
        imagePullPolicy: Always
        name: busybox
        resources:
          limits:
            cpu: 100m
            memory: 250Mi
          requests:
            cpu: 100m
            memory: 250Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /tmp/config
          name: config
          subPath: file.conf
          readOnly: false
        - mountPath: /tmp/secret
          name: secrets
          subPath: secret.conf
          readOnly: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 0666
          name: test-mount-ro
        name: config
      - name: secrets
        secret:
          defaultMode: 0666
          secretName: test-mount-ro

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6+coreos.0", GitCommit:"c2aac694c2c7373dfe3ad81f47fc3cbc70a5a8fa", GitTreeState:"clean", BuildDate:"2018-03-21T21:54:22Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:

Baremetal and KVM Vms

  • OS (e.g. from /etc/os-release):
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1632.3.0
VERSION_ID=1632.3.0
BUILD_ID=2018-02-14-0338
PRETTY_NAME="Container Linux by CoreOS 1632.3.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"
  • Kernel (e.g. uname -a):
Linux worker01.dev3.london.k8s.mintel.ad 4.14.19-coreos #1 SMP Wed Feb 14 03:18:05 UTC 2018 x86_64 Intel(R) Xeon(R) CPU E5320 @ 1.86GHz GenuineIntel GNU/Linux
  • Install tools: Matchbox
  • Others:
It used to work fine on 1.9.4; I upgraded to 1.9.6 due to #61080 and the problem started.

Tested with minikube 1.9.3 and it works as expected; the configMap and secrets are mounted RW.

@dims (Member) commented Apr 4, 2018

/sig storage

@k8s-ci-robot k8s-ci-robot added sig/storage and removed needs-sig labels Apr 4, 2018

@gnufied (Member) commented Apr 4, 2018

This was intentional. More details: https://github.com/kubernetes/kubernetes/pull/60258/files . If you really want configMap or secret volumes to be rw, you can disable the ReadOnlyAPIDataVolumes feature gate.
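For anyone looking for the exact switch, a minimal sketch of disabling the gate on the kubelet. The systemd drop-in path and the KUBELET_EXTRA_ARGS variable are assumptions that depend on how your kubelet is launched; any mechanism that passes the flag to the kubelet works:

```
# /etc/systemd/system/kubelet.service.d/20-feature-gates.conf  (hypothetical path)
[Service]
Environment="KUBELET_EXTRA_ARGS=--feature-gates=ReadOnlyAPIDataVolumes=false"
```

Note that the gate has to be set on every kubelet whose pods need writable mounts, and since the gate is deprecated this only buys time until it is removed.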

@liggitt (Member) commented Apr 5, 2018

Making atomic writer volumes (secret, configMap, downwardAPI, projected) read-only was part of the fix for #60814.

Fix impact:
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location.

/close

@primeroz (Author) commented Apr 5, 2018

OK, great. I must have missed it in the changelog. I already worked around it, so all good.

thanks!

@nitindwivedicloud commented Apr 11, 2018

@primeroz - what is the workaround?

-Thanks

@komljen (Contributor) commented Apr 16, 2018

My approach is to copy the config map to an emptyDir volume when I need rw capabilities: helm/charts@e4b7d0b
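A minimal sketch of that pattern (the names, images, and paths here are illustrative, not taken from the linked commit): an init container copies the read-only configMap contents into a writable emptyDir, which the main container then mounts instead of the configMap itself.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rw-config-example
spec:
  initContainers:
  - name: copy-config
    image: busybox
    # Copy everything from the read-only configMap mount into the emptyDir.
    command: ["sh", "-c", "cp /config-ro/* /config-rw/"]
    volumeMounts:
    - name: config           # read-only configMap source
      mountPath: /config-ro
    - name: config-writable  # writable destination
      mountPath: /config-rw
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config-writable
      mountPath: /etc/app    # the app sees a writable copy
  volumes:
  - name: config
    configMap:
      name: test-mount-ro
  - name: config-writable
    emptyDir: {}
```

The trade-off is that later updates to the configMap are no longer reflected in the pod; the copy is taken once at pod startup.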

@mingfang commented Apr 25, 2018

I don't agree with this.
I should have the option to control the readOnly flag.
In fact, when I describe the pod, the configMap and secrets do show rw.

@anuraaga commented May 18, 2018

Hello - I totally agree with config maps being readonly. But it was extremely surprising for this to happen within a minor version upgrade (1.9.x release). Currently we use GKE with automatic upgrades since these sort of minor version upgrades seemed safe, but this time the upgrade broke an old version of the grafana helm chart we were using - and indeed going from rw to ro is a breaking change IMO.

So I'm just wondering: is there a semantic versioning policy for Kubernetes to prevent breaking changes on minor upgrades? If there is, I think this issue was closed without resolution; if not, no worries.

@jsravn (Contributor) commented May 30, 2018

@anuraaga It does (https://github.com/thtanaka/kubernetes/blob/master/docs/design/versioning.md), but unfortunately the policy was ignored in this case.

@liggitt (Member) commented May 30, 2018

But it was extremely surprising for this to happen within a minor version upgrade (1.9.x release). Currently we use GKE with automatic upgrades since these sort of minor version upgrades seemed safe, but this time the upgrade broke an old version of the grafana helm chart we were using - and indeed going from rw to ro is a breaking change IMO.

As noted in #62099 (comment), the breaking change was required to close a significant security vulnerability.

@ricklagerweij commented Jun 1, 2018

@liggitt you should be able to control the readOnly option and accept the security vulnerability. The default should be true if that is preferred, but you should be able to take the risk of this breaking change.

@anuraaga commented Jun 1, 2018

Thanks for the info; I can see the security implications here. But considering that it is completely breaking, I still wonder whether a minor version update is appropriate for it, even when there is a fallback. It might be, but it needs to be coordinated with the major cloud providers. I follow the GKE release notes continuously, but there was no mention that an auto-upgrade would introduce breaking changes. This doesn't seem natural to me, but perhaps GKE's auto-upgrade process is to blame.

@liggitt (Member) commented Jun 1, 2018

Default should be set to true if this is preferred, but you should be able to take the risk with this breaking change.

Setting the feature gate ReadOnlyAPIDataVolumes=false allows just that as noted in the release notes - https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md#changelog-since-v193

@ricklagerweij commented Jun 1, 2018

ReadOnlyAPIDataVolumes=true|false (DEPRECATED - default=true)

Running k8s 1.10.3

@jsravn (Contributor) commented Jun 1, 2018

@anuraaga is being nice about it, but I agree with him completely, and probably more strongly. Breaking your contract with users is about the worst thing an open source project can do. Breaking change in a patch version is never the right choice, except in very extreme situations. This should have been opt-in, not opt-out. Release notes are not good enough either, when the versioning docs imply we can auto-update to patch versions safely, like GKE does.

@liggitt (Member) commented Jun 2, 2018

Breaking change in a patch version is never the right choice, except in very extreme situations.

I agree with you. Preventing data loss and closing severe security vulnerabilities are among the few reasons such a break would even be considered, and both of those factors were part of this decision.

@anuraaga commented Jun 2, 2018

I just wonder whether proper communication happened with the cloud vendors. It's a hard decision to make a breaking change in a patch release, but it is at least better if users are made aware, e.g. in the GKE release notes. If there's a process for such notification via the cloud vendors, I think it will allow people to worry less when using auto-upgrade. Or is the ball in their court to establish such a process?

@atombender (Contributor) commented Jun 3, 2018

I'm dismayed that this option was broken while readOnly: false is still silently allowed. If this feature is now unsupported, the API machinery should refuse a pod spec in which a secret or configMap volume sets readOnly: false, since it won't be enforced.

@dims (Member) commented Jun 3, 2018

@liggitt please see last comment, do we need to do anything by the time we ship 1.11?

@liggitt (Member) commented Jun 3, 2018

do we need to do anything by the time we ship 1.11?

No, rejecting previously persisted API objects as invalid would cause far more compatibility problems. It would prevent all updates to those objects, including those required to delete them (removing finalizers, etc).

Additionally, serialization of that particular field does not currently distinguish between "unset" and "explicitly set to false" (JSON omits the readOnly field in both cases; protobuf persists readOnly=0 in both cases). Changing that could potentially be done in the future, but would require significant care to avoid invalidating already-persisted data.
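The serialization point above can be demonstrated with a small Go sketch. The struct here is a simplified stand-in for the real Kubernetes VolumeMount type, but the `omitempty` behavior is the same: a bool field tagged `omitempty` is dropped from the JSON when false, so "unset" and "explicitly false" produce identical bytes.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// VolumeMount is a simplified stand-in for the Kubernetes API type.
// With `omitempty`, readOnly=false is omitted from the JSON entirely.
type VolumeMount struct {
	Name     string `json:"name"`
	ReadOnly bool   `json:"readOnly,omitempty"`
}

// marshalMount returns the JSON serialization of a mount.
func marshalMount(m VolumeMount) string {
	b, _ := json.Marshal(m)
	return string(b)
}

func main() {
	// "unset" and "explicitly set to false" serialize identically:
	fmt.Println(marshalMount(VolumeMount{Name: "config"}))                  // {"name":"config"}
	fmt.Println(marshalMount(VolumeMount{Name: "config", ReadOnly: false})) // {"name":"config"}
	fmt.Println(marshalMount(VolumeMount{Name: "config", ReadOnly: true}))  // {"name":"config","readOnly":true}
}
```

This is why the API server cannot retroactively reject readOnly: false: after a round trip through storage, it is indistinguishable from the field never having been set.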

@igoratencompass commented Jun 6, 2018

Fix impact:
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location.

I wonder what happens if the app cannot be re-configured to write derived files to another location, as in the case of hosting a third-party app that expects to find a config file in one specific location and that location only. The same goes for writes and creating directories in that path.

@weisjohn commented Aug 27, 2018

I just got bitten hard by this, especially by the silent failure with the readOnly: false config.

@weisjohn commented Aug 27, 2018

For those encountering this, I would recommend using an init container to copy the contents of the secret / config map directory into a shared emptyDir: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

@cpanato commented Aug 28, 2018

@liggitt Hello, I still see the ReadOnlyAPIDataVolumes feature gate in release 1.11; shouldn't it have been removed?

And just to clarify: will read-only be the default? Can't we set it to false?

thanks

@liggitt (Member) commented Aug 28, 2018

I still see the ReadOnlyAPIDataVolumes feature gate in release 1.11; shouldn't it have been removed?

And just to clarify: will read-only be the default? Can't we set it to false?

That is still present in 1.11 only to satisfy the deprecation period for kubelet CLI flags (6 months or 1 release, whichever is longer). It will be removed in 1.12.

@cpanato commented Aug 28, 2018

OK, thanks @liggitt. I was asking because the comment says 1.11.
