subPath in volumemount creates a directory instead of a file specified in configmap #62156

Open
Miyurz opened this Issue Apr 5, 2018 · 27 comments

Miyurz commented Apr 5, 2018

Is this a BUG REPORT or FEATURE REQUEST?:
Bug Report

/kind bug

What happened:
While deploying a Kubernetes Helm chart containing a ConfigMap, a Service, and a deployment.yaml, the properties file my.properties is not created as specified by subPath in deployment.yaml. In fact, a directory with the same name is created instead.

If the ConfigMap is deleted and then created again, it intermittently works and creates the properties file, but it always fails during the Helm chart installation.

deployment.yaml snippet (I can provide the full copy if you need)

        volumeMounts:
        - name: my-store
          mountPath: /mystore
        - name: my-config
          mountPath: /var/lib/cattle/etc/my.properties
          subPath: my.properties
      imagePullSecrets:
      - name: mydatasecret
      volumes:
      - name: my-store
        gcePersistentDisk:
          pdName: my-staging-store-disk
          fsType: ext4
      - name: my-config
        configMap:
          name: my-config

cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  my.properties: |
    api.ui.index: "https://my.io/frontend"
    server.default.access.grant: "true"
    my.api.ui.enabled: "true"

I am not sure if I should attribute this bug entirely to Helm and ask them to look into it, since it has been reported before and closed without a concrete solution:
See: #45613
#46839
Another related bug: #54514

What you expected to happen:
The properties file should have been created, with all the properties in it, at the path specified in the volumeMount.

How to reproduce it (as minimally and precisely as possible):
Use the deployment YAML and ConfigMap YAML provided above.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.7-gke.1", GitCommit:"192ccad06d24af9828cbf42330e1d915cb586406", GitTreeState:"clean", BuildDate:"2018-01-31T21:39:04Z", GoVersion:"go1.8.3b4", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration:
    GKE

  • OS (e.g. from /etc/os-release):
    NAME="Ubuntu"
    VERSION="16.04.3 LTS (Xenial Xerus)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 16.04.3 LTS"
    VERSION_ID="16.04"
    HOME_URL="http://www.ubuntu.com/"
    SUPPORT_URL="http://help.ubuntu.com/"
    BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
    VERSION_CODENAME=xenial
    UBUNTU_CODENAME=xenial

  • Kernel (e.g. uname -a):
    Linux gke-maya-staging-cluster-default-pool-cc65d8e5-fcjw 4.13.0-1007-gcp #10-Ubuntu SMP Fri Jan 12 13:56:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Miyurz commented Apr 5, 2018

/sig storage

liggitt commented Apr 5, 2018

duplicate of #61545, I think

liggitt commented Apr 5, 2018

oh, actually, you don't have a key in your configmap named my.properties... the subpath needs to refer to an actual key in your config map (in the example above, myProperties)
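To illustrate the point, here is a minimal sketch (using the names from the manifests above): subPath resolves against a data key in the ConfigMap, so the two names must match exactly.

```yaml
# Sketch only: subPath must equal a data key in the ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  my.properties: |           # data key
    api.ui.index: "https://my.io/frontend"
---
# container fragment in the Deployment
volumeMounts:
- name: my-config
  mountPath: /var/lib/cattle/etc/my.properties
  subPath: my.properties     # must equal the data key above
```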

Miyurz commented Apr 6, 2018

@liggitt my bad! It was a typo; it's my.properties, not myProperties.

msau42 commented Apr 7, 2018

I don't think it's a dup? The paths don't look like they overlap to me...

msau42 commented Apr 7, 2018

Also note that the cluster version is 1.8.7, which doesn't have the security fixes

msau42 commented Apr 7, 2018

@Miyurz can you clarify, do you still see the issue after fixing the typo?

Miyurz commented Apr 10, 2018

@msau42 Yes, it's still there after fixing the typo.

Miyurz commented Apr 10, 2018

For now, as a workaround, I apply the ConfigMap first, pause, and then apply the Deployment to make it work correctly.

msau42 commented Apr 10, 2018

Oh, I see the problem. The ConfigMap needs to be created before the Deployment. The reason is that when we set up the volumes, if the subPath target doesn't exist, we create a directory for it.

liggitt commented Apr 10, 2018

The ConfigMap needs to be created before the Deployment. The reason is that when we set up the volumes, if the subPath target doesn't exist, we create a directory for it.

I wouldn't have expected the configmap/deployment order to matter... doesn't the configmap get resolved as part of the kubelet setting up the volumes for the pod?

msau42 commented Apr 10, 2018

Oh... that's a good point. Hm I wonder if there's some timing issue where the configmap volume is not initially populated

msau42 commented Apr 10, 2018

BTW, using subPath with ConfigMap volumes (and Secret, downward API, and projected volumes) has the limitation that updates to the API objects won't be reflected in the subPath mount.
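A common way around this limitation (a sketch based on the manifests earlier in this thread, not part of the original report) is to mount the ConfigMap volume as a directory instead of using subPath; the kubelet then refreshes the projected file when the ConfigMap changes. Note that this shadows any other files already at the mount path:

```yaml
# Sketch: directory mount instead of subPath; the file appears as
# /var/lib/cattle/etc/my.properties and is updated on ConfigMap changes.
volumeMounts:
- name: my-config
  mountPath: /var/lib/cattle/etc
volumes:
- name: my-config
  configMap:
    name: my-config
```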

Miyurz commented Apr 11, 2018

This issue occurs every time I run the Helm chart that has Deployment, Service, and ConfigMap objects in its templates. My guess is that Helm applies all the resources at once, hence the issue. Right now this component is installed by strictly following the order (apply the ConfigMap, pause, and then apply the Deployment).

Miyurz commented May 8, 2018

Hello team, do we have any update here yet?

msau42 commented May 8, 2018

@Miyurz I tried reproducing this many times on a GKE 1.8.8 cluster, but didn't hit the issue. I created the Pod first, verified that the Pod volume setup was waiting for the ConfigMap to be created, and then created the ConfigMap. Do you still encounter this regularly?

krishtk commented May 30, 2018

I am getting the same issue on my AWS setup, while on my local Mac it works fine. I use the same scripts and images on both setups.

AWS kubernetes version:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

Local kubernetes version:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-13T22:29:03Z", GoVersion:"go1.9.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Is there at least a workaround for this problem?

Miyurz commented Jul 24, 2018

@msau42 Yes, it's still happening.

msau42 commented Jul 25, 2018

@Miyurz is this a public helm chart that I could try out? I manually tried to reproduce the issue by repeatedly creating a Pod first and Configmap later, but didn't hit the issue.

anderson4u2 commented Sep 17, 2018

@Miyurz did you check that the Helm chart is in the same namespace as the ConfigMap? I just found my mistake; it might be of help to you.
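For reference, a sketch of the namespace check described above (the namespace name is hypothetical): a Pod can only reference ConfigMaps in its own namespace, so the ConfigMap must be created in the same namespace as the Deployment that mounts it.

```yaml
# Sketch: the ConfigMap must live in the workload's namespace.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  namespace: my-namespace   # must match the Deployment's namespace
```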

fejta-bot commented Dec 17, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

zipper97412 commented Dec 24, 2018

I have exactly the same problem with a Secret mount too.

ismailbaskin commented Jan 16, 2019

+1

Miyurz commented Jan 21, 2019

@anderson4u2 It's still an issue!

msau42 commented Jan 22, 2019

It would be helpful if you could provide a pod spec and repro steps.

vharsh commented Jan 23, 2019

I hit the same issue while mounting a binary file today.

  1. I created a ConfigMap: kubectl create configmap random-image --from-file=hello-world.png (binaryData is supported on my setup, i.e. I have k8s 1.10+.)
  2. I mounted the config-map as a volume in my deployment, as documented.
  volumes:
  - name: config-image
    configMap:
      name: random-image
  volumeMounts:
  - name: config-image
    mountPath: /etc/random/hello-world.png
    subPath: hello-world.png

I can cd into /etc/random/hello-world.png, i.e. it was created as a directory.

Maybe this should go in a separate issue; I am not sure.
I'll try to reproduce this locally by compiling with the latest Kubernetes to confirm. I'll look into ./pkg/volume/configmap now.

neoakris commented Jan 30, 2019

This isn't a Kubernetes issue; this is an issue with your code.

  volumeMounts:
  - name: config-volume
    mountPath: /etc/desiredfilename.txt # desired file name
    subPath: arbitrarynamethatmustmatch # just make path and subPath match
  volumes:
  - name: config-volume
    configMap:
      name: test-configmap
      items:
      - key: data-1 # file name in source control if using --from-file, or key name of the secret YAML
        path: arbitrarynamethatmustmatch # just make path and subPath match

If you want it to produce a file, your ConfigMap or Secret volume needs an items: block with key and path entries, where you explicitly specify every key.
There are other ways of doing it; I really need to make a SO post on the three good ways of doing this. I'll give it a shot this weekend if I have time.
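For completeness, a sketch of a ConfigMap that would pair with the snippet above (the file contents here are hypothetical); the key data-1 must exist under data for the items mapping to resolve:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
data:
  data-1: |
    some.property: "value"
```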
