
How does CRC handle persistent volume e.g. local or hostPath #728

Closed
morningspace opened this issue Oct 16, 2019 · 13 comments

Comments

@morningspace

When we deploy applications on CRC, some applications may need persistent volumes. This is especially true for local testing, for example using local or hostPath PVs, which may need manual provisioning. I'd like to know how CRC handles this case.

On the other hand, I've seen some people use Minishift to do the provisioning via minishift ssh, which can easily be integrated into automation scripts using the format minishift ssh [-- COMMAND] [flags]. Though, I could understand if someone tells me that touching the CRC VM directly is not recommended :-)

Another possible option may be Minishift Host Folders, which are directories on the host shared between the host and the Minishift VM.

With that, what are the CRC recommendations in such cases?

General information

  • OS: Linux / macOS / Windows
  • Hypervisor: KVM / Hyper-V / VirtualBox / hyperkit

CRC version

# Put the output of `crc version`
version: 1.0.0-rc.0+34371d3
OpenShift version: 4.2.0-0.nightly-2019-09-26-192831 (embedded in binary)
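For context, this is roughly what manually provisioning a hostPath PV looks like — a minimal sketch, with placeholder name, path, and capacity (not CRC specifics):

```yaml
# Minimal hostPath PersistentVolume sketch (all values are illustrative)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-hostpath-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/data/example   # placeholder directory on the node
```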
@gbraad (Contributor) commented Oct 16, 2019

At the moment there is a small issue with the PVs, which will be resolved in a newer version (soon to be available).

@gbraad (Contributor) commented Oct 16, 2019

Host Folders

This will not be available in CRC as it was in Minishift.

@gbraad (Contributor) commented Oct 16, 2019

ssh

This is not available to the user. An alternative approach would be to use oc debug <nodename>. In general, you should not make changes to the node directly.
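The debug approach mentioned above looks roughly like this (the node name is a placeholder, not an actual CRC node name):

```
# List the cluster nodes, then open a debug shell on one of them
oc get nodes
oc debug node/<nodename>

# Inside the debug pod, chroot to get a shell on the node's filesystem
chroot /host
```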

@praveenkumar (Member)

@morningspace As part of CRC, we pre-provision around 30 PVs which can be used by any application that needs a PV.

@morningspace (Author)

@praveenkumar Ah... so that's similar to what oc cluster up provides, is that right? In some cases it may not be sufficient. A couple of examples as I remember:

  • For local PVs, the folder path may need to be in a particular form that aligns with these pre-provisioned PVs, which may require changing the deployment YAML and is annoying in some cases.
  • For hostPath PVs, it looks like I have to chown or chmod these folders to avoid Permission Denied errors; when I deploy app pods onto them, they fail to create files if I do not adjust the folder mode and/or ownership.
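For reference, the local PV type mentioned above differs from hostPath in that it requires explicit node affinity — a minimal sketch with placeholder path and node name:

```yaml
# Sketch of a local PV (Type=Local); unlike hostPath, it needs nodeAffinity
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1          # placeholder path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - example-node     # placeholder node name
```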

@praveenkumar (Member)

> @praveenkumar Ah... so that's similar to what oc cluster up provides, is that right?

Something like that.

> For local PVs, the folder path may need to be in a particular form that aligns with these pre-provisioned PVs, which may require changing the deployment YAML and is annoying in some cases.

I think you should just define a PVC in your app, and it will claim any available PV. That shouldn't be too annoying; it's described in the k8s docs https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims, and the claim is then used in a deployment or pod spec as shown in https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
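The PVC flow described above can be sketched as follows (all names and the image are illustrative):

```yaml
# A PVC that binds to any available PV satisfying the request...
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# ...referenced from a pod spec as a volume
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi   # example image
      volumeMounts:
        - mountPath: /data
          name: mypv
  volumes:
    - name: mypv
      persistentVolumeClaim:
        claimName: myclaim
```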

> For hostPath PVs, it looks like I have to chown or chmod these folders to avoid Permission Denied errors; when I deploy app pods onto them, they fail to create files if I do not adjust the folder mode and/or ownership.

No, the ones we create already have those settings, and your application should use them as they are. So far we haven't seen any reports of this issue during PV usage.

@morningspace (Author) commented Oct 21, 2019

@praveenkumar Thanks for your reply, and sorry for the confusion. I just checked the 30 crc pre-provisioned PVs; they are hostPath (Type=HostPath). So that has nothing to do with local PVs (Type=Local).

Regarding hostPath PVs, one thing I did recently was trying to deploy API Connect onto OpenShift v3.11 using oc cluster up, where I got a Permission Denied error when deploying pods. It can be resolved by workarounds such as using chmod to elevate permissions on these pre-provisioned folders. Some GitHub issues discuss this, e.g.:

I will check whether the same issue happens on crc, and close this issue if not. Thanks!
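The chmod workaround mentioned above would look something like this from a node debug shell — a hypothetical sketch, since CRC users have no direct ssh access, and the /mnt/pv-data path is an assumption not confirmed in this thread:

```
# Hypothetical workaround sketch: relax permissions on a pre-provisioned
# PV directory from inside a node debug shell
oc debug node/<nodename>
chroot /host
chmod -R 777 /mnt/pv-data/pv0001   # directory path is assumed, not confirmed
```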

@praveenkumar (Member)

@morningspace Sure, if you face the same issue, please make sure to include all the steps so we can reproduce it and identify what is missing.

@morningspace (Author)

@praveenkumar I tried to deploy API Connect onto crc and got the same Permission Denied error. To make it work, I had to manually change the PV folder mode from 770 to 777. Generally, it seems due to the fact that the API Connect pods use a non-root user to create folders or files on these PVs. But I still don't know why it requires the "others" permission bits.

Besides the two links I pasted above where people discussed very similar issues, here are ones from minishift and rook:

Because API Connect is a commercial product, which I cannot share here, I will find time to figure out a way to reproduce this with a simple pod, so that you can reproduce it easily on your side.

@praveenkumar (Member)

@morningspace As we discussed in the meeting, it might be the way your application interacts with storage. 770 means the current user of the container should have enough permission, so perhaps your application uses a different user.
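A way to check the point above would be to compare the user the container actually runs as against the ownership of the PV directory — a hypothetical sketch, with placeholder pod and node names, and the /mnt/pv-data path assumed rather than confirmed:

```
# Inspect the effective uid/gid inside the running pod
oc rsh <podname> id

# Compare against the PV directory's owner, group, and mode on the node
oc debug node/<nodename> -- chroot /host ls -ld /mnt/pv-data/pv0001
```

If the pod's uid/gid is not the owner or group of the directory, a 770 mode would explain the Permission Denied error.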

@morningspace (Author)

Hi @praveenkumar, sorry for my late response...

> 770 means the current user of the container should have enough permission, so perhaps your application uses a different user.

Yeah, that makes sense to me. However, I went into the container and typed whoami, which should show the container user being used, and after I manually changed the owner of these PV folders to the value of whoami, it worked. So it appears to me that it is the container user, rather than another user. It makes me confused...

@morningspace (Author)

Anyway, since I now have a workaround, I'm going to close this issue for now, and will reopen it if anything new is found.

@gbraad (Contributor) commented Nov 18, 2019 via email
