kubernetes: persistent volumes #19

Open
bradrydzewski opened this Issue Oct 24, 2018 · 8 comments

bradrydzewski commented Oct 24, 2018

This helps ensure all pods have access to a shared workspace and can run on the same machine. It also helps us implement temp_dir volumes (as defined in the drone yaml). Persistent volumes are currently disabled while we figure out an approach to scheduling pods on specific nodes.
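
For context, a minimal sketch (using the k8s.io/api/core/v1 Go types, not the drone-runtime code itself) of how a temp_dir volume and a claim-backed shared workspace could be declared; the volume names and claim name are placeholders, not what the runtime actually generates.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// buildWorkspaceVolumes sketches the two volume kinds discussed here:
// an emptyDir for temp_dir-style volumes and a claim-backed volume for
// the shared workspace. The names are illustrative only.
func buildWorkspaceVolumes(claimName string) []corev1.Volume {
	return []corev1.Volume{
		{
			// temp_dir: scratch space that lives and dies with the pod.
			Name: "temp-dir",
			VolumeSource: corev1.VolumeSource{
				EmptyDir: &corev1.EmptyDirVolumeSource{},
			},
		},
		{
			// Shared workspace: backed by a PersistentVolumeClaim so every
			// step pod scheduled on the same node sees the same files.
			Name: "workspace",
			VolumeSource: corev1.VolumeSource{
				PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
					ClaimName: claimName,
				},
			},
		},
	}
}

func main() {
	for _, v := range buildWorkspaceVolumes("drone-workspace") {
		fmt.Println(v.Name)
	}
}
```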

based64god commented Dec 10, 2018

Has any progress been made on this? I'd be interested in taking a crack at it if it hasn't yet been touched.

bradrydzewski commented Dec 10, 2018

No progress really, but I would love some assistance :) I added some code to create the PV and PVC data objects but never had time to finish the job:
https://github.com/drone/drone-runtime/blob/master/engine/kube/volume.go
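
For anyone who wants to pick this up, a rough sketch of what building those PV and PVC data objects can look like with the Kubernetes Go client types (roughly as they were at the time); this is not the contents of volume.go, and the names, sizes, and host path are placeholders.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathVolume builds a HostPath-backed PersistentVolume and a matching
// PersistentVolumeClaim for a single build. All names and sizes are
// placeholders, not what drone-runtime actually uses.
func hostPathVolume(name, path string) (*corev1.PersistentVolume, *corev1.PersistentVolumeClaim) {
	hostPathType := corev1.HostPathDirectoryOrCreate

	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("1Gi"),
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: path, Type: &hostPathType},
			},
		},
	}

	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeName:  name, // bind directly to the PV built above
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
	return pv, pvc
}

func main() {
	_, _ = hostPathVolume("drone-build-1", "/tmp/drone")
}
```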

zetaab commented Dec 11, 2018

I am looking into this currently. However, the problem I see is that we need ReadWriteMany volumes in the Kubernetes cluster. At least we do not have those; we would need to install something like https://github.com/gluster/gluster-kubernetes
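
If a ReadWriteMany-capable backend such as GlusterFS were installed, the claim side might look roughly like this; the storage class name and size are hypothetical.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rwxClaim sketches what a ReadWriteMany claim against a GlusterFS-backed
// StorageClass could look like. The class name "glusterfs-storage" is a
// placeholder for whatever the cluster actually provides.
func rwxClaim(name string) *corev1.PersistentVolumeClaim {
	className := "glusterfs-storage"
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &className,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("5Gi"),
				},
			},
		},
	}
}

func main() { _ = rwxClaim("drone-workspace") }
```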

zetaab referenced a pull request that will close this issue Dec 11, 2018

Open

implement storageclass #27

bradrydzewski commented Dec 11, 2018

@zetaab a ReadWriteMany volume is not required because all Pipeline steps execute on the same node, using a shared workspace (make sure you pass the --kube-node flag when testing with the cli). This means the persistent volume needs to be of type HostPath.
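
A minimal sketch of the same-node idea: every step pod targets the node chosen for the build and mounts the same HostPath workspace. The node name, image, and paths here are illustrative only, not what drone-runtime actually emits.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// stepPod sketches how a pipeline step pod could be pinned to the node
// chosen for the build, so a HostPath workspace is visible to every step.
func stepPod(name, node, image string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// All steps of one pipeline target the same node, mirroring
			// what the --kube-node flag does when testing with the cli.
			NodeName:      node,
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "step",
				Image: image,
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workspace",
					MountPath: "/drone/src",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "workspace",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/drone"},
				},
			}},
		},
	}
}

func main() { _ = stepPod("step-1", "node-1", "golang:1.11") }
```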

zetaab commented Dec 11, 2018

Sharing hostPath volumes in Kubernetes is not recommended. On platforms like OpenShift it is not even allowed (without modifying things).

This could maybe be used in the future for hostPath-style volumes (once dynamic provisioning is supported): https://kubernetes.io/docs/concepts/storage/storage-classes/#local

Currently the problem is that if namespace x takes hostPath /foo, namespace y can do the same, so it is kind of an isolation issue between namespaces. Hopefully that local-storage dynamic provisioner will solve that issue somehow.
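
For reference, a rough sketch of the local StorageClass pattern from the linked docs: a no-provisioner class with delayed binding, plus a local PersistentVolume explicitly tied to one node. All names, paths, and sizes are placeholders.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localStorage sketches the "local" StorageClass pattern: a class with no
// provisioner and delayed binding, and a local PV bound to a single node.
func localStorage(node string) (*storagev1.StorageClass, *corev1.PersistentVolume) {
	bindingMode := storagev1.VolumeBindingWaitForFirstConsumer

	sc := &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "local-storage"},
		Provisioner:       "kubernetes.io/no-provisioner",
		VolumeBindingMode: &bindingMode,
	}

	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "drone-local-pv"},
		Spec: corev1.PersistentVolumeSpec{
			StorageClassName: "local-storage",
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Gi"),
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/mnt/disks/drone"},
			},
			// Unlike a bare HostPath, a local PV declares which node owns
			// the data, so the scheduler keeps claimants on that node.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{node},
						}},
					}},
				},
			},
		},
	}
	return sc, pv
}

func main() { _, _ = localStorage("node-1") }
```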

zetaab commented Dec 11, 2018

@bradrydzewski btw, how are you planning to use that --kube-node thing? When a new build starts, do you just define one node where everything should be executed? Do you need to know beforehand which node has enough resources to execute it? Or should the user define which node the build always runs on?

bradrydzewski commented Dec 11, 2018

The --kube-node parameter is only required from the command line so you can more closely emulate how drone works. Under the hood drone ensures all pipeline steps are assigned to the same node. With a persistent volume claim this would no longer be necessary.

I think the default volume type should be HostPath because installing a volume plugin should not be a requirement for using Drone. But we can certainly give teams the option to use alternate volume plugin types if they want or need to.
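
To illustrate that idea of HostPath by default with an opt-in alternative, a hypothetical helper that picks the workspace volume source; the driver names and values are made up and not an actual Drone configuration surface.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// workspaceVolume sketches "HostPath by default, something else if the
// team opts in". The driver names, claim name, and path are placeholders.
func workspaceVolume(driver string) corev1.VolumeSource {
	switch driver {
	case "pvc":
		// Teams that installed a volume plugin can point the workspace
		// at a pre-created (or dynamically provisioned) claim instead.
		return corev1.VolumeSource{
			PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
				ClaimName: "drone-workspace",
			},
		}
	default:
		// Default: HostPath, so no volume plugin has to be installed.
		return corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/drone"},
		}
	}
}

func main() {
	fmt.Println(workspaceVolume("pvc").PersistentVolumeClaim != nil) // true
}
```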

zetaab commented Dec 11, 2018

Yes, I agree with you: there should be a hostPath option (in the future this can be moved to a PVC using the local hostPath dynamic provisioner) and a PVC option. hostPath should maybe also be the default, because installing things like RWX volumes is not that easy. An RWX volume is needed if people execute two pipeline steps simultaneously; otherwise RWO is enough. However, it might be quite slow to execute pipelines with RWO because detaching/attaching the volume for each step takes time.
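
A tiny sketch of that trade-off, assuming the runner knows whether any steps of the pipeline run in parallel; this is illustrative only.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// accessMode sketches the trade-off described above: ReadWriteMany only
// when steps actually run in parallel, ReadWriteOnce otherwise, since (as
// noted above) RWO can be slow when the volume is re-attached per step.
func accessMode(parallelSteps bool) corev1.PersistentVolumeAccessMode {
	if parallelSteps {
		return corev1.ReadWriteMany
	}
	return corev1.ReadWriteOnce
}

func main() {
	fmt.Println(accessMode(true), accessMode(false))
}
```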
