RFC kubernetes support #31714

Closed
onorua opened this Issue Mar 7, 2016 · 5 comments

@onorua
Contributor

onorua commented Mar 7, 2016

Requests for comments and suggestions

As far as I can see, work on k8s support has started here: https://docs.saltstack.com/en/develop/ref/modules/all/salt.modules.k8s.html, but so far only labels are supported.
We have been using k8s for quite a long time, and I've created modules for it (they are pretty ugly, wrapping kubectl, because when we started the API was changing frequently). The API is more or less stable now, and I would like to contribute to Salt's k8s support. I did not know where to ask these questions, so I decided to open this issue.
List of questions below:

Create/replace/delete of resources such as rc, pod, service, secret, and even autoscaling

Quite easy; it can be done the same way as for labels. What we need to agree on is whether we want a simplified notation (as in Deployment Manager) or full-featured manifest file definitions. Right now our company uses full manifests, because we use pure Kubernetes.
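
For illustration, a rough sketch of what a manifest-based execution module function could look like, talking to the API server with plain requests the way the existing labels code does. The function name, the naive kind-to-endpoint pluralization, and the unauthenticated localhost API address are all placeholders, not anything that exists in the module today:

```python
import json

import requests


def create_resource(manifest, apiserver_url="http://127.0.0.1:8080"):
    """POST a full Kubernetes manifest (a dict) to the matching v1 API endpoint."""
    # Naive pluralization: ReplicationController -> replicationcontrollers, Pod -> pods, ...
    kind = manifest["kind"].lower() + "s"
    namespace = manifest["metadata"].get("namespace", "default")
    url = "{0}/api/v1/namespaces/{1}/{2}".format(apiserver_url, namespace, kind)
    resp = requests.post(url, data=json.dumps(manifest),
                         headers={"Content-Type": "application/json"})
    return resp.json()
```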

Rolling update.

This one is tricky. Right now it works like this:

  • create a new replication controller (with a new name) with the new parameters and replicas set to 0
  • scale the new RC up by 1
  • scale the old replication controller down by 1 once the new pod is up;
    continue until the old one is at 0 and the new one equals the desired number of replicas.

Each iteration takes 30 seconds at minimum, but generally about a minute, and sometimes more with aggressive readiness probe settings.
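
In code, the loop above comes out to roughly this. It is a sketch only: it assumes the new RC was already created with 0 replicas (e.g. with the create_resource() sketch above), an unauthenticated API server on localhost:8080, the default namespace, and a fixed sleep in place of a real readiness-probe check:

```python
import json
import time

import requests

API = "http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers"
PATCH_HEADERS = {"Content-Type": "application/strategic-merge-patch+json"}


def _replicas(name):
    """Return the current desired replica count of an RC."""
    return requests.get("{0}/{1}".format(API, name)).json()["spec"]["replicas"]


def _scale(name, replicas):
    """Patch an RC to the given replica count."""
    requests.patch("{0}/{1}".format(API, name),
                   data=json.dumps({"spec": {"replicas": replicas}}),
                   headers=PATCH_HEADERS)


def rolling_update(old_name, new_name, target, pause=60):
    """Move replicas one at a time from the old RC to the new one."""
    while _replicas(old_name) > 0 or _replicas(new_name) < target:
        if _replicas(new_name) < target:
            _scale(new_name, _replicas(new_name) + 1)
        time.sleep(pause)            # crude stand-in for "wait until the new pod is up"
        if _replicas(old_name) > 0:
            _scale(old_name, _replicas(old_name) - 1)
```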

Problems:

  1. What do we do when another client runs the same task in parallel, executing the same rolling update? We need to keep state and information about the new and old replication controllers somewhere. Where?
  2. How do we avoid blocking other tasks while we wait for each pod to be killed (~1 minute per pod)?
  3. A new replication controller name is required. This is not actually a problem, just a warning; it can be worked around by "renaming" the RC. But if we do not find a way to store information about the current state, we might run into race conditions.

Please give your feedback so I can start working on extending the k8s module and state.

@cachedout
Contributor

cachedout commented Mar 7, 2016

@onorua
Contributor

onorua commented Mar 8, 2016

I have been thinking a bit and decided that we can keep the replication controller's state in annotations on the RC itself. The same goes for the new name; it can be stored in an update-partner annotation, similar to the one kubectl uses. The question of how to make it non-blocking for other tasks remains open, and it is not even k8s-specific; it is Salt-specific, I guess.
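
Roughly, the annotation idea could look like the sketch below; the annotation key, the namespace, and the unauthenticated local API address are placeholders, not a settled design:

```python
import json

import requests

API = "http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers"
PATCH_HEADERS = {"Content-Type": "application/strategic-merge-patch+json"}
PARTNER_KEY = "update-partner"   # placeholder annotation key


def set_update_partner(rc_name, partner_name):
    """Record the name of the RC we are rolling over to as an annotation."""
    patch = {"metadata": {"annotations": {PARTNER_KEY: partner_name}}}
    resp = requests.patch("{0}/{1}".format(API, rc_name),
                          data=json.dumps(patch), headers=PATCH_HEADERS)
    return resp.status_code == 200


def get_update_partner(rc_name):
    """Return the recorded partner name, or None if no update is in progress."""
    rc = requests.get("{0}/{1}".format(API, rc_name)).json()
    return rc["metadata"].get("annotations", {}).get(PARTNER_KEY)
```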

@titilambert
Contributor

titilambert commented Mar 8, 2016

Hello!
I think the best (and hardest) way to do this is to load kubectl as a Python module. See here: https://blog.filippo.io/building-python-modules-with-go-1-5/#buildingagopythonmodule
I already tried, but this is not really easy with K8S...
That way we are sure to stay on the latest version of Kubernetes, and we reduce the maintenance of this module ;)
I agree with keeping all necessary data as annotations on K8S objects 👍
About the rolling update, I think we need to handle both blocking and non-blocking behavior. The blocking behavior should be easy, and the default, if you can load Kubernetes as a Python module.

I don't have time to work on it right now, but I can help/review if needed.
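
Purely for illustration, the Python side of that approach might look something like this. It assumes kubectl's Go code could be compiled with `go build -buildmode=c-shared` into a libkubectl.so that exports a hypothetical RollingUpdate function; neither the library nor the function exists today, the names are just placeholders:

```python
import ctypes

# Hypothetical shared library built from kubectl's Go source with
# `go build -buildmode=c-shared -o libkubectl.so`; nothing like it ships today.
lib = ctypes.CDLL("./libkubectl.so")

# Hypothetical exported function: RollingUpdate(old_rc, new_rc) -> 0 on success.
lib.RollingUpdate.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
lib.RollingUpdate.restype = ctypes.c_int

if lib.RollingUpdate(b"frontend-v1", b"frontend-v2") != 0:
    raise RuntimeError("rolling update failed")
```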

@onorua
Contributor

onorua commented Mar 8, 2016

Actually, we don't need to build in kubectl to make sure we use the latest version; we can use their Swagger interface to generate a Python client, as described here:
http://www.devoperandi.com/python-client-for-kubernetes/
Blocking behavior is easy, and I could do it within a week, but I don't see the point of it IMO. Let me elaborate:
Let's imagine you have 3 RCs with 5 replicas each. You do a mass upgrade and all 3 RCs need to be updated. With a sequential rolling update at roughly one minute per replica, simple math tells you:
3 * 5 = 15 minutes
whereas if we could do this in parallel, we could do it within 5 minutes. The bigger your deployment, the bigger the delay.
I do not know Salt's internals really well, but there must be some way to perform this non-blocking or in parallel.
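
To make the arithmetic concrete, here is a sketch of running the three updates concurrently with a thread pool, reusing the rolling_update() sketch from the issue description above; the RC names and replica counts are made up:

```python
from concurrent.futures import ThreadPoolExecutor

# (old RC, new RC, desired replicas) -- names are illustrative only
updates = [
    ("frontend-v1", "frontend-v2", 5),
    ("backend-v1", "backend-v2", 5),
    ("worker-v1", "worker-v2", 5),
]

with ThreadPoolExecutor(max_workers=len(updates)) as pool:
    futures = [pool.submit(rolling_update, old, new, n) for old, new, n in updates]
    for future in futures:
        future.result()   # surface any failure from the worker threads
```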

@SEJeff
Member

SEJeff commented Sep 14, 2017

There is a kubernetes module and the associated state module which does all of this for you. As far as I can tell, the k8s module should be deprecated.

I think this can be closed if you're ok with using the kubernetes module.

@onorua onorua closed this Sep 16, 2017
