Conversation


@fabianvf fabianvf commented Nov 2, 2018

Currently just the proposal.

To view the rendered version: https://github.com/fabianvf/openshift-restclient-python/blob/apply/doc/proposals/apply.md

@openshift-ci-robot openshift-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Nov 2, 2018

fabianvf commented Nov 2, 2018

ping @willthames

@willthames
Contributor

We probably need to be careful about what patch we send (this is probably obvious in the kubectl code). We need to send a JSON patch that allows keys to be deleted as well as added. One of the drawbacks of the existing approach is the inability to delete keys without using force, and force doesn't work with all kinds (e.g. force on a Service tends to cause a 422 even without a change).

@willthames
Contributor

Also, should we make the lastAppliedConfiguration annotation configurable? I'd rather not use kubectl's annotation, but would agree with making it configurable so that users can choose kubectl's annotation if they wish


@djzager djzager left a comment


Comments for clarity.


## Problem
Current merge/patch strategy in the Ansible modules is not sufficient for a variety of reasons.
1. There is no way to remove a field from the Ansible module without doing a full replace

"There is no way to remove a field from a Kubernetes object using the Ansible module without doing a full replace of the object."

That's how I read it, but it wasn't clear. Suggest making the subject more clear.


The basic algorithm is as follows:

1. `GET` the current object

I think it would be helpful if you kept the language consistent. My belief is that when you say "current object" (from 1), "existing one" (from 3), and "current state of the object" (from 5) you are referring to the same thing.


Same thing with "new object" (from 3) and "desired definition" (from 4 + 5).
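Pinning down the terminology the review asks for, a short sketch: the "current object" is the live one fetched in step 1, and the last-applied definition is recovered from an annotation on it. The annotation key shown is kubectl's real one; the function name is hypothetical, not the proposal's actual API.

```python
import json

# kubectl's annotation key for the previously applied definition
LAST_APPLIED = "kubectl.kubernetes.io/last-applied-configuration"

def get_last_applied(current):
    """Recover what was previously applied from the current (live) object.

    Returns None for objects that have never been applied, a case the
    thread later handles explicitly.
    """
    annotations = current.get("metadata", {}).get("annotations", {})
    raw = annotations.get(LAST_APPLIED)
    return json.loads(raw) if raw else None
```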


fabianvf commented Nov 5, 2018

@djzager is the new phrasing more clear?


fabianvf commented Nov 5, 2018

@willthames

We probably need to be careful about what patch we send (this is probably obvious in the kubectl code). We need to send a JSON patch that allows keys to be deleted as well as added. One of the drawbacks of the existing approach is the inability to delete keys without using force, and force doesn't work with all kinds (e.g. force on a Service tends to cause a 422 even without a change).

Yes, the proper way to send a patch with deletions is to set the deleted field's value to `null`. This will definitely be included in any implementation of apply we move forward with.
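As a hedged illustration of those semantics (this follows RFC 7386 JSON Merge Patch and is not the client's actual code), a `null`/`None` value in the patch deletes the key while sibling keys are left alone:

```python
def apply_merge_patch(target, patch):
    """Apply a JSON-merge-patch-style dict (RFC 7386 semantics):
    None deletes the key, dicts merge recursively, anything else
    replaces the old value. Illustrative only."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null means "delete this key"
        elif isinstance(value, dict):
            result[key] = apply_merge_patch(result.get(key), value)
        else:
            result[key] = value
    return result

current = {"metadata": {"labels": {"app": "web", "tier": "db"}}}
patch = {"metadata": {"labels": {"tier": None}}}
# apply_merge_patch(current, patch) drops "tier" and keeps "app"
```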

Also, should we make the lastAppliedConfiguration annotation configurable? I'd rather not use kubectl's annotation, but would agree with making it configurable so that users can choose kubectl's annotation if they wish

It would be pretty trivial to make the annotation configurable (and I'm definitely in favor of it, no harm in allowing it). I'm curious what the opposition to overlapping with kubectl's annotation is, though; it seems to me like having kubectl and the python client store state in separate annotations could end up being fairly frustrating, since at any point one of them would have an outdated lastAppliedConfiguration.

@willthames
Contributor

I'm curious what the opposition to overlapping with kubectl's annotation is, though

My opposition is that people shouldn't be using multiple tools to manage the same state. However, giving it some thought, my reasoning was that kubectl warns you if it doesn't have the lastAppliedConfiguration annotation, but of course using a different annotation wouldn't replace the kubectl one. Seeing two different lastAppliedConfigurations might at least warn users that two different tools are being used, though.

Another reason is that we should be explicit about what tool is managing state, so that when viewing a resource, people don't wonder why there's a kubectl annotation when they don't use kubectl.


djzager commented Nov 6, 2018

Another reason is that we should be explicit about what tool is managing state, so that when viewing a resource, people don't wonder why there's a kubectl annotation when they don't use kubectl.

Speaking strictly from the perspective of a potential user: I would be very surprised if I used kubectl apply and apply from the dynamic client and learned that they were doing different things. I agree that kubectl.kubernetes.io/last-applied-configuration belongs to kubectl. I wonder, though, if by default we should go for least surprising.

@jwmatthews
Member

Another reason is that we should be explicit about what tool is managing state, so that when viewing a resource, people don't wonder why there's a kubectl annotation when they don't use kubectl.

Speaking strictly from the perspective of a potential user: I would be very surprised if I used kubectl apply and apply from the dynamic client and learned that they were doing different things. I agree that kubectl.kubernetes.io/last-applied-configuration belongs to kubectl. I wonder, though, if by default we should go for least surprising.

I'd prefer for us to have compatibility with kubectl apply: I'd like to provide the means for a user to use the dynamic client on resources that were previously manipulated by kubectl apply, and vice versa. I don't think we should enforce a distinction between tools unless we run into a technical reason that forces it.

I do recognize that reusing the kubectl annotation feels a little strange, and perhaps it could introduce a small amount of surprise or confusion for someone inspecting a resource who never used kubectl on it, yet I think defaulting to the same annotation would create a better user experience for most users.
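A minimal sketch of what a configurable annotation might look like, defaulting to kubectl's key for interoperability as suggested above. The default key is kubectl's real annotation; the helper name and parameter are hypothetical, not the client's actual API.

```python
import json

# kubectl's real annotation key; using it by default keeps apply
# interoperable with kubectl, while the parameter lets users opt out.
DEFAULT_ANNOTATION = "kubectl.kubernetes.io/last-applied-configuration"

def record_last_applied(definition, annotation=DEFAULT_ANNOTATION):
    """Store the applied definition under a configurable annotation key."""
    # Serialize before adding the annotation so it doesn't contain itself.
    serialized = json.dumps(definition, separators=(",", ":"))
    metadata = definition.setdefault("metadata", {})
    metadata.setdefault("annotations", {})[annotation] = serialized
    return definition
```

A user who wants a tool-specific key could pass, say, `annotation="example.com/last-applied"`, at the cost of kubectl no longer seeing the recorded state.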

@openshift-ci-robot openshift-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Nov 16, 2018
Ensure unchanged resources are returned as a k8s object
rather than a string

Ensure last_applied annotations aren't included in the diffs
Cope with resources without last_applied annotation, and add
the annotation to such resources too

fabianvf commented Jan 15, 2019

Two documents I found that describe the algorithm apply uses:

https://docs.google.com/document/d/1q1UGAIfmOkLSxKhVg7mKknplq3OTDWAIQGWMJandHzg/edit#heading=h.xgjl2srtytjt

https://docs.google.com/document/d/1-SIcJArYuFE99NbfiOJrkfxCYuLTb9wmbS0HnPMYLEo/edit#heading=h.xgjl2srtytjt

It seems like these documents have a lot of information about merging with a schema (i.e. strategic merge patch), but obviously we can't do that. I'm hoping that as I read more it will become clear how they've implemented apply for things like resources defined by CustomResourceDefinitions, which have no associated OpenAPI schema.

@fabianvf
Member Author

In the first document I found this statement for apply with no schema: "A possible degraded non-schema mode: replace list items instead of merging" - so I think we have the right approach there.
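That degraded mode can be sketched as a patch builder that recurses into dicts but treats lists atomically, so a changed list is replaced rather than merged. This is an illustration of the idea under those assumptions, not the merged implementation; it also emits `None` (null) for keys removed since the last apply, tying in the deletion discussion above.

```python
def build_patch(last_applied, desired):
    """Compute a merge-patch-style dict from the last applied definition
    to the desired one. Dicts recurse; lists and scalars are compared
    atomically (the degraded non-schema mode, since without an OpenAPI
    schema there is no merge key to match list items on). Keys removed
    since the last apply map to None so the server deletes them."""
    patch = {}
    for key in set(last_applied) | set(desired):
        if key not in desired:
            patch[key] = None                  # deleted since last apply
        elif key not in last_applied:
            patch[key] = desired[key]          # newly added
        elif isinstance(last_applied[key], dict) and isinstance(desired[key], dict):
            nested = build_patch(last_applied[key], desired[key])
            if nested:
                patch[key] = nested            # only include real changes
        elif last_applied[key] != desired[key]:
            patch[key] = desired[key]          # changed scalar or list, replaced whole
    return patch
```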

@willthames
Contributor

Those documents aren't public but I've requested access.

I've added some more test cases. Really keen to see more tests being added if anyone has any ideas.

willthames@9616cc7

@fabianvf
Member Author

I'm going to go ahead and merge this so we can start testing it out; leaving this PR hanging isn't doing anyone any good.

@fabianvf fabianvf merged commit a6aaf5b into openshift:master Mar 11, 2019