Feature: "kn service apply" #964

Merged (10 commits) on Nov 2, 2020

Conversation

@rhuss (Contributor) commented Aug 3, 2020

This implementation reuses the kubectl.kubernetes.io/last-applied-configuration annotation for storing the original configuration, so that kubectl tooling can be used as well.

Only client-side patching is implemented, as it's not clear to me whether server-side patching is supported as part of the Knative API contract.

An initial kn service apply for creating a service works as expected, but a subsequent kn service apply fails with a patch conflict error.

The following challenges are still open:

  • Since we build up the KService with typed structs, fields that are typed as a struct (not a pointer to a struct) are always present when this is serialized to the JSON string added to the annotation mentioned above. This includes creationTimestamp or resources in the container spec. kubectl, which always uses Unstructured, doesn't suffer from this problem.
  • The name of a container in the PodSpec is mandatory for Kubernetes, but optional in Knative (it will be auto-named "user-container" if not provided). So it is always present (even as an empty string) when serializing a Service struct to JSON, which indicates an intention of the user to set it to an empty string (which is actually not the case, as we never specify that name). That is very likely one of the causes of the patch conflict (as the backend sets it to "user-container").

The idea is to convert the Service to an Unstructured object and remove all map entries that we know have not been specified. This work is still ongoing.
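For illustration, a minimal sketch of that pruning step (assuming a conversion via apimachinery's unstructured converter; the helper name and the exact set of pruned fields are only an example of the approach, not the final implementation):

```go
import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"

	servingv1 "knative.dev/serving/pkg/apis/serving/v1"
)

// pruneUnspecifiedFields converts a typed Service into an unstructured map and
// drops entries that are only present because the corresponding Go field is a
// value type (and is therefore always serialized), not because the user set them.
func pruneUnspecifiedFields(service *servingv1.Service) (map[string]interface{}, error) {
	u, err := runtime.DefaultUnstructuredConverter.ToUnstructured(service)
	if err != nil {
		return nil, err
	}
	// Defaulted or empty fields that should not count as "specified by the user".
	unstructured.RemoveNestedField(u, "metadata", "creationTimestamp")
	unstructured.RemoveNestedField(u, "spec", "template", "metadata", "creationTimestamp")
	unstructured.RemoveNestedField(u, "status")
	return u, nil
}
```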

I added the PR nevertheless as a draft, as I'm on PTO for the next two weeks and very likely can't continue on this PR until then. If someone feels like taking a look, feel free to work on this PR.

Also, no tests have been implemented yet.

@knative-prow-robot knative-prow-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Aug 3, 2020
@googlebot googlebot added the cla: yes Indicates the PR's author has signed the CLA. label Aug 3, 2020
@knative-prow-robot (Contributor) commented:

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@knative-prow-robot knative-prow-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Aug 3, 2020
@duglin (Contributor) commented Aug 3, 2020

@rhuss what's the diff between this and kn service create --force? They both seem to want to update an existing service with new metadata.

@kwiesmueller commented:

Hey there,
this came up in a GitHub search for Server-Side Apply.
I see you are using the kubectl.kubernetes.io/last-applied-configuration annotation, which is provided by client-side apply.
We are working on Server-Side Apply, which might resolve some of the issues I read in the description above.
If you need any help or have questions around apply, feel free to get in touch with us (wg-api-expression) on Slack or ping me here.
https://kubernetes.slack.com/archives/C0123CNN8F3

@wslyln (Contributor) commented Aug 6, 2020

@duglin `kn service create --force` will completely overwrite the existing service resource (akin to git push --force), while `kn service apply` will do a three-way merge of the configuration (akin to git merge).

@rhuss (Contributor, Author) commented Aug 20, 2020

@kwiesmueller Thanks for the offer! I'm just back from PTO and will continue on this issue soon.

@knative-prow-robot knative-prow-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Sep 9, 2020
@googlebot commented:

All (the pull request submitter and all commit authors) CLAs are signed, but one or more commits were authored or co-authored by someone other than the pull request submitter.

We need to confirm that all authors are ok with their commits being contributed to this project. Please have them confirm that by leaving a comment that contains only @googlebot I consent. in this pull request.

Note to project maintainer: There may be cases where the author cannot leave a comment, or the comment is not properly detected as consent. In those cases, you can manually confirm consent of the commit author(s), and set the cla label to yes (if enabled on your project).

ℹ️ Googlers: Go here for more info.

@googlebot googlebot added cla: no Indicates the PR's author has not signed the CLA. and removed cla: yes Indicates the PR's author has signed the CLA. labels Sep 9, 2020
@googlebot commented:

CLAs look good, thanks!

ℹ️ Googlers: Go here for more info.

@googlebot googlebot added cla: yes Indicates the PR's author has signed the CLA. and removed cla: no Indicates the PR's author has not signed the CLA. labels Sep 9, 2020
@knative-prow-robot knative-prow-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Sep 9, 2020
@rhuss (Contributor, Author) commented Sep 9, 2020

@kwiesmueller Finally I got around to continuing work on this PR. I am still trying to get client-side patching to work before switching to server-side, but there are still some stumbling blocks with defaulting, serializing and also the Knative backend handling. At the moment I'm a bit stuck trying to understand the following error message:

✦ at 20:00 ❯ kn service apply random --revision-name "" --image rhuss/random:2.0
Error: patch:
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"serving.knative.dev/v1","kind":"Service","metadata":{"name":"random","namespace":"default"},"spec":{"template":{"metadata":{"annotations":{"client.knative.dev/user-image":"rhuss/random:2.0"}},"spec":{"containers":[{"image":"rhuss/random:2.0","name":"user-container"}]}}}}
spec:
  template:
    metadata:
      annotations:
        client.knative.dev/user-image: rhuss/random:2.0
    spec:
      $setElementOrder/containers:
      - name: user-container
      containers:
      - image: rhuss/random:2.0
        name: user-container

conflicts with changes made from original to current:
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"serving.knative.dev/v1","kind":"Service","metadata":{"name":"random","namespace":"default"},"spec":{"template":{"metadata":{"annotations":{"client.knative.dev/user-image":"rhuss/random:1.0"}},"spec":{"containers":[{"image":"rhuss/random:1.0","name":"user-container"}]}}}}
    serving.knative.dev/creator: minikube-user
    serving.knative.dev/lastModifier: minikube-user
  generation: 1
  managedFields:
  - apiVersion: serving.knative.dev/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        .: {}
        f:template:
          .: {}
          f:metadata:
            .: {}
            f:annotations:
              .: {}
              f:client.knative.dev/user-image: {}
            f:creationTimestamp: {}
          f:spec:
            .: {}
            f:containers: {}
    manager: kn
    operation: Update
    time: "2020-09-09T18:00:46Z"
  - apiVersion: serving.knative.dev/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        .: {}
        f:address:
          .: {}
          f:url: {}
        f:conditions: {}
        f:latestCreatedRevisionName: {}
        f:latestReadyRevisionName: {}
        f:observedGeneration: {}
        f:traffic: {}
        f:url: {}
    manager: controller
    operation: Update
    time: "2020-09-09T18:00:54Z"
  resourceVersion: "1473596"
  selfLink: /apis/serving.knative.dev/v1/namespaces/default/services/random
  uid: a37d3619-302e-4a5f-98c5-c4889faf6654
spec:
  template:
    spec:
      containerConcurrency: 0
      containers:
      - name: user-container
        readinessProbe:
          successThreshold: 1
          tcpSocket:
            port: 0
      timeoutSeconds: 300
  traffic:
  - latestRevision: true
    percent: 100

@kwiesmueller can you see where the actual conflict is? I thought I fixed everything, but can't really see what is still conflicting.

@kwiesmueller commented:

If you're using SSA then the conflict error should give you the exact fields you get a conflict with.

Did I read correctly, that it's not SSA yet?
If not, I'm not yet sure what the conflict could be. Will take another look once I have time tomorrow.

@kwiesmueller commented:

It could be the CSA annotation; we should have a fix for that in the latest release.

@rhuss (Contributor, Author) commented Sep 9, 2020

> Did I read correctly, that it's not SSA yet?

True, it's a plain client-side patch for now. But happy to explore SSA, too (I thought a client-side patch would be easier to start with).
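For reference, a client-side three-way merge along these lines can be computed with apimachinery's JSON merge patch helper; this is only a rough sketch of the mechanics, not the code in this PR:

```go
import (
	"k8s.io/apimachinery/pkg/util/jsonmergepatch"
)

// threeWayPatch computes a client-side three-way JSON merge patch between the
// last-applied configuration (original), the newly desired spec (modified) and
// the live object on the cluster (current). The resulting patch would then be
// sent to the API server with types.MergePatchType.
func threeWayPatch(original, modified, current []byte) ([]byte, error) {
	return jsonmergepatch.CreateThreeWayJSONMergePatch(original, modified, current)
}
```

kubectl's client-side apply works with the same three inputs: the last-applied annotation as original, the provided manifest as modified, and the live object as current.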

@rhuss (Contributor, Author) commented Sep 14, 2020

@kwiesmueller do you have a good entrypoint/sample showing how to use SSA so that I can find out what's going on? Currently I'm a bit lost.
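For anyone landing here later, a minimal server-side apply sketch with client-go's dynamic client looks roughly like this (the field manager name and the surrounding function are only illustrative, not what kn necessarily ends up using):

```go
import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// serverSideApply sends the full desired object as an apply patch. The API
// server merges it based on managedFields and reports conflicts per field.
func serverSideApply(ctx context.Context, dc dynamic.Interface, namespace, name string,
	desired map[string]interface{}) error {

	gvr := schema.GroupVersionResource{Group: "serving.knative.dev", Version: "v1", Resource: "services"}
	data, err := json.Marshal(desired)
	if err != nil {
		return err
	}
	force := true
	_, err = dc.Resource(gvr).Namespace(namespace).Patch(ctx, name, types.ApplyPatchType, data,
		metav1.PatchOptions{FieldManager: "kn", Force: &force})
	return err
}
```

With Force set, conflicts with other field managers are overridden; without it, the server returns a conflict error that lists the exact fields and managers involved.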

@rhuss (Contributor, Author) commented Sep 14, 2020

/retest

@knative-metrics-robot commented:

The following is the coverage report on the affected files.
Say /test pull-knative-client-go-coverage to re-run this coverage report

| File | Old Coverage | New Coverage | Delta |
| ---- | ------------ | ------------ | ----- |
| pkg/kn/commands/service/apply.go | Does not exist | 84.4% | |
| pkg/kn/commands/service/create.go | 84.1% | 82.6% | -1.5 |
| pkg/kn/commands/service/service.go | 86.4% | 91.3% | 4.9 |
| pkg/serving/v1/apply.go | Does not exist | 80.5% | |
| pkg/serving/v1/client.go | 66.8% | 68.0% | 1.2 |
| pkg/serving/v1/client_mock.go | 93.3% | 93.7% | 0.3 |


@maximilien (Contributor) left a comment:

Thanks for addressing feedback. LGTM.

@navidshaikh feel free to do your review and merge


@rhuss (Contributor, Author) commented Oct 30, 2020

@navidshaikh I think we are good to merge if you don't have any objections.

@knative-prow-robot knative-prow-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 30, 2020

This commit introduces a client-side apply with a plain JSON merge patch. This is more limited than a strategic merge patch as it does not allow merging lists (they are just overwritten). It is also not a real three-way merge that would lead to a conflict when both the server-side state and the provided update overlap in fields that were updated compared to the shared original configuration. This is a limitation of the three-way JSON merge itself, as pointed out in kubernetes/kubernetes#40666 (review).

This limitation is shared with kubectl, which suffers from the same issue when using `kubectl apply` with a custom resource (i.e. with everything whose schema is not registered within kubectl).

Tests are missing, too, but will come soon
* More tests
* Example for kn service apply
* Remove commented-out code
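To illustrate the list limitation described above, here is a small, self-contained example (using the evanphx/json-patch library purely for demonstration; it is not necessarily a dependency of this PR): a plain JSON merge patch treats the containers array as an opaque value and replaces it wholesale instead of merging entries by name.

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	original := []byte(`{"spec":{"containers":[{"name":"user-container","image":"rhuss/random:1.0"}]}}`)
	modified := []byte(`{"spec":{"containers":[{"name":"user-container","image":"rhuss/random:2.0"}]}}`)

	// The merge patch contains the complete new containers array, so any entries
	// present only on the cluster side would be dropped when the patch is applied.
	patch, err := jsonpatch.CreateMergePatch(original, modified)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patch))
	// Prints something like:
	// {"spec":{"containers":[{"image":"rhuss/random:2.0","name":"user-container"}]}}
}
```

A strategic merge patch would merge the containers list by its name key, but that information comes from Go struct tags on built-in types and is not available for custom resources such as Knative Services.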
@knative-prow-robot knative-prow-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 31, 2020

@navidshaikh (Collaborator) left a comment:

/lgtm

thank you!
I've added a few questions and a few nit suggestions which can be addressed in a subsequent PR along with the CHANGELOG.

var waitFlags commands.WaitFlags

serviceApplyCommand := &cobra.Command{
Use: "apply NAME",
Collaborator:

NAME could be derived from the file as well; I'm not sure what the best way is to represent the multiple options apply could work with. The examples better represent the use of the command in this case.

Contributor Author:

True, we should think about a common scheme here (e.g. using [ ] to describe optional positional arguments).

// can't be merged. Ideally a strategic merge patch should be used, which allows a more fine-grained
// way of performing the merge (but this is not supported for custom resources).
// See issue https://github.com/knative/client/issues/1073 for more details on how this method should be
// improved for a better merge strategy.
Collaborator:

license header needs to be swapped with imports

return nil, err
}

if annotate {
Collaborator:

will there be a case when we'd want to control whether to annotate the service?

Contributor Author:

Yes, there is. IIUR, when you do the diff you don't want to have that annotation present. I would have to look a bit closer again, but there is a good reason. I also more or less adopted the way kubectl does its client-side apply.
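For context, a minimal sketch of what kubectl-style handling of that annotation looks like (the helper name and the annotate switch are illustrative, mirroring the question above): the desired object is serialized without the annotation itself and then stored under kubectl.kubernetes.io/last-applied-configuration, so a later apply or diff can reconstruct the original input.

```go
import (
	"encoding/json"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

const lastAppliedAnnotation = "kubectl.kubernetes.io/last-applied-configuration"

// setLastAppliedAnnotation stores the serialized desired state on the object.
// When annotate is false (e.g. for a pure diff), the annotation is left out.
func setLastAppliedAnnotation(u *unstructured.Unstructured, annotate bool) error {
	if !annotate {
		return nil
	}
	// Serialize the object as the user specified it, without the annotation itself.
	clean := u.DeepCopy()
	annotations := clean.GetAnnotations()
	delete(annotations, lastAppliedAnnotation)
	clean.SetAnnotations(annotations)

	data, err := json.Marshal(clean.Object)
	if err != nil {
		return err
	}
	annotations = u.GetAnnotations()
	if annotations == nil {
		annotations = map[string]string{}
	}
	annotations[lastAppliedAnnotation] = string(data)
	u.SetAnnotations(annotations)
	return nil
}
```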

if len(containers.([]interface{})) == 0 {
return nil
}
return containers.([]interface{})[0].(map[string]interface{})
Collaborator:

I am not sure about the position of the user-container in the array; is it always at index 0 (thinking about multi-container)?

Contributor Author:

For the moment we are assuming only a single container, that's true. Maybe we should iterate over the list and just pick the container with the name user-container?
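A possible sketch of that iteration over an unstructured spec (only a suggestion, not what the PR currently does; the helper name is made up):

```go
import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// findUserContainer returns the container entry named "user-container", falling
// back to the first container, from an unstructured Service map.
func findUserContainer(service map[string]interface{}) map[string]interface{} {
	containers, found, err := unstructured.NestedSlice(service, "spec", "template", "spec", "containers")
	if err != nil || !found || len(containers) == 0 {
		return nil
	}
	for _, c := range containers {
		if container, ok := c.(map[string]interface{}); ok {
			if name, _, _ := unstructured.NestedString(container, "name"); name == "user-container" {
				return container
			}
		}
	}
	// No container carries the default name, fall back to the first entry.
	if container, ok := containers[0].(map[string]interface{}); ok {
		return container
	}
	return nil
}
```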

Collaborator:

Should this only remove the name of the user-container, or of all the containers in the array? If it's the latter, we can simply range over the array and remove the names. I need to catch up on the multi-container support to understand this better.

@@ -68,6 +69,17 @@ type KnServingClient interface {
// place.
UpdateServiceWithRetry(name string, updateFunc ServiceUpdateFunc, nrRetries int) error

// Apply a service's definition to the cluster. The full service declaration needs to be provided,
// which is different to UpdateService which can also do a partial update. If the given
// service does not already exists (identified by name) then the service is create.
Collaborator:

Suggested change
// service does not already exists (identified by name) then the service is create.
// service does not already exists (identified by name) then the service is created.

// which is different to UpdateService which can also do a partial update. If the given
// service does not already exists (identified by name) then the service is create.
// If the service exists, then a three-way merge will be performed between the original
// configuration given (from the last "apply" operation), the new configuration as given ]
Collaborator:

Suggested change
// configuration given (from the last "apply" operation), the new configuration as given ]
// configuration given (from the last "apply" operation), the new configuration as given

"knative.dev/client/pkg/util"
)

func TestServiceApply(t *testing.T) {
Collaborator:

we can also add another e2e test for apply with a file

Contributor Author:

👍

@knative-prow-robot knative-prow-robot added the lgtm Indicates that a PR is ready to be merged. label Nov 2, 2020
@knative-prow-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: maximilien, navidshaikh, rhuss

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [maximilien,navidshaikh,rhuss]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@knative-prow-robot knative-prow-robot merged commit 8ca97c7 into knative:master Nov 2, 2020