
Use dynamic client for applying channels manifest rather than calling kubectl #13753

Merged
merged 2 commits into kubernetes:master on Jul 8, 2022

Conversation

olemarkus
Member

The aim of this PR is to remove channels' dependency on the kubectl binary. This makes the Kubernetes version handling a bit more consistent and makes channels easier to run client-side.

The approach is more or less a copy/paste of the pruner, so there are refactoring opportunities there.

@justinsb, I'd appreciate your eyes on this one. There are probably lots of things to think about when using the dynamic client like this.
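
For context, a minimal sketch, assuming client-go's dynamic client, of how a manifest object can be created or updated without shelling out to kubectl; the helper name applyOne and the create-then-update strategy are illustrative and not the exact code in this PR:

package example

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// applyOne creates the object if it does not exist, otherwise updates it in place.
func applyOne(ctx context.Context, client dynamic.Interface, gvr schema.GroupVersionResource, obj *unstructured.Unstructured) error {
	resource := client.Resource(gvr).Namespace(obj.GetNamespace())

	_, err := resource.Create(ctx, obj, metav1.CreateOptions{FieldManager: "kops"})
	if err == nil {
		return nil
	}
	if !apierrors.IsAlreadyExists(err) {
		return fmt.Errorf("creating %s %q: %w", gvr.Resource, obj.GetName(), err)
	}

	// The object already exists; carry over its resourceVersion so the update is accepted.
	existing, err := resource.Get(ctx, obj.GetName(), metav1.GetOptions{})
	if err != nil {
		return err
	}
	obj.SetResourceVersion(existing.GetResourceVersion())
	_, err = resource.Update(ctx, obj, metav1.UpdateOptions{FieldManager: "kops"})
	return err
}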

@olemarkus olemarkus requested a review from justinsb June 8, 2022 18:49
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jun 8, 2022
@k8s-ci-robot k8s-ci-robot requested a review from zetaab June 8, 2022 18:50
@olemarkus
Member Author

/kind office-hours

@hakman
Member

hakman commented Jun 9, 2022

Just FYI, the current approach (since #13731) uses kubectl replace, which also causes some issues:
https://storage.googleapis.com/kubernetes-jenkins/logs/e2e-kops-aws-conformance-1-24/1534818290919018496/artifacts/i-0b53ec288383f99fa/protokube.log

I0609 11:07:32.775549   96689 s3fs.go:329] Reading file "s3://k8s-kops-prow/e2e-e2e-kops-aws-conformance-1-24.test-cncf-aws.k8s.io/addons/core.addons.k8s.io/v1.4.0.yaml"
I0609 11:07:32.959358   96689 apply.go:60] Running command: kubectl replace -f /tmp/channel2405827354/manifest.yaml --force --field-manager=kops
I0609 11:07:33.102919   96689 apply.go:63] error running kubectl replace -f /tmp/channel2405827354/manifest.yaml --force --field-manager=kops
I0609 11:07:33.102938   96689 apply.go:64] Error from server (Forbidden): error when deleting "/tmp/channel2405827354/manifest.yaml": namespaces "kube-system" is forbidden: this namespace may not be deleted
error updating "core.addons.k8s.io": error applying update from "s3://k8s-kops-prow/e2e-e2e-kops-aws-conformance-1-24.test-cncf-aws.k8s.io/addons/core.addons.k8s.io/v1.4.0.yaml": error running kubectl: exit status 1

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 10, 2022
@justinsb justinsb added this to the v1.25 milestone Jul 1, 2022
@justinsb justinsb self-assigned this Jul 1, 2022
}
kind := object.Kind()
if kind == "" {
return fmt.Errorf("failed to find kind in object")
Member

Nit: we might want to log more object info here (maybe %v of object), although I suspect this "won't happen" (tm)
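
As a purely illustrative version of that nit, the error could carry the object itself:

if kind == "" {
	return fmt.Errorf("failed to find kind in object %v", object)
}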

)

// Apply calls kubectl apply to apply the manifest.
// We will likely in future change this to create things directly (or more likely embed this logic into kubectl itself)
func Apply(data []byte) error {
Member

I wonder if we should try to make this pluggable during the transition? cf what we did here https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern/blob/master/pkg/patterns/declarative/pkg/applier/direct.go ...

I do like this applier better than reusing kubectl, as that is giving us trouble e.g. https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern/pull/225/files# ...
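
A rough sketch of what a pluggable applier could look like during such a transition, loosely in the spirit of the kubebuilder-declarative-pattern applier linked above; the interface and type names here are hypothetical, not part of this PR:

package example

import (
	"context"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/client-go/dynamic"
)

// Applier abstracts how a channels manifest gets applied, so the kubectl-based
// and dynamic-client-based code paths could coexist while migrating.
type Applier interface {
	Apply(ctx context.Context, manifest []byte) error
}

// KubectlApplier would keep the pre-existing behaviour of shelling out to kubectl.
type KubectlApplier struct{}

// DirectApplier would talk to the API server through the dynamic client, as this PR does.
type DirectApplier struct {
	Client     dynamic.Interface
	RESTMapper meta.RESTMapper
}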

_, err = execKubectl("replace", "-f", localManifestFile, "--force", "--field-manager=kops")
return err
for gk := range objectsByKind {
if err := p.applyObjectsOfKind(ctx, gk, objectsByKind[gk]); err != nil {
Member

There might be interdependencies here (the classic is a Namespace before objects in that namespace); there are two possible strategies: we could pre-sort, or we could iterate a few times. In practice, because we're going to retry this anyway, I don't think it matters; just one to note for the future!

Member Author

That's a good point. The manifests are applied in order, though, and e.g. CRD and Namespace objects tend to be placed at the top. But I agree this is one for the future.
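
A minimal pre-sort sketch along the lines discussed above, applying CRDs and Namespaces before everything else; the priority table and function name are hypothetical:

package example

import (
	"sort"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// kindPriority gives dependency-like kinds a lower number so they sort first.
var kindPriority = map[string]int{
	"CustomResourceDefinition": 0,
	"Namespace":                1,
}

// sortKinds orders GroupKinds so that kinds other objects depend on tend to be applied first.
func sortKinds(kinds []schema.GroupKind) {
	sort.SliceStable(kinds, func(i, j int) bool {
		pi, ok := kindPriority[kinds[i].Kind]
		if !ok {
			pi = 100
		}
		pj, ok := kindPriority[kinds[j].Kind]
		if !ok {
			pj = 100
		}
		return pi < pj
	})
}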

return err
for gk := range objectsByKind {
if err := p.applyObjectsOfKind(ctx, gk, objectsByKind[gk]); err != nil {
return fmt.Errorf("failed to apply objects of kind %s: %w", gk, err)
Member

We may want to keep on going (i.e. accumulate errors) in case of dependencies

Member Author

Agreed. This one I can easily fix now.
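
A sketch of the loop above rewritten to accumulate errors instead of returning on the first failure, so objects of later kinds still get a chance to apply; errors.Join requires Go 1.20+, which postdates this PR, so a multi-error helper would be needed at the time (assumes "errors" and "fmt" are imported):

var errs []error
for gk := range objectsByKind {
	if err := p.applyObjectsOfKind(ctx, gk, objectsByKind[gk]); err != nil {
		errs = append(errs, fmt.Errorf("failed to apply objects of kind %s: %w", gk, err))
	}
}
return errors.Join(errs...)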

func (p *Applier) applyObjectsOfKind(ctx context.Context, gk schema.GroupKind, expectedObjects []*kubemanifest.Object) error {
klog.V(2).Infof("applying objects of kind: %v", gk)

restMapping, err := p.RESTMapper.RESTMapping(gk)
Member

Nit: I think we should apply with the correct version, i.e. group by GVK.
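
If the objects were grouped by GroupVersionKind instead, the mapping could be resolved for the version each manifest actually declares; this fragment assumes obj is the unstructured object and p.RESTMapper is the mapper used in the hunk below:

gvk := obj.GroupVersionKind()
restMapping, err := p.RESTMapper.RESTMapping(gvk.GroupKind(), gvk.Version)
if err != nil {
	return fmt.Errorf("unable to find resource for %s: %w", gvk, err)
}
gvr := restMapping.Resource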

output, err := cmd.CombinedOutput()
baseResource := p.Client.Resource(gvr)

actualObjects, err := baseResource.List(ctx, v1.ListOptions{})
Member

If we have to read the existing objects, we might want to scope by namespace, just for memory etc. in big clusters. We may not have to do this at all with server-side apply; TBD, maybe.
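
For the namespace-scoping idea, the dynamic client supports listing within a single namespace; the namespace variable here is hypothetical:

actualObjects, err := baseResource.Namespace(namespace).List(ctx, v1.ListOptions{})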

key := namespace + "/" + name

var resource dynamic.ResourceInterface
if namespace != "" {
Member

Nit: If we pass in the restMapping, we can know if this object is namespaced or not (and raise an appropriate error/warning or default the namespace, whatever we want to do)
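
With the restMapping available, the scope makes the namespaced/cluster-scoped decision explicit; a sketch assuming k8s.io/apimachinery/pkg/api/meta is imported, with illustrative error handling:

var resource dynamic.ResourceInterface
if restMapping.Scope.Name() == meta.RESTScopeNameNamespace {
	if namespace == "" {
		return fmt.Errorf("object %q of kind %s is namespaced but has no namespace set", name, gk)
	}
	resource = baseResource.Namespace(namespace)
} else {
	resource = baseResource
}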


obj := expectedObjects.ToUnstructured()

if actual, found := actualMap[key]; found {
Member

I suspect we should be using server-side-apply here. It's a little easier in some ways (it's a Patch, no need to worry about whether or not the object exists), although we've seen some of the field manager challenges during the transition.

Member Author

I have been thinking of doing a regular patch here, but a strategic merge patch would be problematic for the same reason kubectl apply is.

With a regular patch, we would have to infer what to overwrite versus what to keep by looking at the managed fields too.
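
For reference, a server-side-apply sketch with the dynamic client, reusing the resource and obj variables from the surrounding code; whether the field-manager behaviour is acceptable is exactly the open question above (assumes "encoding/json", k8s.io/apimachinery/pkg/types and metav1 are imported):

data, err := json.Marshal(obj)
if err != nil {
	return err
}
force := true
_, err = resource.Patch(ctx, obj.GetName(), types.ApplyPatchType, data, metav1.PatchOptions{
	FieldManager: "kops",
	Force:        &force, // take ownership of fields that conflict with other managers
})
return err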

@justinsb justinsb left a comment

This is awesome and I would love to see us pursue this approach!

@justinsb
Member

justinsb commented Jul 2, 2022

So I think this is great, and we should get this into 1.25. I'm happy to merge this basically as-is and iterate on it, or alternatively, if you want to incorporate some of the suggestions first (if you agree!), we can do it that way.

@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 2, 2022
@olemarkus
Member Author

/test pull-kops-e2e-aws-upgrade-k120-kolatest-to-k121-kolatest

@hakman
Member

hakman commented Jul 8, 2022

/approve

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: hakman

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 8, 2022
@hakman
Member

hakman commented Jul 8, 2022

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 8, 2022
@k8s-ci-robot k8s-ci-robot merged commit 65fc3e8 into kubernetes:master Jul 8, 2022
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files.
area/channels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA.
kind/office-hours
lgtm "Looks good to me", indicates that a PR is ready to be merged.
size/L Denotes a PR that changes 100-499 lines, ignoring generated files.