Use dynamic client for applying channels manifest rather than calling kubectl #13753
Conversation
/kind office-hours

Just FYI, the current approach (since #13731) uses kubectl replace --force (see the execKubectl call in the hunk below).
```go
}
kind := object.Kind()
if kind == "" {
	return fmt.Errorf("failed to find kind in object")
```
Nit: we might want to log more object info here (maybe %v of object), although I suspect this "won't happen" (tm)
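Something like this, perhaps (a sketch only; it assumes kubemanifest.Object renders usefully with %v):

```go
// Include the object itself in the error so a missing kind is easier to debug.
kind := object.Kind()
if kind == "" {
	return fmt.Errorf("failed to find kind in object %v", object)
}
```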
```go
)

// Apply calls kubectl apply to apply the manifest.
// We will likely in future change this to create things directly (or more likely embed this logic into kubectl itself)
func Apply(data []byte) error {
```
I wonder if we should try to make this pluggable during the transition? cf. what we did here: https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern/blob/master/pkg/patterns/declarative/pkg/applier/direct.go

I do like this applier better than reusing kubectl, as that is giving us trouble, e.g. https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern/pull/225/files#
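For illustration, a minimal sketch of what that pluggability could look like; the interface name and both implementations are hypothetical, loosely modeled on kubebuilder-declarative-pattern's applier package:

```go
package channels // hypothetical placement

import "context"

// Applier applies a multi-document manifest to the cluster. Swapping
// implementations would let us fall back to kubectl during the transition.
type Applier interface {
	Apply(ctx context.Context, manifest []byte) error
}

// ExecKubectlApplier shells out to kubectl (the old behavior).
type ExecKubectlApplier struct{}

func (a *ExecKubectlApplier) Apply(ctx context.Context, manifest []byte) error {
	// ... write the manifest to a temp file and exec kubectl ...
	return nil
}

// DirectApplier uses the dynamic client (the approach in this PR).
type DirectApplier struct{ /* dynamic client, RESTMapper, ... */ }

func (a *DirectApplier) Apply(ctx context.Context, manifest []byte) error {
	// ... parse objects and apply them via the dynamic client ...
	return nil
}
```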
```go
_, err = execKubectl("replace", "-f", localManifestFile, "--force", "--field-manager=kops")
return err

for gk := range objectsByKind {
	if err := p.applyObjectsOfKind(ctx, gk, objectsByKind[gk]); err != nil {
```
There might be interdependencies here (the classic is a namespace before objects in that namespace); there are two possible strategies: we could pre-sort, or we could iterate a few times. In practice, because we're going to retry this anyway, I don't think it matters, just one to note for the future!
That's a good point. The manifests are applied in order though, and e.g. CRD and Namespace objects tend to be placed at the top. But agreed, this is one for the future.
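If pre-sorting ever becomes necessary, a sketch of the idea (the priority table and helper are hypothetical, not part of this PR):

```go
package channels // hypothetical placement

import (
	"sort"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// applyOrder ranks kinds that other objects commonly depend on so they are
// applied first; kinds not listed default to 0.
var applyOrder = map[string]int{
	"Namespace":                -2,
	"CustomResourceDefinition": -1,
}

// sortGroupKinds orders GroupKinds for apply; stable so ties keep their order.
func sortGroupKinds(gks []schema.GroupKind) {
	sort.SliceStable(gks, func(i, j int) bool {
		return applyOrder[gks[i].Kind] < applyOrder[gks[j].Kind]
	})
}
```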
```go
return err

for gk := range objectsByKind {
	if err := p.applyObjectsOfKind(ctx, gk, objectsByKind[gk]); err != nil {
		return fmt.Errorf("failed to apply objects of kind %s: %w", gk, err)
```
We may want to keep on going (i.e. accumulate errors) in case of dependencies
Agreed. This I can easily fix now.
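For reference, a sketch of the accumulate-errors variant of the loop above, using errors.Join (Go 1.20+); assumes the standard errors and fmt imports:

```go
// Apply every kind before giving up, collecting failures as we go;
// errors.Join returns nil when errs is empty.
var errs []error
for gk := range objectsByKind {
	if err := p.applyObjectsOfKind(ctx, gk, objectsByKind[gk]); err != nil {
		errs = append(errs, fmt.Errorf("failed to apply objects of kind %s: %w", gk, err))
	}
}
return errors.Join(errs...)
```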
```go
func (p *Applier) applyObjectsOfKind(ctx context.Context, gk schema.GroupKind, expectedObjects []*kubemanifest.Object) error {
	klog.V(2).Infof("applying objects of kind: %v", gk)

	restMapping, err := p.RESTMapper.RESTMapping(gk)
```
Nit: I think we should apply with the correct version, i.e. group by GVK.
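A sketch of the GVK grouping; the GroupVersionKind helper on kubemanifest.Object is an assumption here (the snippets above only show Kind()), but RESTMapper.RESTMapping does take the preferred versions as variadic arguments:

```go
// Group by full GroupVersionKind so each object is applied with the
// version it was authored in.
objectsByGVK := map[schema.GroupVersionKind][]*kubemanifest.Object{}
for _, obj := range objects {
	gvk := obj.GroupVersionKind() // assumed helper
	objectsByGVK[gvk] = append(objectsByGVK[gvk], obj)
}

for gvk := range objectsByGVK {
	// Resolve the mapping with the version included.
	restMapping, err := p.RESTMapper.RESTMapping(gvk.GroupKind(), gvk.Version)
	// ... apply objectsByGVK[gvk] using restMapping ...
	_, _ = restMapping, err
}
```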
```go
output, err := cmd.CombinedOutput()
```

```go
baseResource := p.Client.Resource(gvr)

actualObjects, err := baseResource.List(ctx, v1.ListOptions{})
```
If we have to read the existing objects, we might want to scope by namespace, just for memory etc. in big clusters. We may not have to do this at all with server-side apply; TBD maybe.
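For what it's worth, the dynamic client supports a namespace-scoped read directly; a sketch (ns is hypothetical, v1 is the metav1 alias used in the snippet above):

```go
// Listing only within one namespace keeps memory bounded on big clusters.
actualObjects, err := p.Client.Resource(gvr).Namespace(ns).List(ctx, v1.ListOptions{})
if err != nil {
	return fmt.Errorf("failed to list objects in namespace %q: %w", ns, err)
}
```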
```go
key := namespace + "/" + name

var resource dynamic.ResourceInterface
if namespace != "" {
```
Nit: if we pass in the restMapping, we can know whether this object is namespaced (and raise an appropriate error/warning, or default the namespace, whatever we want to do).
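A sketch of that check using the RESTMapping's scope (meta is k8s.io/apimachinery/pkg/api/meta; the error/warning handling is just one option):

```go
// Decide namespaced vs cluster-scoped from the RESTMapping rather than
// from whether the manifest happened to set a namespace.
var resource dynamic.ResourceInterface
if restMapping.Scope.Name() == meta.RESTScopeNameNamespace {
	if namespace == "" {
		return fmt.Errorf("namespace not set for namespaced object %s", key)
	}
	resource = baseResource.Namespace(namespace)
} else {
	if namespace != "" {
		klog.Warningf("ignoring namespace %q on cluster-scoped object %s", namespace, name)
	}
	resource = baseResource
}
```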
```go
obj := expectedObjects.ToUnstructured()

if actual, found := actualMap[key]; found {
```
I suspect we should be using server-side-apply here. It's a little easier in some ways (it's a Patch, no need to worry about whether or not the object exists), although we've seen some of the field manager challenges during the transition.
I have been thinking of doing a regular patch here, but a strategic merge patch would be problematic for the same reason kubectl apply is. With a regular patch, we'd have to infer what to overwrite vs. keep by looking at the managed fields too.
This is awesome and I would love to see us pursue this approach!
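For reference, a sketch of the server-side-apply variant with the dynamic client: a single Patch call whether or not the object exists. Force and the "kops" field manager are assumptions, mirroring the --field-manager=kops flag on the kubectl path; types is k8s.io/apimachinery/pkg/types, v1 the metav1 alias, json the standard encoding/json:

```go
// Server-side apply: send the full desired object as an apply patch.
data, err := json.Marshal(obj)
if err != nil {
	return fmt.Errorf("failed to marshal object: %w", err)
}
force := true
_, err = resource.Patch(ctx, obj.GetName(), types.ApplyPatchType, data, v1.PatchOptions{
	FieldManager: "kops",
	Force:        &force, // take ownership of conflicting fields
})
return err
```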
So I think this is great, and we should get this into 1.25. I'm happy to basically merge this as-is and iterate on it, or alternatively, if you want to incorporate some of the suggestions first (if you agree!), we can do it that way.
Force-pushed from 75fac12 to 389d7c1
/test pull-kops-e2e-aws-upgrade-k120-kolatest-to-k121-kolatest
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: hakman. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
/lgtm
The aim of this PR is to remove channels' dependency on the kubectl binary. This makes the k8s version a bit more consistent, and makes it easier to run client-side.

The approach is more or less a copy/paste of the pruner; there are refactoring opportunities there.

@justinsb appreciate your eyes on this one. There are probably lots of things to think about when using the dynamic client like this.