🐛 Allow SSA after normal resource creation #3346
Conversation
Welcome @filipcirtog!
Hi @filipcirtog. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test
pkg/client/client_rest_resources.go
Outdated
 c.mu.RUnlock()

-	if known {
+	if known && !forceDisableProtoBuf {
This means that we will create a new client for every SSA request. Can we avoid that?
I've noticed the same thing and am currently looking into it.
@alvaroaleman we could also add req.SetHeader("Accept", "application/json") to the Apply request. WDYT?
That will override any user-configured preference and is something that needs maintenance. Upstream is, for example, working on CBOR, which is more efficient than JSON. Can we instead just key the cached clients by both known and forceDisableProtobuf?
Yes, I agree that this is the best solution. Although forceDisableProtoBuf is always true for unstructuredResource, I made the changes there as well for consistency. Please let me know if you have any additional feedback or suggestions to improve this further.
pkg/client/client_rest_resources.go
Outdated
 // unstructuredResourceByType stores unstructured type metadata
-unstructuredResourceByType map[schema.GroupVersionKind]*resourceMeta
+unstructuredResourceByType map[cacheKey]*resourceMeta
 mu sync.RWMutex
From what I can tell, the only reason for having these two was that unstructured had the same problem of never being able to use proto. Would you mind de-duplicating them into a single resourceByType?
That's right. I have addressed this. Thank you!
pkg/client/client_test.go
Outdated
TypeMeta: metav1.TypeMeta{
	Kind:       "Secret",
	APIVersion: "v1",
},
This isn't needed and can be removed
-TypeMeta: metav1.TypeMeta{
-	Kind:       "Secret",
-	APIVersion: "v1",
-},
pkg/client/client_test.go
Outdated
err = cl.Apply(ctx, secretApplyConfiguration, &client.ApplyOptions{FieldManager: "test-manager"})
Expect(err).NotTo(HaveOccurred())

cm, err = clientset.CoreV1().Secrets(ptr.Deref(secretApplyConfiguration.GetNamespace(), "")).Get(ctx, ptr.Deref(secretApplyConfiguration.GetName(), ""), metav1.GetOptions{})
Nit
-cm, err = clientset.CoreV1().Secrets(ptr.Deref(secretApplyConfiguration.GetNamespace(), "")).Get(ctx, ptr.Deref(secretApplyConfiguration.GetName(), ""), metav1.GetOptions{})
+secret, err = clientset.CoreV1().Secrets(ptr.Deref(secretApplyConfiguration.GetNamespace(), "")).Get(ctx, ptr.Deref(secretApplyConfiguration.GetName(), ""), metav1.GetOptions{})
/cherrypick release-0.22
@alvaroaleman: once the present PR merges, I will cherry-pick it on top of release-0.22 in a new PR and assign it to you.
Thanks!
LGTM label has been added. Git tree hash: 7701c3b0875d44c496bc71f0b70dbd1d4837e36d
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: alvaroaleman, filipcirtog. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
@alvaroaleman: new pull request created: #3348
/lgtm
How much of a problem do we think this is? (I don't really see the problem)
@sbueringer It was this comment that led me to that conclusion. However, thanks to Alvaro's feedback, I've updated the implementation to use a cache key that combines the actual GVK (GroupVersionKind) and the forceDisableProtoBuf flag. This means we now support at most two distinct clients per GVK: one with forceDisableProtoBuf enabled and one without. This is an improvement over my initial PR, which created a new client on every forceDisableProtoBuf request.
Got it, thx for the quick response. Let's please update the PR description.
Done. Thank you. Please let me know if there is anything more to change. |
All good, thank you! |
Addressing issue #3344
Issue:
object *client.applyconfigurationRuntimeObject does not implement the protobuf marshalling interface and cannot be encoded to a protobuf message.
Root Cause: a normal Create for a GVK caches a REST client negotiated with protobuf; a subsequent Apply (SSA) for the same GVK reused that cached client, even though apply configurations cannot be encoded as protobuf.
Changes Made: the client cache is now keyed by both the GVK and the forceDisableProtoBuf flag, so each GVK can have up to two cached clients (one protobuf-capable, one JSON-only for SSA), and the separate unstructuredResourceByType map was merged into a single resourceByType map.