openapi: reference shared parameters #118204
Conversation
This looks great, some overall questions:
Yeah, ideally I think that should be done much earlier in the process if possible. Since this looks at all paths and marshals the parameters to compare them, I'm concerned this is going to be super expensive to run on every single aggregation.
I'm pretty sure Sean's client is going to fail, unfortunately, and I suspect there might be others that would fail too.
It's valid OpenAPI. If they fail, they should fix it. Nothing should depend on our concrete shape of the spec.
I looked into the builder. Doing that in the builder is much trickier: the builder might differ between an aggregated apiserver and kube-apiserver, and we would have to know the set of shared parameters very early, maybe statically via generation. It's certainly possible to push it earlier into generation, but it's a lot more complicated. I'm not sure about the value vs. effort.
I don't disagree, and it shouldn't stop us from doing the right thing, but we need to consider it.
Also, if we support shared parameters from aggregation (today we don't; we just drop those and leave dangling references), we need unification, i.e. something similar to what we do for the definitions today. An aggregated apiserver can have its own ObjectMeta schema, and the code figures out that it needs a separate definition for that special ObjectMeta. That will add another cost to the merging.
@Jefftree Do we have anything useful to benchmark the aggregation loop? I think I remember you mentioned something like this at some point?
Right, we don't exactly want to break clients using a GA feature, so we'll have to consider carefully how we want to move forward.
I had some basic code to run our OpenAPI tests in benchmark mode. Unfortunately it had pretty high variance because of the setup overhead and all the other stuff going on in the background. I do really want to come up with a good way of benchmarking, but at the moment pprof is still the most reliable way. @sttts out of curiosity, how did you perform the benchmarks for this PR?
Have installed a few hundred CRDs and curl'ed |
Yeah, I was looking at it too and it's clear that there's A LOT of parameters being repeated. |
ack, for the purposes of this PR, running
/triage accepted
pushed.
Rebased to test client generators and compare this PR against master. No difference in output with simple parameter expansion in #118204.
In kubernetes/kube-openapi#415, I exclude the … I am worried that this PR has been green: it seems the apply code lacks test coverage.
/retest
Force-pushed from 4fc4f97 to e7be841.
/lgtm
LGTM label has been added. Git tree hash: e83dde7a96b5e4565f04052e737708e66e936246
This reduces the v2 spec size by 55%.
Depends on kubernetes/kube-openapi#411.
The OpenAPI diff: https://gist.github.com/sttts/97df5c47d2a68084cf9f65d09f9f59f6