Convert to use versioned api #36673
Conversation
// TODO: remove this duplicate
// InternalExtractContainerResourceValue extracts the value of a resource
// in an already known container
func InternalExtractContainerResourceValue(fs *api.ResourceFieldSelector, container *api.Container) (string, error) {
The ExtractContainerResourceValue method shouldn't even exist in this package - really surprised it got cloned into here.
Do you want me to move both functions to another package?
That would be best.
I think they belong in pkg/api/util/resources, to be created in #30152. I'll leave a TODO and comment on that PR.
@@ -974,6 +980,40 @@ func Convert_api_ContainerStatus_To_v1_ContainerStatus(in *api.ContainerStatus,
return autoConvert_api_ContainerStatus_To_v1_ContainerStatus(in, out, s)
}

func autoConvert_v1_ConversionError_To_api_ConversionError(in *ConversionError, out *api.ConversionError, s conversion.Scope) error {
I wouldn't expect this type to be converted (or deep copied) since it's not for external use.
Hmm, not sure how this change slipped in. I guess the conversion generator generates code for all types in pkg/api/v1.
@@ -149,7 +149,7 @@ var _ = framework.KubeDescribe("NodeProblemDetector", func() {
"involvedObject.name": node.Name,
"involvedObject.namespace": v1.NamespaceAll,
"source": source,
}.AsSelector()
}.AsSelector().String()
Not having any place to store the compiled field selector in v1 makes this uglier than it should be. We don't need to change it here, but I hit this when moving ListOptions to meta.k8s.io/v1, and felt that it should be easier / possible to deal with this in v1 without the extra call. Not sure what the best approach is, though.
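For context, a minimal sketch of why the extra .String() shows up: the internal api.ListOptions stores a compiled fields.Selector, while v1.ListOptions only has a plain string field. The import path and selector values below are illustrative assumptions, not the test's exact contents.

package main

import (
    "fmt"

    "k8s.io/kubernetes/pkg/fields"
)

func main() {
    // Build a field selector roughly the way the node-problem-detector test does.
    sel := fields.Set{
        "involvedObject.kind":      "Node",
        "involvedObject.name":      "node-1",
        "involvedObject.namespace": "",
        "source":                   "kernel-monitor",
    }.AsSelector()

    // The internal api.ListOptions can carry the compiled selector directly:
    //   api.ListOptions{FieldSelector: sel}
    // v1.ListOptions carries a string, hence the trailing .String() in the diff above.
    fmt.Println(sel.String())
}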
Looked at all the manual commits, and they look OK (it's ugly to copy some of them, but I'm not against it). Looking through the generated code now.
extensions "k8s.io/kubernetes/pkg/client/clientset_generated/release_1_5/typed/extensions/v1beta1" | ||
) | ||
|
||
func versionedCliensetForDeployment(internalClient internalclientset.Interface) externalclientset.Interface { |
typo
@@ -659,6 +661,24 @@ func TestSimpleStop(t *testing.T) {
}
}

func getPodTemplateSpecHash(template api.PodTemplateSpec) uint32 {
It might just be cleaner to put this in deploymentutil as GetPodTemplateSpecHashV1(...), which keeps this logic together. It's not as if we're avoiding having to import deploymentutil.
(same for the other deployment util methods like this).
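For reference, a hedged sketch of what such a helper could look like if it sat next to the existing one in deploymentutil; the DeepHashObject utility and package placement are assumptions based on the internal-version helper, and the hasher shown is FNV even though the helper at the time still used adler32.

package deploymentutil

import (
    "hash/fnv"

    "k8s.io/kubernetes/pkg/api/v1"
    hashutil "k8s.io/kubernetes/pkg/util/hash"
)

// GetPodTemplateSpecHashV1 mirrors the internal-version hash helper, but takes
// the versioned template: write a deep dump of the struct into the hasher and
// return the 32-bit sum.
func GetPodTemplateSpecHashV1(template v1.PodTemplateSpec) uint32 {
    hasher := fnv.New32a()
    hashutil.DeepHashObject(hasher, template)
    return hasher.Sum32()
}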
@@ -31,16 +32,16 @@ import (
type ListerWatcher interface {
// List should return a list type object; the Items field will be extracted, and the
// ResourceVersion field will be used to start the watch in the right place.
List(options api.ListOptions) (runtime.Object, error)
List(options v1.ListOptions) (runtime.Object, error)
I don't think this is correct. It should stay internal version.
If it stays internal version, then we'll need to convert the ListOptions to v1 before passing it to the client's List/Watch methods. Is using v1 wrong?
Basically all of the discussion boils down to this:
Our internal code model only works if the same object can be represented in all versions with no loss of data. By definition, we try to avoid losing data; when we do have that potential, we create new objects (ReplicaSet vs. ReplicationController) and don't use object versioning.
So v1 is no worse than the internal version, except that right now we're introducing a change from internal to versioned that ripples out into other code before we're ready for it.
I'd rather preserve the internal api.ListOptions for now, until we solve the label selector / field selector string representation in v1 (which causes other code changes), and do the conversion inside the list watcher impl. The v1 here shouldn't be core API v1 - it should be meta.k8s.io/v1, and we don't have that yet.
We can always come back to this particular change.
except that right now we're introducing a change from internal to versioned that ripples out into other code

IIUC, "ripples" refers to all the calls to selector.String()? Keeping the ListerWatcher interface taking api.ListOptions will only reduce a very limited amount of the ripples, because most ripples are not caused by it but by the versioned clientset's List/Watch methods taking v1.ListOptions. Only a few ListFunc/WatchFunc definitions refer to the selectors, like this one.
So if we really want to reduce the ripples, we need to revert #31994. Otherwise there's no point keeping this interface using api.ListOptions; it would just introduce calls to conversion functions in all ListFunc/WatchFunc definitions.
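To make the trade-off concrete, the alternative described above (keep the interface on api.ListOptions and convert inside the ListWatch implementation) would look roughly like the sketch below. The conversion helper, clientset path, and ListWatch field names are assumptions based on the generated code of that era.

package cache

import (
    "k8s.io/kubernetes/pkg/api"
    "k8s.io/kubernetes/pkg/api/v1"
    clientset "k8s.io/kubernetes/pkg/client/clientset_generated/release_1_5"
    "k8s.io/kubernetes/pkg/runtime"
    "k8s.io/kubernetes/pkg/watch"
)

// NewPodListWatchFromClient keeps the internal ListOptions at the interface
// boundary and converts to the versioned type only at the client call site,
// so the ripple stays inside the list watcher implementation.
func NewPodListWatchFromClient(c clientset.Interface, namespace string) *ListWatch {
    toVersioned := func(options api.ListOptions) (v1.ListOptions, error) {
        out := v1.ListOptions{}
        err := v1.Convert_api_ListOptions_To_v1_ListOptions(&options, &out, nil)
        return out, err
    }
    return &ListWatch{
        ListFunc: func(options api.ListOptions) (runtime.Object, error) {
            versioned, err := toVersioned(options)
            if err != nil {
                return nil, err
            }
            return c.Core().Pods(namespace).List(versioned)
        },
        WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
            versioned, err := toVersioned(options)
            if err != nil {
                return nil, err
            }
            return c.Core().Pods(namespace).Watch(versioned)
        },
    }
}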
return v1.ResourceList{}
}

func PodUsageFunc(obj runtime.Object) api.ResourceList {
The resulting code is a lot harder to read. I'm kind of concerned that this change is just moving the complexity we have in conversion around into some other spot. It's possible these usage functions should only work on a single type each and we should register different ones for different internal types. Having two types in the same function seems prone to error and drift, and makes quota much more complex.
What other options do we have for these methods? @derekwaynecarr ?
cc @vishh
I can create a v1 pod evaluator; that's easier to reason about, but it also duplicates much code and will be hard to maintain.
So changing admission to be versioned is out of scope for now (for lots of reasons).
I'm primarily concerned that the function is now much harder to read than it was before. Readability of this code is crucial. The maintenance of the switch is the problem. A better path may just be to do:
func PodUsageFunc(obj runtime.Object) api.ResourceList {
	switch t := obj.(type) {
	case *api.Pod:
		converted := &v1.Pod{}
		if err := v1.Convert_api_Pod_to_v1_Pod(...); err != nil {
			panic("impossible conversion")
		}
		obj = converted
	}
	// rest of method, v1 adapted
	...
}
Eating errors here is terrifying because this is a security / control code path. That would be simplest and ensures that the code is exactly the same. Admission can be slightly slower than it is today.
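Spelled out as compilable code, that suggestion might look roughly like the sketch below; the conversion function name comes from the generated pkg/api/v1 conversions and the body of the accounting logic is stubbed out, so treat this as a sketch of the pattern rather than the final change.

package evaluator

import (
    "fmt"

    "k8s.io/kubernetes/pkg/api"
    "k8s.io/kubernetes/pkg/api/v1"
    "k8s.io/kubernetes/pkg/runtime"
)

// PodUsageFunc normalizes its input to a single representation up front:
// internal pods are converted to v1 once, and everything after the type
// check only ever sees *v1.Pod.
func PodUsageFunc(obj runtime.Object) api.ResourceList {
    if internal, ok := obj.(*api.Pod); ok {
        converted := &v1.Pod{}
        if err := v1.Convert_api_Pod_To_v1_Pod(internal, converted, nil); err != nil {
            // Conversion between equivalent representations should never fail;
            // failing loudly beats silently miscounting quota on this path.
            panic(fmt.Sprintf("impossible conversion: %v", err))
        }
        obj = converted
    }
    pod, ok := obj.(*v1.Pod)
    if !ok {
        return api.ResourceList{}
    }
    // ... rest of the original method, reading only from the v1 pod ...
    _ = pod
    return api.ResourceList{}
}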
Actually we need the conversion the other way around (v1 to api), because all the ResourceList logic is defined on api.ResourceList. So it's the efficiency of the resourcequota controller that gets affected. Is that acceptable?
Catching up here. I refactored the evaluator some in #34554, so I am trying to mentally adjust... I think if I understand Clayton's comment, in admission we would always convert to v1, but on the controller side there would be no conversion, and the core evaluators would then just forever be v1-based.
ok, after looking at the code change, I prefer Clayton's recommendation in pseudo code above.
Do you want to migrate all the ResourceList utility functions to v1? If not, the conversion will be the other way around.
Or the other way around (v1 -> internal). I'm fine with either; my recommendation was mostly based on the fact that admission does update less often (so internal) than the controllers recalculate usage, so avoiding some incremental effort. I'd say whatever is less code and complexity for the rest of the change.
the fact that admission does update less often (so internal) than the controllers recalculate usage,
Yeah, I thought of that, too. Then I thought doing the conversion in the resourcequota controller is no worse than today, so I found my peace.
@@ -115,7 +116,11 @@ func NewResourceQuotaController(options *ResourceQuotaControllerOptions) *Resour
// responsible for enqueue of all resource quotas when doing a full resync (enqueueAll)
oldResourceQuota := old.(*v1.ResourceQuota)
curResourceQuota := cur.(*v1.ResourceQuota)
if quota.Equals(curResourceQuota.Spec.Hard, oldResourceQuota.Spec.Hard) {
internalOld := &api.ResourceQuota{}
v1.Convert_v1_ResourceQuota_To_api_ResourceQuota(oldResourceQuota, internalOld, nil)
Not checking errors on these is scary.
We can duplicate the Equals function. pkg/quota and pkg/controller/deployment/util are the two most tricky packages in this PR.
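If the conversions stay, one low-duplication way to stop eating the errors would be a small helper that reports failures through the usual error handler and falls back to "treat as changed"; a hedged sketch (helper name and placement are assumptions):

package resourcequota

import (
    "k8s.io/kubernetes/pkg/api"
    "k8s.io/kubernetes/pkg/api/v1"
    "k8s.io/kubernetes/pkg/quota"
    utilruntime "k8s.io/kubernetes/pkg/util/runtime"
)

// hardSpecsEqual compares the Hard specs of two versioned quotas by converting
// to the internal type, where quota.Equals is defined. Conversion errors are
// surfaced instead of being dropped, and the quota is treated as changed (and
// therefore re-enqueued) in that case.
func hardSpecsEqual(old, cur *v1.ResourceQuota) bool {
    internalOld := &api.ResourceQuota{}
    internalCur := &api.ResourceQuota{}
    if err := v1.Convert_v1_ResourceQuota_To_api_ResourceQuota(old, internalOld, nil); err != nil {
        utilruntime.HandleError(err)
        return false
    }
    if err := v1.Convert_v1_ResourceQuota_To_api_ResourceQuota(cur, internalCur, nil); err != nil {
        utilruntime.HandleError(err)
        return false
    }
    return quota.Equals(internalCur.Spec.Hard, internalOld.Spec.Hard)
}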
Looking through this code: do we need kubectl as part of this change? What can we do to make deployment and quota easier to reason about? (EDIT: with a few more thoughts.)
If we get rid of internal versions, some objects are more painful to deal with unless we have "not serialized" fields (which makes the objects more complicated). Examples are init containers, and label and field selectors on ListOptions. For RawExtension we solved that just by having two fields, one that is set in preference to the other. Not sure if that's something we should be doing here.
Yes. If you want to tie to particular internal interfaces, vendor and pay the rebase cost. We can't get to the multiple-repo model without causing some internal churn.
We can't succeed in the multi-repo model without everyone doing their bit. I think it's reasonable to ask people to open an issue in the other repos if you're making a change that requires you to go through the kubernetes/kubernetes codebase and fix any issues you find. It's all Kubernetes, even if we shard because of GitHub.
isn't that every single PR?
We're being very literal today ;-) I guess the point is: if you have to go through the repo and look for usages that have been silently broken (i.e. the compiler isn't making them obvious), and you find them, that "must" be notified to the sharded repos. I think if the compiler flags them and it's an easy fix we can let it slide. But if it's a non-trivial change (either in complexity or volume), it's good manners to compile some notes for the external repos, so that they can make the required changes in a time-efficient manner, and posting an issue seems like the most efficient way to communicate that. And yes, bring on the sharded repos with tighter API guarantees so this ceases to be a problem.
A common scenario is that we find a class of bug in-tree, and sweep for and fix it. A PSA email to kubernetes-dev would be a good idea in cases like that, but I don't think spawning issues to all sub-repos is scalable for every bug that requires a sweep.
Yes, exactly what we're discussing here. So a vote for option 3 :-)
@justinsb The internal API has no stability guarantee, has been changed several times, and is "internal", so changing it is not breaking. It really should only be used by the apiserver, which is the state we're trying to get to. It was never supposed to be serialized, since the beginning. This can be traced back to 2014, from #3933. The tags were copied into api/types.go originally in order to make that file easier to diff with the latest version. It may not have been sufficiently documented, but we've tried to remove these tags multiple times. As for the broader issue, tools for searching multiple repos would be a good first step. Are you aware of any?
https://github.com/kubernetes/community/blob/master/contributors/devel/api_changes.md does document that the versioned types should be used exclusively, and that breaking changes should be emailed to kubernetes-dev. That list has far lower traffic than github notifications.
@@ -2313,8 +2313,9 @@ __EOF__
kubectl-with-retry rollout resume deployment nginx "${kube_flags[@]}"
# The resumed deployment can now be rolled back
kubectl rollout undo deployment nginx "${kube_flags[@]}"
# Check that the new replica set (nginx-618515232) has all old revisions stored in an annotation
kubectl get rs nginx-618515232 -o yaml | grep "deployment.kubernetes.io/revision-history: 1,3" |
Is the change to use versioned api going to change hashes in all pre-existing deployments? There was value in this test in that we ensured that the hash wouldn't change between updates. As a matter of fact we need such a test and I created one in #40854.
Is the change to use versioned api going to change hashes in all pre-existing deployments?
Yes. I discussed this with @janetkuo back then; IIRC, the deployment controller shouldn't depend on a stable hash, because any change to the API will alter the hash, like when we added ObjectMeta.OwnerReferences to the API.
Changes to the API (specifically the PodTemplateSpec) should be backwards-compatible, usually by introducing the new field as a pointer. Then, new API additions shouldn't result in new hashes.
That's not what a pointer field gets you. It lets you detect when the field was previously unspecified (nil, instead of trying to detect zero-values of structs) so you can default appropriately. Defaulting a new, nil pointer field can still result in a hash change.
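A tiny, self-contained illustration of that point, using toy types and a toy hash rather than the real API or DeepHashObject:

package main

import (
    "fmt"
    "hash/fnv"
)

// template gained TerminationGracePeriodSeconds as a pointer in a later API
// version; objects written before the field existed decode with it set to nil.
type template struct {
    Image                         string
    TerminationGracePeriodSeconds *int64
}

// hashOf stands in for the pod-template-spec hash: it folds every set field
// into an FNV-1a hash.
func hashOf(t template) uint32 {
    h := fnv.New32a()
    fmt.Fprintf(h, "image=%s", t.Image)
    if t.TerminationGracePeriodSeconds != nil {
        fmt.Fprintf(h, ",grace=%d", *t.TerminationGracePeriodSeconds)
    }
    return h.Sum32()
}

func main() {
    decoded := template{Image: "nginx"} // old object: new pointer field is still nil

    // Defaulting fills the pointer in, and the hashed representation moves,
    // even though the user never changed anything.
    grace := int64(30)
    defaulted := template{Image: "nginx", TerminationGracePeriodSeconds: &grace}

    fmt.Println(hashOf(decoded) == hashOf(defaulted)) // false
}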
Doesn't that depend on the DeepEqual implementation being used? For example, I think this is not the case with semantic.DeepEqual, which is what we use for comparing PodTemplates (though it seems like the hash generation is doing something different).
DeepEqual is orthogonal to defaulting
Why doesn't deployment depend on generation?
What do you mean by generation? The deployment controller compares deployment templates with the templates of every replica set they match, and if no match is found, the deployment template is hashed and a new replica set is created.
Sigh, nevermind. Bad day. How expensive does converting to YAML before hashing sound? It would eliminate hash updates between upgrades.
I'm saying the problem with hashes is a calculation that is inherently unstable and not resilient to changes in the resource. Generation is stable. As we encounter more problems with hashes, we have to ask whether the approach is long-term stable. I wanted to ensure we continue to verify our design path against the challenges we face, so reiterating why hashes are superior to tracking generation is useful.
Yaml isn't any more stable than the versioned objects
Fair point. Generation doesn't map to versions (you can have the same version of a deployment with different generations), and versioning the name would block adoption in the worst case, or create confusion at best, because we move replica sets around when we roll back. So I think there is value in decoupling names from the version, and hashing in theory sounds ideal. That being said, hashing indeed comes with problems, and we should consider whether these problems can be solved before we move forward with a different approach.
Current problems:
* hash collisions
* the pod template API changes, so the deployment template hash changes
The first problem is solved by switching from adler to fnv. The second problem seems solvable by having a layer between the object and the hashing function that strips unused fields from the input to the hashing algorithm.
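A hedged sketch of that layer: copy the template, clear the fields that should not participate in the hash, then feed it to the hasher. Which fields to clear is the actual hard part; the two cleared below, the helper name, and the use of spew for a pointer-following dump are all assumptions for illustration.

package deploymentutil

import (
    "hash/fnv"

    "github.com/davecgh/go-spew/spew"

    "k8s.io/kubernetes/pkg/api/v1"
)

// stableHash hashes a pod template after zeroing fields that newer API
// versions may add or default differently, so that upgrades which merely
// default fields do not change the hash of existing deployments.
func stableHash(template v1.PodTemplateSpec) uint32 {
    // The parameter is a shallow copy, so reassigning its fields does not
    // touch the caller's object.
    template.Annotations = nil                        // example: ignore bookkeeping annotations
    template.Spec.TerminationGracePeriodSeconds = nil // example: ignore a defaulted field

    hasher := fnv.New32a()
    // A deterministic, pointer-following dump of the stripped template,
    // similar in spirit to the DeepHashObject utility used elsewhere in the tree.
    printer := spew.ConfigState{Indent: " ", SortKeys: true, DisableMethods: true, SpewKeys: true}
    printer.Fprintf(hasher, "%#v", template)
    return hasher.Sum32()
}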
With generation you would not get quick adoption of old controllers on rollback unless we did something incredibly clever with label selectors on hashes. I.e.
deployment(gen:1) -> rs-1(label-selector: hash1)
And then if we changed hashes, we would simply add a new label selector to all RCs under management by deployments:
deployment(gen:1) -> rs-1(label-selector: hash1 OR hash2)
But we'd have to do clever things with adoption (rs-1 and rs-3 could have the same hashes) in order to hand off the pods (maybe by having the deployment controller delete the old RS if a newer RS has its hashes, and hope that the RS controller has the correct owner behavior).
@caesarxuchao @davidopp - is there a reason why "InodePressure" constant is defined only in
In general the constants should be duplicated in both internal version and external version.
@dashpole - I'm not sure we can do that, even if it's unused. It would be a breaking API change. @kubernetes/api-reviewers
You mean for client-go consumers?
Yup.
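For reference, the "duplicated in both versions" convention mentioned above usually amounts to something like the sketch below; the file placements follow the existing types.go layout, and whether InodePressure should indeed appear in both files is exactly the open question here.

// pkg/api/types.go (internal version)
const (
    // NodeInodePressure means the node is under pressure on inodes.
    NodeInodePressure NodeConditionType = "InodePressure"
)

// pkg/api/v1/types.go (external version): the same constant repeated, so that
// consumers of the versioned API and client-go never need to import the
// internal package.
const (
    NodeInodePressure NodeConditionType = "InodePressure"
)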
Unit/integration/submit-queue e2e/slow e2e/node e2e tests passed. The PR is based on the code one or two days after the code freeze. I'll rebase after the PR gets reviewed.
Fix #35159
What is converted to use the versioned API
top level:
dependencies:
What remains using the internal API on purpose
N.B. if we move ResourceName and ResourceList out of the API, we probably can avoid the duplicate
TODO
cc @liggitt @deads2k @bgrant0607 @mml @mbohlool @kubernetes/sig-api-machinery