Uniform Use of Conditions across Workloads API Surface #51594
Comments
I think there is a lot of value in signaling progress and failures via the API. We want to be able to inform users when they have waited long enough for an upgrade. We also want to expose failures such as when a Pod fails to be created, in order to provide richer programmatic access to errors than just a "you have waited long enough". Both kinds of conditions enable higher-level orchestrators to do things like automatic rollbacks w/o the need to support automatic rollbacks in the API. It's not easy to do anything orchestration-wise today by using events, and I don't know if that will ever be the case.
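For concreteness, here is a minimal sketch (not from this thread) of how a higher-level orchestrator could consume a Deployment's `Progressing` condition to decide on a rollback. It uses client-go; the kubeconfig path, namespace, and Deployment name are placeholders, and the call signatures assume a recent client-go release.

```go
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// progressFailed reports whether the Deployment's Progressing condition says
// the progress deadline was exceeded, i.e. "you have waited long enough".
func progressFailed(d *appsv1.Deployment) (bool, string) {
	for _, c := range d.Status.Conditions {
		if c.Type == appsv1.DeploymentProgressing &&
			c.Status == corev1.ConditionFalse &&
			c.Reason == "ProgressDeadlineExceeded" {
			return true, c.Message
		}
	}
	return false, ""
}

func main() {
	// Placeholder kubeconfig location, namespace, and Deployment name.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	d, err := client.AppsV1().Deployments("default").Get(context.TODO(), "my-app", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if failed, msg := progressFailed(d); failed {
		// An orchestrator could trigger an automatic rollback here without
		// the API itself having to support rollbacks.
		fmt.Println("deployment stalled:", msg)
	}
}
```

The Reason and Message fields are what make this richer than a bare "you have waited long enough"; without them an orchestrator would have to infer failure from replica counts and timestamps, or from Events.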
/sub
That might be a question for sig architecture, actually.
Please add the other consumers of conditions for non-core APIs, i.e. the OpenShift ones referenced in the original PR.
Note that various UIs use conditions heavily. ReplicationControllers are also v1 and use conditions.
Yes, let me find the UI-level consumption of conditions and link them in too.
Pods have them as well.
Seems very disadvantageous.
The conclusion of sig architecture is that
What we should do from here is:
@kow3ns are there already open issues for the last 3 points you suggested?
@antoineco I was planning on using this issue.
ref #7856
/close
FEATURE REQUEST:
tldr: We should either remove Conditions from the workloads API surface or invest in them.
The workloads API does not use Conditions uniformly across the surface.
If they are valuable to users, and if their current use is a best practice, we should use them uniformly across the API surface. If they are not valuable, or if we should really be using Status fields for the workloads API, we should deprecate them (preferably prior to v1).
To achieve consistency across the surface we could take one of three approaches:
1. Invest in Conditions - If they have value, we should use them more uniformly.
2. Move Conditions to Fields - Aside from reason and message, the information contained in Deployment and ReplicaSet Conditions can be represented as an enumeration (this will have the same issue as Phase) and a boolean value respectively. We will no longer have the ability to communicate messages and reasons via Status, but we could consider providing these asynchronously via Events (a sketch of this trade-off follows the list).
3. Remove Conditions - If we think (2) is a good idea, we should also consider removing Conditions entirely. Without a reason or a message, the condition field is a summary that is mostly extrapolated from other fields of the object's Status.
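To make option (2) concrete, the sketch below contrasts the existing DeploymentCondition shape (current k8s.io/api/apps/v1 types) with a hypothetical field-based replacement; the DeploymentStatusFields type, its field names, and the enumeration values are invented for illustration and are not a proposal from this issue.

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Today: the generic condition shape used by Deployments. Reason and Message
// carry the programmatic and human-readable detail (values here are examples).
var _ = appsv1.DeploymentCondition{
	Type:               appsv1.DeploymentProgressing,
	Status:             corev1.ConditionFalse,
	Reason:             "ProgressDeadlineExceeded",
	Message:            "ReplicaSet \"my-app-12345\" has timed out progressing.",
	LastUpdateTime:     metav1.Now(),
	LastTransitionTime: metav1.Now(),
}

// Hypothetical option (2): fold the same signals into plain Status fields.
// Reason and Message have no home here and would move to Events.
type ProgressState string

const (
	ProgressStateProgressing ProgressState = "Progressing"
	ProgressStateComplete    ProgressState = "Complete"
	ProgressStateFailed      ProgressState = "Failed" // same pitfalls as Phase
)

type DeploymentStatusFields struct {
	Progressing    ProgressState // replaces the Progressing condition
	ReplicaFailure bool          // replaces the ReplicaFailure condition
}

func main() {}
```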
The investment to maintain what is implemented for (1) is non-trivial, and the investment to ship it as a feature that spans the workloads API surface is significant when we account for maintenance cost throughout the lifetime of the API. The detailed data collected in #50798 seem to demonstrate that the use of Conditions is far from universal in our API objects and that most of the fields of Conditions, when implemented, are not used.
(2) and (3) have a one-time cost to modify the Deployment and ReplicaSet APIs, and they reduce maintenance overhead across the workloads API, but if we want to converge the storage and processing for ReplicaSet and ReplicationController we will have to carry the deprecation for quite a long time. Also, we should consider that, though they are few, the uses of Deployment Conditions might add legitimate value and warrant adoption in other controllers.
API
@kubernetes/sig-api-machinery-feature-requests
Given #7856 and community #606, we still lack a formal direction on when to use Status fields vs Conditions in the general case. We do seem to agree that we should not use them to build state machines and that they should be orthogonal to Phase. Given the options above, or any others that we should consider, what is consistent with best practices for the k8s API surface?
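As a point of reference for "orthogonal to Phase": Pods already expose a Phase and Conditions side by side. A minimal sketch (Pod contents made up for illustration) showing how Conditions read as independent observations rather than as a state machine:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// describe prints the coarse Phase next to the independent Conditions.
func describe(p *corev1.Pod) {
	// Phase is a single, lossy summary of where the Pod is in its lifecycle.
	fmt.Println("phase:", p.Status.Phase)

	// Conditions are orthogonal observations; each is True, False, or Unknown
	// and carries its own Reason/Message rather than driving a state machine.
	for _, c := range p.Status.Conditions {
		fmt.Printf("condition %s=%s reason=%q\n", c.Type, c.Status, c.Reason)
	}
}

func main() {
	// Made-up Pod status for illustration only.
	p := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodRunning,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodScheduled, Status: corev1.ConditionTrue},
				{Type: corev1.PodReady, Status: corev1.ConditionFalse, Reason: "ContainersNotReady"},
			},
		},
	}
	describe(p)
}
```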
Apps
@kubernetes/sig-apps-feature-requests
Are Deployment Conditions useful? Is the work required to ship and maintain (1) worth it, or are (2) or (3) sufficient?