Propose Changes to Cloud Provider KEPs #2628
Conversation
andrewsykim
commented
Sep 4, 2018
- removes any duplicate information in KEP-0002 from KEP-00013
- reduces scope in KEP-00013 to only include proposal for removing in-tree provider code. Renaming the file removed the diff, see last commit to see diff between files.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: If they are not already assigned, you can assign the PR to them by writing. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Force-pushed from b99c0e8 to 7f787c4
providers should look at the [Creating a Custom Cluster from Scratch](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider)
and the [cloud provider interface](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/cloud.go#L31)
which will need to be implemented.
This KEP outlines the architectural changes required to run the Cloud Controller Manager. This is part of a long-running effort
At some point I believe we need to expand the scope. CCM is a mechanism for allowing cloud-specific controllers to be run in a K8s-approved manner. I feel that our goal, though, is to provide a mechanism that allows every cloud provider (and in fact non-cloud providers as well) to create an installable Kubernetes that runs correctly on their configurations. We would like to do that without including any cloud provider code in the main repo. To that end the CCM is necessary but not sufficient, and we need to start tracking the additional pieces.
/kind kep
This PR is over a month old, but I have no objections to it. Where do we stand on it? /lgtm
My original assumption for this approach is actually wrong and I need to update the KEP to better present our current approach. I'll try to get to it this week. Thanks for the reminder Chris!
/lgtm cancel
/cc
…t cloud controller manager
…ving-in-tree-providers.md
Force-pushed from c7bb7e3 to fbfb201
The biggest challenge of this phase is to remove dependencies on kubernetes/kubernetes in all the providers. This is required to avoid circular dependencies within the various Kubernetes repos. All other repos "staged" (`client-go`, `apimachinery`, `api`, etc.) in Kubernetes follow the same pattern. Below is the current list of packages in `kubernetes/kubernetes` that we need to remove. Note that _some_ dependencies may need to be duplicated for the duration of this migration, as other critical components of Kubernetes may share the same code and removing the dependency may not be trivial. In general, we will avoid duplicating code and only use it as a last resort to resolve circular dependencies to `kubernetes/kubernetes`.
| Dependency | Proposed Location | Is duplicated? | Partial or Full? |
Need help filling this out. If you are assigned to any of the dependencies in kubernetes/kubernetes#69585, please comment here on what the new locations should be :) thanks!
cc @kubernetes/sig-cloud-provider @cheftako @dims @d-nishi @frapposelli @hogepodge @islinwb @mcrute @stewart-yu @yastij
Realizing now that the Partial vs Full part is confusing, but this is just indicating whether we need to migrate the entire package for our purposes or if we can just pick out the specific methods/types that we need.
pkg/controller needs more investigation; some APIs need to move from k8s.io/kubernetes/pkg/apis/core, and code from k8s.io/kubernetes/pkg/apis/core/helper. I'll update later.
For pkg/controller, it seems like we already copied ControllerClientBuilder to k8s.io/cloud-provider, and the method AddOrUpdateTaintOnNode can be moved to k8s.io/cloud-provider/nodes. What do you think?
Ah, sorry, I forgot about the dependency on k8s.io/kubernetes/pkg/util/taints.
Seems like taintutils.AddOrUpdateTaint is only used in pkg/controller, so it might be reasonable to just move that method into k8s.io/cloud-provider/nodes as well.
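If that move happens, the helper's contract is small enough to sketch. The following is a simplified, self-contained stand-in for what an AddOrUpdateTaint-style helper in k8s.io/cloud-provider/nodes might look like; the `Taint` struct here is a stripped-down version of `v1.Taint` and the package layout is hypothetical, not the final API:

```go
package main

import "fmt"

// Taint mirrors the shape of v1.Taint, simplified for illustration.
type Taint struct {
	Key, Value, Effect string
}

// AddOrUpdateTaint returns a taint list with t added, or updated in place
// if a taint with the same key and effect already exists. It also reports
// whether the list changed, matching the in-tree helper's contract.
func AddOrUpdateTaint(taints []Taint, t Taint) ([]Taint, bool) {
	for i, existing := range taints {
		if existing.Key == t.Key && existing.Effect == t.Effect {
			if existing.Value == t.Value {
				return taints, false // already present, nothing to do
			}
			out := append([]Taint(nil), taints...)
			out[i] = t // same key+effect, new value: update
			return out, true
		}
	}
	return append(append([]Taint(nil), taints...), t), true
}

func main() {
	taints := []Taint{{Key: "node.cloudprovider.kubernetes.io/uninitialized", Value: "true", Effect: "NoSchedule"}}
	updated, changed := AddOrUpdateTaint(taints, Taint{Key: "node.cloudprovider.kubernetes.io/uninitialized", Value: "false", Effect: "NoSchedule"})
	fmt.Println(changed, updated[0].Value) // prints: true false
}
```

Keeping the helper copy-on-write (returning a fresh slice) avoids surprising callers that hold the original node's taint slice.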
I moved part of the taint ops from k8s.io/kubernetes/pkg/util/taints to k8s.io/apimachinery/pkg/util/taints. See kubernetes/kubernetes#70225.
So the table row would be as below:

| Dependency | Proposed Location | Is duplicated? | Partial or Full? |
|---|---|---|---|
| pkg/controller | k8s.io/cloud-provider/pkg/controller | Yes | Partial |
Is the pkg necessary? Can it just be k8s.io/cloud-provider/controller?
I'm also OK with k8s.io/cloud-provider/controller.
Updated the KEP to better represent the approach we are taking now. Hoping it outlines our intentions & steps clearly! Some of the dependencies we want to move don't have a new home yet, but hoping we can come to consensus for those here :). Please take a look! cc @cheftako @d-nishi @hogepodge @jagosan @thockin
I read this over. I had no meaningful comments, but wanted to say thanks for keeping this moving. It's really important.
Thanks Tim! 💙
#### Phase 1 - Moving Cloud Provider Code to Staging
In Phase 1, all cloud provider code in `kubernetes/kubernetes/pkg/cloudprovider/providers` will be moved to `kubernetes/kubernetes/staging/cloudprovider/providers`. Moving code to a staging directory still allows us to build core Kubernetes components (`kube-controller-manager`, `kubelet`, etc.) without interruption while also signalling to the community that in-tree provider code is slated for removal.
k8s.io/kubernetes/staging/cloud-provider/providers?
thanks
removal of in-tree providers
Force-pushed from fbfb201 to aebf244
In Phase 2, providers will be expected to build the cloud controller manager from their respective provider repos (`kubernetes/cloud-provider-<provider>`). Provider repos will be frequently synced from their respective staging repos by the Kubernetes publishing bot. Development of providers is still done in `kubernetes/kubernetes/staging/cloudprovider/providers`.
#### Phase 3 - Migrating Provider Code to Provider Repos
Important to note here that we are not proposing that k8s.io/cloud-provider-&lt;provider&gt; will be vendored back into k8s.io/kubernetes.
REMINDER: KEPs are moving to k/enhancements on November 30. Please attempt to merge this KEP before then to signal consensus. Any questions regarding this move should be directed to that thread and not asked on GitHub.
Will do my best to update this KEP and have someone review it by the end of this week. No guarantees though, because there are a lot of changes here to be reviewed. If it's not merged by the deadline, feel free to close and I'll re-open in the new repo. Thanks @justaugustus!
KEPs have moved to k/enhancements. Any questions regarding this move should be directed to that thread and not asked on GitHub.
@justaugustus: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.