Deprecate cloudstack and ovirt controller projects #68199

Merged
merged 1 commit into kubernetes:master on Sep 9, 2018

Conversation

@dims
Member

dims commented Sep 3, 2018

Change-Id: Icca9142940269ad1cd28f1f3491684a1bc626c55

What this PR does / why we need it:
Do we have folks invested in these providers working on their external controllers? Is there a future for these providers? If not, can we deprecate and eventually remove them?

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:
cc @ngtuna @sebgoa @svanharmelen (for cloudstack)
cc @simon3z

Release note:

Deprecate cloudstack and ovirt controllers
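
For illustration, deprecating an in-tree provider is typically just a warning logged when the provider is constructed. A minimal sketch, assuming the glog-based logging and the pkg/cloudprovider registry used in k/k at the time; the actual commit is not reproduced in this thread, so the package details, constructor name, and message below are illustrative, not the literal diff:

```go
// Illustrative sketch only, not the literal contents of this PR's commit.
package cloudstack

import (
	"io"

	"github.com/golang/glog"
	"k8s.io/kubernetes/pkg/cloudprovider"
)

const providerName = "cloudstack"

func init() {
	cloudprovider.RegisterCloudProvider(providerName, func(config io.Reader) (cloudprovider.Interface, error) {
		// Deprecation notice: warn operators that the in-tree provider
		// is deprecated and slated for removal in a future release.
		glog.Warningf("The in-tree %s cloud provider is deprecated and will be removed in a future release", providerName)
		return newCSCloud(config) // existing provider constructor (name assumed for this sketch)
	})
}
```
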
@dims

Member

dims commented Sep 3, 2018

/sig cloud-provider

@sebgoa

Member

sebgoa commented Sep 4, 2018

I will ping the cloudstack developer list.

In general though, a lack of visible activity does not mean something is not being used.

I do have one question: all the cloud providers were supposed to be moved out of the main repo. What happened to that effort?

@dims

Member

dims commented Sep 4, 2018

@sebgoa this is part of that effort; we need folks to pick up the work of making these external.

FYI, the first one to be removed from k/k will be the openstack provider; a WIP is in #67782 and will be merged when v1.13 opens up.
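
For context, "making these external" means the provider code moves into its own repository and is compiled into a standalone cloud-controller-manager binary instead of into k/k. A rough sketch of such an out-of-tree entry point, modeled loosely on what external providers like cloud-provider-openstack do; the import path is a placeholder, and the helper from k8s.io/kubernetes/cmd/cloud-controller-manager/app should be treated as an assumption:

```go
// Hypothetical out-of-tree cloud-controller-manager entry point; the
// provider import path below is a placeholder, not a real repository.
package main

import (
	"math/rand"
	"os"
	"time"

	// Blank import so the provider package's init() registers itself
	// via cloudprovider.RegisterCloudProvider("cloudstack", ...).
	_ "example.com/cloud-provider-cloudstack/pkg/cloudstack"

	"k8s.io/kubernetes/cmd/cloud-controller-manager/app"
)

func main() {
	rand.Seed(time.Now().UnixNano())

	// Build the standard cloud-controller-manager command; the registered
	// provider is selected at runtime with --cloud-provider=cloudstack.
	command := app.NewCloudControllerManagerCommand()
	if err := command.Execute(); err != nil {
		os.Exit(1)
	}
}
```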

@sebgoa

Member

sebgoa commented Sep 4, 2018

So where does the cloud provider code go once it is removed?

@dims

Member

dims commented Sep 4, 2018

Short answer:
https://github.com/kubernetes?utf8=%E2%9C%93&q=cloud-provider&type=&language=

Longer answer:
Please ask whoever is going to be doing this to join the #sig-cloud-provider Slack channel, review the KEPs, and get active in the effort:
https://github.com/kubernetes/community/tree/master/keps/sig-cloud-provider

@sebgoa

Member

sebgoa commented Sep 4, 2018

ok thanks, the info is being propagated

@sandrobonazzola

sandrobonazzola commented Sep 4, 2018

On the oVirt side I'm propagating the info; please hold on.

@rhtyd

rhtyd commented Sep 7, 2018

Hi @dims, some CloudStack users are using the k8s CloudStack provider for the external LB service (https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer) and to add/remove LB rules for the network of a (container) cluster. We may want to keep that in for such users, and also in case we want to add support for more features in the future. We also have an ongoing k8s integration plugin which may need to make further use of it in the future: https://github.com/shapeblue/ccs
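
For context on what that external LB usage relies on: a Service of type LoadBalancer is reconciled by the service controller through the LoadBalancer hooks that each in-tree provider (including CloudStack) implements. An approximate sketch of that interface as it stood in k8s.io/kubernetes/pkg/cloudprovider around this time; treat the exact method set and signatures as assumptions and consult the package for the authoritative version:

```go
// Approximate shape of the cloud provider LoadBalancer hooks circa v1.12.
package cloudprovider

import (
	"context"

	v1 "k8s.io/api/core/v1"
)

// LoadBalancer is what the service controller calls to create, update,
// and delete provider load balancers for Services of type LoadBalancer.
type LoadBalancer interface {
	// GetLoadBalancer returns the status of the LB for the given Service, if it exists.
	GetLoadBalancer(ctx context.Context, clusterName string, service *v1.Service) (status *v1.LoadBalancerStatus, exists bool, err error)
	// EnsureLoadBalancer creates or updates the LB and its rules for the Service's nodes.
	EnsureLoadBalancer(ctx context.Context, clusterName string, service *v1.Service, nodes []*v1.Node) (*v1.LoadBalancerStatus, error)
	// UpdateLoadBalancer updates the set of hosts behind an existing LB.
	UpdateLoadBalancer(ctx context.Context, clusterName string, service *v1.Service, nodes []*v1.Node) error
	// EnsureLoadBalancerDeleted removes the LB if it exists.
	EnsureLoadBalancerDeleted(ctx context.Context, clusterName string, service *v1.Service) error
}
```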

@dims

Member

dims commented Sep 7, 2018

@rhtyd the mandate is to clean the k/k repository of all cloud providers. So unless there is an owner to move the code out into external repositories, the code will be deprecated and then removed. Please tell folks that they need to show up and help move things out so they can still be used. This applies to all cloud providers!

@rhtyd

rhtyd commented Sep 7, 2018

Thanks @dims. @ustcweizhou can you take a lead on this? /cc other ACS contributors - @wido @DaanHoogland @mike-tutkowski @khos2ow @swill @rafaelweingartner @PaulAngus @marcaurele

@rafaelweingartner

rafaelweingartner commented Sep 7, 2018

@rhtyd it seems that we will need a new repository at the Apache org to host this kubernetes project. We will need to ask infra for that, right?

@rhtyd

rhtyd commented Sep 7, 2018

@rafaelweingartner I think it's best to keep the provider under k8s, like https://github.com/kubernetes/cloud-provider-openstack. @dims can projects host/maintain providers outside of the k8s GitHub organisation, and would that have any impact wrt releases and integration?

@dims

Member

dims commented Sep 7, 2018

@rhtyd Totally, the repos can be anywhere! That's how we are going about it. Not all providers will move into the kubernetes org; they should live where the folks invested in them congregate and take care of them. So +1 to doing the cloudstack one in apache.org.

@rafaelweingartner

rafaelweingartner commented Sep 7, 2018

I think I misunderstood then. I thought that this move of code to specific repositories was a shift of the "burden" of managing such drivers/plugins onto the cloud platforms that want to integrate with Kubernetes.

Anyway, I think we (the ACS community) can work together to maintain and develop this integration.

@rhtyd do you think it is better to maintain the code under the Kubernetes org? Or maybe a shift to the Apache org, where we (PMCs and committers) will have more direct control over the code?

At first sight, it seems that if we keep the code under the Kubernetes org, integration with the Kubernetes release life-cycle will be smoother.

@andrewsykim

Member

andrewsykim commented Sep 7, 2018

We as a community are trying to uphold certain technical standards for cloud providers in the kubernetes (or kubernetes-sigs) org. You can see them here: https://github.com/kubernetes/community/blob/master/keps/sig-cloud-provider/providers/0004-cloud-provider-template.md#prerequisites. For in-tree providers we have been making some exceptions for a smoother transition; however, if those requirements can't be met in the near future, then, as dims mentioned, I would strongly recommend just creating the new repository in your own org, as providers without clear ownership will be deprecated going forward.

Hope that helps! :)

@dims

Member

dims commented Sep 7, 2018

Also please note that this PR is just about deprecation, not actual removal of code. Deprecation policy is here:
https://kubernetes.io/docs/reference/using-api/deprecation-policy/

@dims dims changed the title from [WIP] Deprecate cloudstack and ovirt controller projects to Deprecate cloudstack and ovirt controller projects Sep 7, 2018

@dims

Member

dims commented Sep 7, 2018

/milestone v1.12

@k8s-ci-robot k8s-ci-robot added this to the v1.12 milestone Sep 7, 2018

@dims

Member

dims commented Sep 7, 2018

@andrewsykim since everyone that needed to be notified has been notified and discussions have begun, can we please move this forward for 1.12?

@andrewsykim

Member

andrewsykim commented Sep 7, 2018

Sounds good to me. Given there are no clear owners for either provider (i.e. no one in the OWNERS file) and this is only a deprecation warning, I think it's safe to move forward on this. Thanks @dims!

/lgtm

@k8s-ci-robot

Contributor

k8s-ci-robot commented Sep 7, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andrewsykim, dims

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@dims

Member

dims commented Sep 8, 2018

/priority critical-urgent

@dims

Member

dims commented Sep 8, 2018

/test pull-kubernetes-e2e-kops-aws

@k8s-merge-robot

Contributor

k8s-merge-robot commented Sep 9, 2018

/test all [submit-queue is verifying that this PR is safe to merge]

@k8s-merge-robot

Contributor

k8s-merge-robot commented Sep 9, 2018

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

@k8s-merge-robot k8s-merge-robot merged commit 8d1127c into kubernetes:master Sep 9, 2018

17 of 18 checks passed

Submit Queue Required Github CI test is not green: pull-kubernetes-kubemark-e2e-gce-big
cla/linuxfoundation dims authorized
pull-kubernetes-bazel-build Job succeeded.
pull-kubernetes-bazel-test Job succeeded.
pull-kubernetes-cross Skipped
pull-kubernetes-e2e-gce Job succeeded.
pull-kubernetes-e2e-gce-100-performance Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu Job succeeded.
pull-kubernetes-e2e-gke Skipped
pull-kubernetes-e2e-kops-aws Job succeeded.
pull-kubernetes-e2e-kubeadm-gce Skipped
pull-kubernetes-integration Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big Job succeeded.
pull-kubernetes-local-e2e Skipped
pull-kubernetes-local-e2e-containerized Skipped
pull-kubernetes-node-e2e Job succeeded.
pull-kubernetes-typecheck Job succeeded.
pull-kubernetes-verify Job succeeded.

@sebgoa

Member

sebgoa commented Sep 9, 2018

@dims I am a bit surprised by this merge. You pinged us 6 days ago, and we are actively trying to figure out the best path forward and identify who relies on this bit of code.

And then "boom", it's merged; that is not very engaging of the community. If you wanted us to engage in sig-cloud-provider, this was definitely not the way to go about it.

@dims

Member

dims commented Sep 9, 2018

@sebgoa Apologies. A bit more context: if we announce it now for 1.12, the actual code removal will be at least 3+ releases away. Hence the rush to get it in.
