✨ Add clusterctl pivot support for v1alpha2 #1280

Merged
merged 4 commits into kubernetes-sigs:master from vincepri:clusterctl-support-v1a2 on Aug 26, 2019

Conversation

@vincepri (Member) commented Aug 19, 2019

What this PR does / why we need it:
This PR adds support for v1alpha2 to clusterctl. In particular, the pivot phase now supports the following:

  • Migrate secrets for each cluster using a string comparison, checking whether the secret name is prefixed with the cluster name (see the sketch after this list).
  • Migrate all referenced infrastructure and bootstrap objects.
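
As a rough sketch of the prefix comparison from the first bullet (the helper name and the trailing "-" are illustrative assumptions, not the PR's actual code):

package pivot

import (
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// secretsForCluster is a hypothetical helper: it keeps only the secrets whose
// names start with the cluster name, e.g. "my-cluster-kubeconfig".
func secretsForCluster(clusterName string, secrets []corev1.Secret) []corev1.Secret {
	var matched []corev1.Secret
	for _, s := range secrets {
		if strings.HasPrefix(s.Name, clusterName+"-") {
			matched = append(matched, s)
		}
	}
	return matched
}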

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #1187
Fixes #1112

@vincepri changed the title from "✨ Add clusterctl support for v1alpha2" to "WIP: ✨ Add clusterctl support for v1alpha2" Aug 19, 2019
@k8s-ci-robot (Contributor) commented Aug 19, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: vincepri

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot requested review from detiber and justinsb Aug 19, 2019
@k8s-ci-robot added the size/L label Aug 19, 2019
@vincepri (Member, Author) commented Aug 20, 2019

@detiber @ncdc Looking for early feedback here before I proceed

@vincepri changed the title from "WIP: ✨ Add clusterctl support for v1alpha2" to "WIP: ✨ Add clusterctl pivot support for v1alpha2" Aug 20, 2019
@ncdc (Contributor) commented Aug 20, 2019

Starting my review now

cmd/clusterctl/phases/pivot.go (5 review threads, outdated, resolved)
@vincepri force-pushed the vincepri:clusterctl-support-v1a2 branch 4 times, most recently from 56fa138 to 94f238d Aug 20, 2019
@k8s-ci-robot added size/XL and removed size/L labels Aug 22, 2019
@vincepri force-pushed the vincepri:clusterctl-support-v1a2 branch from 2c63654 to ff198f6 Aug 22, 2019
@k8s-ci-robot added needs-rebase and removed size/XL labels Aug 22, 2019
@vincepri changed the title from "WIP: ✨ Add clusterctl pivot support for v1alpha2" to "✨ Add clusterctl pivot support for v1alpha2" Aug 22, 2019
@vincepri force-pushed the vincepri:clusterctl-support-v1a2 branch from ff198f6 to 6e3925a Aug 22, 2019
@vincepri force-pushed the vincepri:clusterctl-support-v1a2 branch from 6e3925a to 8353a31 Aug 22, 2019
@vincepri force-pushed the vincepri:clusterctl-support-v1a2 branch from 8353a31 to 59c6743 Aug 22, 2019
@vincepri force-pushed the vincepri:clusterctl-support-v1a2 branch from 59c6743 to fed433f Aug 22, 2019
klog.V(4).Infof("Retrieving list of MachineSets for MachineDeployment %s/%s", md.Namespace, md.Name)
machineSets, err := from.GetMachineSetsForMachineDeployment(md)
if err != nil {
	return err
}

if err := moveMachineSets(from, to, machineSets); err != nil {

@detiber (Member) commented Aug 22, 2019

When we discussed this previously, we said we could probably simplify the logic quite a bit by making sure the controllers were not running in both the source and target clusters during the pivot; then we wouldn't have to worry about resource ordering and could just migrate wholesale.

@vincepri (Author, Member) commented Aug 22, 2019

We definitely can; it seems like a good follow-up improvement task. The trick there is to scale down the target cluster deployments and then scale them back up at the end.
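
For illustration, that scale-down/scale-up trick could look roughly like this with client-go (hypothetical helper; signatures assume a client-go version whose typed clients take a context):

package pivot

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setDeploymentReplicas scales a controller Deployment, e.g. to 0 before
// moving objects and back up once the pivot completes.
func setDeploymentReplicas(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	d.Spec.Replicas = &replicas
	_, err = cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
	return err
}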

@detiber (Member) commented Aug 22, 2019

Or filter the provider components so the Deployments are not applied prior to migrating the resources, and only apply them afterwards.

@ncdc (Contributor) commented Aug 23, 2019

We also talked about having pivot (or future clusteradm) add either an annotation or a label to each object that's being pivoted in the source cluster, then adding code to all our controllers to no-op and return ASAP if they see that label/annotation. That way, we don't have to worry about the controllers racing.
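
A sketch of how that no-op check might look at the top of a reconciler; the annotation key, reconciler type, and wiring are illustrative assumptions, not something this PR adds:

package controllers

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha2"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// pausedAnnotation is a hypothetical pivot marker, not defined by this PR.
const pausedAnnotation = "cluster.x-k8s.io/paused"

type MachineReconciler struct {
	Client client.Client
}

func (r *MachineReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	m := &clusterv1.Machine{}
	if err := r.Client.Get(context.Background(), req.NamespacedName, m); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// No-op and return ASAP when the object carries the pivot marker, so the
	// source and target controllers cannot race on it.
	if _, ok := m.GetAnnotations()[pausedAnnotation]; ok {
		return ctrl.Result{}, nil
	}
	// ... normal reconciliation would continue here ...
	return ctrl.Result{}, nil
}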

@vincepri force-pushed the vincepri:clusterctl-support-v1a2 branch 2 times, most recently from 82fe348 to f930935 Aug 22, 2019
cmd/clusterctl/phases/pivot.go (review thread, resolved)
util/restmapper/cached.go (review thread, outdated, resolved)
util/yaml/yaml.go (review thread, outdated, resolved)
@vincepri force-pushed the vincepri:clusterctl-support-v1a2 branch 4 times, most recently from 0d405dc to 2ef9601 Aug 22, 2019

cmd/clusterctl/phases/pivot.go (review thread, resolved)
@@ -309,7 +380,7 @@ func moveMachines(from sourceClient, to targetClient, machines []*clusterv1.Mach
 	klog.V(4).Infof("Preparing to move Machines: %v", machineNames)

 	for _, m := range machines {
-		if m.DeletionTimestamp != nil {
+		if m.DeletionTimestamp != nil && !m.DeletionTimestamp.IsZero() {

@ncdc (Contributor) commented Aug 23, 2019

Just the IsZero() call is sufficient (it handles nil correctly).
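
For reference, metav1.Time implements IsZero on a pointer receiver, roughly as follows in k8s.io/apimachinery, which is why the explicit nil check is redundant:

func (t *Time) IsZero() bool {
	if t == nil {
		return true
	}
	return t.Time.IsZero()
}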

@detiber (Member) commented Aug 23, 2019

Since the controllers are supposedly stopped (or no longer processing), we can likely ignore the deletion timestamp everywhere here and pivot the resources anyway; otherwise we could end up in a scenario where we orphan resources.

@ncdc (Contributor) commented Aug 23, 2019

It would be nice to assume that during a pivot, nothing is in the process of being deleted. That's probably an OK assumption for now. But if/when we do clusteradm and move pivot server-side, and if we allow it to happen at any time, we should probably skip over deleted objects.

@vincepri (Author, Member) commented Aug 23, 2019

Actually, this is necessary because we get all Machines regardless of whether they belong to a cluster, so we need to filter out the objects that are in the process of being deleted.


targetObject := u.DeepCopy()

@ncdc (Contributor) commented Aug 23, 2019

Is there any reason to check if it has a non-zero deletion timestamp?

@detiber (Member) commented Aug 23, 2019

Probably not; if the object still has a finalizer and has yet to be pivoted after the controller is stopped (or is no longer processing because of the annotation), then we should likely pivot the resource anyway.

@@ -178,29 +185,42 @@ func (c *client) GetKubeconfigFromSecret(namespace, clusterName string) (string,
 	return string(data), nil
 }

-func (c *client) ScaleStatefulSet(ns string, name string, scale int32) error {
+func (c *client) ScaleDeployment(ns string, name string, scale int32) error {

@detiber (Member) commented Aug 23, 2019

Should this name be updated to indicate that it is waiting for the scaling to complete?

@vincepri (Author, Member) commented Aug 23, 2019

We can rename it, although most of the other methods wait as well and don't really communicate that to clients.
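
For illustration, a combined scale-and-wait helper in that spirit might look like this (hypothetical names; interval and timeout are arbitrary, and wait.PollImmediate from apimachinery does the blocking):

package pivot

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// scaleDeploymentAndWait applies the new replica count, then blocks until the
// Deployment reports that many ready replicas (or the timeout expires).
func scaleDeploymentAndWait(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	d.Spec.Replicas = &replicas
	if _, err := cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		cur, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return cur.Status.ReadyReplicas == replicas, nil
	})
}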

return nil
}

func (c *client) GetUnstructuredObject(u *unstructured.Unstructured) error {

@detiber (Member) commented Aug 23, 2019

Would it make sense to just expose the client and call these methods directly, and only have methods here that provide some extra utility on top of the controller-runtime client?

@vincepri (Author, Member) commented Aug 23, 2019

Maybe in a future iteration? With the current abstraction, it'll make testing much harder. This whole client has to go away, IMO, and be replaced with just a controller-runtime one.
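
For comparison, a bare controller-runtime client already handles this generically, since an unstructured object carries its own GroupVersionKind; a minimal sketch (not the PR's code):

package pivot

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getUnstructured fetches any object by the name/namespace and GVK already
// set on u, using the generic controller-runtime client directly.
func getUnstructured(ctx context.Context, c client.Client, u *unstructured.Unstructured) error {
	key := client.ObjectKey{Namespace: u.GetNamespace(), Name: u.GetName()}
	return c.Get(ctx, key, u)
}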

"k8s.io/klog"
clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha2"
"sigs.k8s.io/cluster-api/cmd/clusterctl/clusterdeployer/clusterclient"
)

func ApplyMachines(client clusterclient.Client, namespace string, machines []*clusterv1.Machine) error {
func ApplyMachines(client clusterclient.Client, namespace string, machines []*clusterv1.Machine, extra ...*unstructured.Unstructured) error {

@detiber (Member) commented Aug 23, 2019

I'm wondering if it would also help to break this out into two separate phases now: ApplyControlPlaneMachines and ApplyWorkerMachines, where they filter the incoming YAML appropriately... ApplyControlPlaneMachines should also perform the application serially for now.

@vincepri (Author, Member) commented Aug 23, 2019

Probably, yeah. Let's open an issue about it?
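
If that split happens, one hypothetical shape of the filtering step (the label key mirrors the usual control-plane convention; helper and label names are illustrative):

package phases

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha2"
)

// controlPlaneLabel is an illustrative control-plane marker label.
const controlPlaneLabel = "cluster.x-k8s.io/control-plane"

// splitMachines partitions machines so ApplyControlPlaneMachines can apply
// the first slice serially before ApplyWorkerMachines handles the rest.
func splitMachines(machines []*clusterv1.Machine) (controlPlane, workers []*clusterv1.Machine) {
	for _, m := range machines {
		if _, ok := m.Labels[controlPlaneLabel]; ok {
			controlPlane = append(controlPlane, m)
		} else {
			workers = append(workers, m)
		}
	}
	return controlPlane, workers
}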


vincepri added 3 commits Aug 19, 2019 (all Signed-off-by: Vince Prignano <vincepri@vmware.com>)
@vincepri force-pushed the vincepri:clusterctl-support-v1a2 branch from 2ef9601 to c4d235f Aug 23, 2019
vincepri added 1 commit Aug 23, 2019 (Signed-off-by: Vince Prignano <vincepri@vmware.com>)
@ncdc (Contributor) commented Aug 26, 2019

@vincepri anything left to do on this one?

@vincepri (Member, Author) commented Aug 26, 2019

I think it should be good to go; creation and pivot are fully supported now through the normal clusterctl create command and the pivot alpha phase.

For deletion and the other alpha phases, I might follow up with further PRs, maybe after the release.

@ncdc (Contributor) commented Aug 26, 2019

/lgtm

@ncdc (Contributor) commented Aug 26, 2019

/retest

@vincepri (Member, Author) commented Aug 26, 2019

/test pull-cluster-api-test

@k8s-ci-robot merged commit 15b58ee into kubernetes-sigs:master Aug 26, 2019
11 of 13 checks passed:
  • Header rules: No header rules processed
  • Pages changed: 19 new files uploaded
  • Mixed content: No mixed content detected
  • Redirect rules: 3 redirect rules processed
  • cla/linuxfoundation: vincepri authorized
  • deploy/netlify: Deploy preview ready!
  • pull-cluster-api-build: Job succeeded.
  • pull-cluster-api-integration: Job succeeded.
  • pull-cluster-api-make: Job succeeded.
  • pull-cluster-api-test: Job succeeded.
  • pull-cluster-api-vendor-in-sync: Job succeeded.
  • pull-cluster-api-verify: Job succeeded.
  • tide: In merge pool.
@joonas referenced this pull request Aug 30, 2019 (6 of 6 tasks complete)