MULTIARCH-4096: PowerVS: Generate the cluster assets #8116
Conversation
@hamzy: This pull request references MULTIARCH-4096 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the sub-task to target the "4.16.0" version, but no target version was set. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Force-pushed 3ffb4ce to ad5f3ac
Force-pushed 5f81298 to 014741f
/retest-required
Force-pushed ba9e604 to b3b23a0
if installConfig.Config.Platform.PowerVS.PowerVSResourceGroup != "" {
	// @TODO is it a name or id?
	resourceGroupName = installConfig.Config.Platform.PowerVS.PowerVSResourceGroup
}
The TODO means future work/investigation. I am trying to get this PR in so I don't have to juggle code for a month.
default:
	return nil, fmt.Errorf("GenerateClusterAssets: Region is empty")
I'm guessing this is here just for completeness and you're already validating the region during the installconfig validation stage?
Yes, for completeness and to stop someone else from asking why there is no default case.
ServiceInstance: &service,
Zone:            &installConfig.Config.Platform.PowerVS.Zone,
ResourceGroup: &capibm.IBMPowerVSResourceReference{
	Name: &resourceGroupName,
Suggested change:
-	Name: &resourceGroupName,
+	Name: &installConfig.Config.Platform.PowerVS.PowerVSResourceGroup,
This is part of the future TODO
Force-pushed b3b23a0 to 106ad0c
/lgtm
Force-pushed 106ad0c to 2479ed6
Force-pushed 0e224a7 to 6479343
/lgtm
/remove-lgtm
Force-pushed 6479343 to 1fc7115
/lgtm
Force-pushed 1fc7115 to b013ab1
This adds a controller to run for the PowerVS provider.
Generate the cluster assets for the PowerVS CAPI provider.
Fix machine manifest generation and add a bootstrap machine.
Force-pushed b013ab1 to 94f54c2
When the CAPI infrastructure is ready: 1) create the DNS entries, and 2) add security group rules for the necessary ports.
The PowerVS CAPI controller creates three networks for each of the endpoints. Therefore, account for this change.
Force-pushed 94f54c2 to fe29e84
// Avoid empty manifest elements
logrus.Debugf("GenerateClusterAssets: len(VPCSubnets) = %d", len(installConfig.Config.Platform.PowerVS.VPCSubnets))
if len(installConfig.Config.Platform.PowerVS.VPCSubnets) > 0 {
	powerVSCluster.Spec.VPCSubnets = make([]capibm.Subnet, 0)
Nit: can preallocate the array here
Suggested change:
-	powerVSCluster.Spec.VPCSubnets = make([]capibm.Subnet, 0)
+	powerVSCluster.Spec.VPCSubnets = make([]capibm.Subnet, 0, len(installConfig.Config.Platform.PowerVS.VPCSubnets))
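As a standalone sketch of this nit (plain strings stand in for capibm.Subnet, and the helper name is hypothetical): preallocating the slice's capacity lets append fill it without intermediate reallocations.

```go
package main

import "fmt"

// buildSubnets copies ids into a slice preallocated to the right
// capacity, mirroring the review nit: length 0 but capacity len(ids),
// so append never has to grow the backing array.
func buildSubnets(ids []string) []string {
	subnets := make([]string, 0, len(ids))
	for _, id := range ids {
		subnets = append(subnets, id)
	}
	return subnets
}

func main() {
	s := buildSubnets([]string{"subnet-a", "subnet-b", "subnet-c"})
	fmt.Println(len(s), cap(s)) // prints "3 3"
}
```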
for i := range installConfig.Config.Platform.PowerVS.VPCSubnets {
	// We cannot get the string in the loop and add it as it appears duplicated in the generated file
	powerVSCluster.Spec.VPCSubnets = append(powerVSCluster.Spec.VPCSubnets,
		capibm.Subnet{
			ID: &installConfig.Config.Platform.PowerVS.VPCSubnets[i],
		})
}
Nit: not that it matters, but you can get the string if you do:
Suggested change:
	for _, subnet := range installConfig.Config.Platform.PowerVS.VPCSubnets {
		subnet := subnet
		powerVSCluster.Spec.VPCSubnets = append(powerVSCluster.Spec.VPCSubnets,
			capibm.Subnet{
				ID: &subnet,
			})
	}
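For context on the two loop styles discussed here, a standalone sketch (plain strings stand in for capibm.Subnet; the helper name is hypothetical). Before Go 1.22, taking the address of the loop variable directly made every element point at the same variable; the `subnet := subnet` shadow copy, like indexing into the original slice, gives each appended element its own address.

```go
package main

import "fmt"

// pointersTo returns one pointer per element using the shadow-copy
// idiom from the review suggestion; each iteration takes the address
// of a fresh copy, so the pointers stay distinct even on Go < 1.22.
func pointersTo(ids []string) []*string {
	out := make([]*string, 0, len(ids))
	for _, id := range ids {
		id := id // shadow copy: &id is unique per iteration
		out = append(out, &id)
	}
	return out
}

func main() {
	for _, p := range pointersTo([]string{"net-a", "net-b", "net-c"}) {
		fmt.Println(*p)
	}
}
```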
logrus.Debugf("in.InfraID = %s", in.InfraID)
idx = strings.LastIndex(in.InfraID, "-")
logrus.Debugf("idx = %d", idx)
substr = in.InfraID[idx:]
logrus.Debugf("substr = %s", substr)
infraID = strings.ReplaceAll(in.InfraID, substr, "")
logrus.Debugf("infraID = %s", infraID)
Suggested change:
	logrus.Debugf("in.InfraID = %s", in.InfraID)
	infraID := in.InfraID[:strings.LastIndex(in.InfraID, "-")]
	logrus.Debugf("infraID = %s", infraID)
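A standalone sketch of the suffix-trimming one-liner (the helper name is hypothetical). One caveat worth hedging: strings.LastIndex returns -1 when no "-" is present, so the bare slice expression would panic on such input; the guard below covers that case.

```go
package main

import (
	"fmt"
	"strings"
)

// trimLastSegment strips the final "-"-delimited segment, mirroring
// the reviewer's one-liner, but guards against inputs with no dash,
// where strings.LastIndex returns -1 and s[:-1] would panic.
func trimLastSegment(s string) string {
	idx := strings.LastIndex(s, "-")
	if idx < 0 {
		return s
	}
	return s[:idx]
}

func main() {
	fmt.Println(trimLastSegment("mycluster-abc12-x7k9p")) // prints "mycluster-abc12"
}
```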
switch {
case refServiceInstance.ID != nil:
	logrus.Debugf("PostProvision: CreateSSHKey: si id = %s, key = %s",
		*refServiceInstance.ID,
		in.InstallConfig.Config.SSHKey)

	backoff := wait.Backoff{
		Duration: 15 * time.Second,
		Factor:   1.1,
		Cap:      leftInContext(ctx),
		Steps:    math.MaxInt32}
	err = wait.ExponentialBackoffWithContext(ctx, backoff, func(context.Context) (bool, error) {
		err2 := client.CreateSSHKey(ctx,
			*refServiceInstance.ID,
			*powerVSMachine.Status.Zone,
			sshKeyName,
			in.InstallConfig.Config.SSHKey)
		if err2 == nil {
			return true, nil
		}
		return false, err2
	})
	if err != nil {
		return fmt.Errorf("failed to add SSH key for the workers(ID): %w", err)
	}
case refServiceInstance.Name != nil:
	logrus.Debugf("PostProvision: CreateSSHKey: si name = %s, key = %s",
		*refServiceInstance.Name,
		in.InstallConfig.Config.SSHKey)

	vpc, err := client.GetVPCByName(ctx, *refServiceInstance.Name)
	if err != nil {
		return fmt.Errorf("failed to find id for VPC name %s: %w",
			*refServiceInstance.Name,
			err)
	}

	backoff := wait.Backoff{
		Duration: 15 * time.Second,
		Factor:   1.1,
		Cap:      leftInContext(ctx),
		Steps:    math.MaxInt32}
	err = wait.ExponentialBackoffWithContext(ctx, backoff, func(context.Context) (bool, error) {
		err2 := client.CreateSSHKey(ctx,
			*vpc.ID,
			*powerVSMachine.Status.Zone,
			sshKeyName,
			in.InstallConfig.Config.SSHKey)
		if err2 == nil {
			return true, nil
		}
		return false, err2
	})
	if err != nil {
		return fmt.Errorf("failed to add SSH key for the workers(Name): %w", err)
	}
default:
	return fmt.Errorf("could not handle powerVSMachine.Spec.ServiceInstance")
}
Nit:
Suggested change:
	var instanceID *string
	var fieldType string
	switch {
	case refServiceInstance.ID != nil:
		logrus.Debugf("PostProvision: CreateSSHKey: si id = %s, key = %s",
			*refServiceInstance.ID,
			in.InstallConfig.Config.SSHKey)
		instanceID = refServiceInstance.ID
		fieldType = "ID"
	case refServiceInstance.Name != nil:
		logrus.Debugf("PostProvision: CreateSSHKey: si name = %s, key = %s",
			*refServiceInstance.Name,
			in.InstallConfig.Config.SSHKey)
		vpc, err := client.GetVPCByName(ctx, *refServiceInstance.Name)
		if err != nil {
			return fmt.Errorf("failed to find id for VPC name %s: %w",
				*refServiceInstance.Name,
				err)
		}
		instanceID = vpc.ID
		fieldType = "Name"
	default:
		return fmt.Errorf("could not handle powerVSMachine.Spec.ServiceInstance")
	}
	backoff := wait.Backoff{
		Duration: 15 * time.Second,
		Factor:   1.1,
		Cap:      leftInContext(ctx),
		Steps:    math.MaxInt32,
	}
	err = wait.ExponentialBackoffWithContext(ctx, backoff, func(context.Context) (bool, error) {
		err2 := client.CreateSSHKey(ctx, *instanceID, *powerVSMachine.Status.Zone, sshKeyName, in.InstallConfig.Config.SSHKey)
		if err2 == nil {
			return true, nil
		}
		return false, err2
	})
	if err != nil {
		return fmt.Errorf("failed to add SSH key for the workers(%s): %w", fieldType, err)
	}
@r4f4 can we address these in a follow-up patch? We have other work based on this that we do not want to break.
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: r4f4. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@hamzy: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Merged 0645657 into openshift:master
Generate the cluster assets for the PowerVS CAPI provider.
This includes no code written in the last two weeks.