Bug 1838504: [vSphere] Fail machine on invalid provider spec values #593
Conversation
pkg/controller/vsphere/reconciler.go (outdated)
```go
var multipleFoundError *find.MultipleFoundError
var notFoundError *find.NotFoundError
if errors.As(vsphereError, &multipleFoundError) {
	return machinecontroller.InvalidMachineConfiguration(multipleFoundMsg)
}

if errors.As(vsphereError, &notFoundError) {
	return machinecontroller.InvalidMachineConfiguration(notFoundMsg)
}
```
Suggested change (declare each error variable next to the check that uses it):

```go
var multipleFoundError *find.MultipleFoundError
if errors.As(vsphereError, &multipleFoundError) {
	return machinecontroller.InvalidMachineConfiguration(multipleFoundMsg)
}

var notFoundError *find.NotFoundError
if errors.As(vsphereError, &notFoundError) {
	return machinecontroller.InvalidMachineConfiguration(notFoundMsg)
}
```
```
@@ -95,130 +96,234 @@ func TestClone(t *testing.T) {
		},
	}

	// Set this value to 2 because it's the default number on machine in mocked environment
```
Suggested change:

```go
// Set this value to 2 because it's the default number of machines in a mocked environment
```
```go
testCase              string
cloneVM               bool
expectedError         error
setupFailureCondition func()
```
It would be better to return an error from this func and then `t.Fatal`; at the moment you are calling `t.Fatal` on the whole test `t`, not on the subtest `t`.
```go
vmsCount++
if vmsCount != len(vms) {
	t.Errorf("Unexpected number of machines. Expected: %v, got: %v", vmsCount, len(vms))
}
```
This looks a bit odd, should there be a loop or something here?
What do you mean by loop? That part of the test checks that an instance was created after calling `clone()` by comparing the number of instances before and after the call.
So the loop here is the `for _, tc := range testCases`. I wonder if there is a way to check that the new VM was created without keeping a count specifically? This `vmsCount` is shared across multiple subtests, so if one of them fails for some reason, that could have a knock-on effect on the other subtests, and we would potentially have difficulty tracking down the problem. Ideally each subtest should share as little as possible, or nothing if possible, so there are no dependencies between them.

Does `clone` return the newly created VM? Could we check that it exists in the VM list instead?
pkg/controller/vsphere/reconciler.go (outdated)

```
@@ -677,6 +675,20 @@ func setProviderStatus(taskRef string, condition vspherev1.VSphereMachineProvide
	return nil
}

func handleVsphereError(multipleFoundMsg, notFoundMsg string, defaultError, vsphereError error) error {
```
I think elsewhere this is capitalised as `VSphere`; do we have a pattern or is it mixed everywhere?
Right, it should be `VSphere`.
@JoelSpeed I updated the unit test. Instead of counting VMs, it now checks that the returned value is not empty, which should only happen if the clone task was started successfully.
/approve
Thanks
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: JoelSpeed

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
```go
expectedError: errors.New("multiple resource pools found, specify one in config"),
setupFailureCondition: func() error {
	// Create resource pools
	defaultResourcePool, err := session.Finder.ResourcePool(context.Background(), "/DC0/host/DC0_C0/Resources")
```
Is this magic value `"/DC0/host/DC0_C0/Resources"` completely arbitrary? Maybe add a comment explaining where it comes from?
That's not a magic value, just the default path that the 'simulator' sets up. I had to debug this a bit to find it.
awesome @alexander-demichev, thanks!
/retest

Please review the full test history for this PR and help us cut down flakes.
/retitle Bug 1838504: [vSphere] Fail machine on invalid provider spec values
/bugzilla refresh
@enxebre: No Bugzilla bug is referenced in the title of this pull request. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@alexander-demichev: This pull request references Bugzilla bug 1838504, which is invalid:
Comment In response to this:
/bugzilla refresh
@enxebre: This pull request references Bugzilla bug 1838504, which is invalid:
Comment In response to this:
/bugzilla refres
/bugzilla refresh
@enxebre: This pull request references Bugzilla bug 1838504, which is valid. The bug has been moved to the POST state, and has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug.
In response to this:
@alexander-demichev: All pull requests linked via external trackers have merged: openshift/machine-api-operator#593. Bugzilla bug 1838504 has been moved to the MODIFIED state. In response to this:
A follow-up to #585.

This PR adds similar logic for datastores and folders, plus increased unit test coverage.