
Bug 1838504: [vSphere] Fail machine on invalid provider spec values #593

Merged
merged 1 commit into openshift:master on May 21, 2020

Conversation

alexander-demicev (Contributor):

A follow-up to #585.

This PR adds similar logic for datastores and folders, and increases unit test coverage.

Comment on lines 655 to 687
var multipleFoundError *find.MultipleFoundError
var notFoundError *find.NotFoundError
if errors.As(vsphereError, &multipleFoundError) {
return machinecontroller.InvalidMachineConfiguration(multipleFoundMsg)
}

if errors.As(vsphereError, &notFoundError) {
return machinecontroller.InvalidMachineConfiguration(notFoundMsg)
}
Contributor:

Suggested change (before):

var multipleFoundError *find.MultipleFoundError
var notFoundError *find.NotFoundError
if errors.As(vsphereError, &multipleFoundError) {
	return machinecontroller.InvalidMachineConfiguration(multipleFoundMsg)
}
if errors.As(vsphereError, &notFoundError) {
	return machinecontroller.InvalidMachineConfiguration(notFoundMsg)
}

Suggested change (after, declaring each error variable next to its check):

var multipleFoundError *find.MultipleFoundError
if errors.As(vsphereError, &multipleFoundError) {
	return machinecontroller.InvalidMachineConfiguration(multipleFoundMsg)
}

var notFoundError *find.NotFoundError
if errors.As(vsphereError, &notFoundError) {
	return machinecontroller.InvalidMachineConfiguration(notFoundMsg)
}

@@ -95,130 +96,234 @@ func TestClone(t *testing.T) {
},
}

// Set this value to 2 because it's the default number on machine in mocked environment
Contributor:

Suggested change, replacing:

// Set this value to 2 because it's the default number on machine in mocked environment

with:

// Set this value to 2 because it's the default number of machines in a mocked environment

testCase string
cloneVM bool
expectedError error
setupFailureCondition func()
Contributor:

It would be better to return an error from this func and then call t.Fatal in the subtest; at the moment you are calling t.Fatal on the whole test's t, not on the subtest's t.

Comment on lines 321 to 325
vmsCount++
if vmsCount != len(vms) {
t.Errorf("Unexpected number of machines. Expected: %v, got: %v", vmsCount, len(vms))
}
Contributor:

This looks a bit odd, should there be a loop or something here?

Contributor (author):

What do you mean by loop? That part of the test checks that an instance was created after calling clone(), by comparing the number of instances before and after the call.

Contributor:

So the loop here is the `for _, tc := range testCases`. I wonder if there is a way to check that the new VM was created without keeping a count. This vmsCount is shared across multiple subtests, so if one of them fails for some reason, that could have a knock-on effect on the other subtests, and we would potentially have difficulty tracking down the problem. Ideally each subtest should share as little state as possible (nothing, if possible), so there are no dependencies between them.

Does clone return the newly created VM? Could we check that it exists in the VMList instead?

@@ -677,6 +675,20 @@ func setProviderStatus(taskRef string, condition vspherev1.VSphereMachineProvide
return nil
}

func handleVsphereError(multipleFoundMsg, notFoundMsg string, defaultError, vsphereError error) error {
Contributor:

I think elsewhere this is capitalised as VSphere, do we have a pattern or is it mixed everywhere?

Contributor (author):

Right, it should be VSphere.


@alexander-demicev (author):

@JoelSpeed I updated the unit test. Instead of counting VMs, it now checks that the returned value is not empty, which should happen only if the clone task was started successfully.
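The updated assertion can be sketched as follows. The `clone` signature and the task-reference value below are illustrative (the real clone talks to the vSphere simulator), but the shape of the check is what the author describes: each subtest looks only at clone's own return value, with no counter shared across subtests.

```go
package main

import (
	"errors"
	"fmt"
)

// clone is a stand-in for the reconciler's clone(): on success it returns a
// non-empty task reference, on failure an empty one plus an error.
func clone(shouldFail bool) (string, error) {
	if shouldFail {
		return "", errors.New("invalid machine configuration")
	}
	return "task-0", nil
}

// checkClone mirrors the per-subtest assertion: when no error is expected,
// the returned task reference must be non-empty; when an error is expected,
// one must actually be returned.
func checkClone(taskRef string, err error, expectError bool) error {
	if expectError {
		if err == nil {
			return errors.New("expected an error, got none")
		}
		return nil
	}
	if err != nil {
		return fmt.Errorf("unexpected error: %v", err)
	}
	if taskRef == "" {
		return errors.New("expected a non-empty task reference")
	}
	return nil
}

func main() {
	taskRef, err := clone(false)
	fmt.Println(checkClone(taskRef, err, false)) // prints: <nil>
}
```

Because the check depends only on the values returned by the subtest's own clone call, a failure in one subtest can no longer skew the expectations of the next.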

@alexander-demicev alexander-demicev changed the title [vSphere] Fail machine on invalid provider spec values Bug 1838504 - [vSphere] Fail machine on invalid provider spec values May 21, 2020
@JoelSpeed (Contributor) left a comment:

/approve

Thanks

@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: JoelSpeed

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 21, 2020
expectedError: errors.New("multiple resource pools found, specify one in config"),
setupFailureCondition: func() error {
// Create resource pools
defaultResourcePool, err := session.Finder.ResourcePool(context.Background(), "/DC0/host/DC0_C0/Resources")
Member:

Is this magic value "/DC0/host/DC0_C0/Resources" completely arbitrary? Maybe add a comment explaining where it comes from?

Contributor (author):

That's not a magic value, just the default value that the simulator sets. I had to debug this a bit to find it.

@enxebre

enxebre commented May 21, 2020

awesome @alexander-demichev, thanks!
/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label May 21, 2020
@openshift-bot

/retest

Please review the full test history for this PR and help us cut down flakes.

@enxebre

enxebre commented May 21, 2020

/retitle Bug 1838504: [vSphere] Fail machine on invalid provider spec values

@enxebre

enxebre commented May 21, 2020

/bugzilla refresh

@openshift-ci-robot

@enxebre: No Bugzilla bug is referenced in the title of this pull request.
To reference a bug, add 'Bug XXX:' to the title of this pull request and request another bug refresh with /bugzilla refresh.

In response to this:

/bugzilla refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot changed the title Bug 1838504 - [vSphere] Fail machine on invalid provider spec values Bug 1838504: [vSphere] Fail machine on invalid provider spec values May 21, 2020
@openshift-ci-robot openshift-ci-robot added the bugzilla/severity-medium Referenced Bugzilla bug's severity is medium for the branch this PR is targeting. label May 21, 2020
@openshift-ci-robot

@alexander-demichev: This pull request references Bugzilla bug 1838504, which is invalid:

  • expected the bug to target the "4.5.0" release, but it targets "---" instead

Comment /bugzilla refresh to re-evaluate validity if changes to the Bugzilla bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

Bug 1838504: [vSphere] Fail machine on invalid provider spec values


@openshift-ci-robot openshift-ci-robot added the bugzilla/invalid-bug Indicates that a referenced Bugzilla bug is invalid for the branch this PR is targeting. label May 21, 2020
@enxebre

enxebre commented May 21, 2020

/bugzilla refresh

@openshift-ci-robot

@enxebre: This pull request references Bugzilla bug 1838504, which is invalid:

  • expected the bug to target the "4.5.0" release, but it targets "---" instead

Comment /bugzilla refresh to re-evaluate validity if changes to the Bugzilla bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

/bugzilla refresh


@enxebre

enxebre commented May 21, 2020

/bugzilla refres

@enxebre

enxebre commented May 21, 2020

/bugzilla refresh

@openshift-ci-robot openshift-ci-robot added the bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. label May 21, 2020
@openshift-ci-robot

@enxebre: This pull request references Bugzilla bug 1838504, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.5.0) matches configured target release for branch (4.5.0)
  • bug is in the state NEW, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

In response to this:

/bugzilla refresh


@openshift-ci-robot openshift-ci-robot removed the bugzilla/invalid-bug Indicates that a referenced Bugzilla bug is invalid for the branch this PR is targeting. label May 21, 2020
@openshift-merge-robot openshift-merge-robot merged commit 6dd4c3b into openshift:master May 21, 2020
@openshift-ci-robot

@alexander-demichev: All pull requests linked via external trackers have merged: openshift/machine-api-operator#593. Bugzilla bug 1838504 has been moved to the MODIFIED state.

In response to this:

Bug 1838504: [vSphere] Fail machine on invalid provider spec values


@alexander-demicev alexander-demicev deleted the errors branch May 21, 2020 19:46
Labels
approved: Indicates a PR has been approved by an approver from all required OWNERS files.
bugzilla/severity-medium: Referenced Bugzilla bug's severity is medium for the branch this PR is targeting.
bugzilla/valid-bug: Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting.
lgtm: Indicates that a PR is ready to be merged.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

None yet

6 participants