Bug 1833256: [vSphere] Fail machine if multiple resource pools found #585
Conversation
@alexander-demichev: This pull request references Bugzilla bug 1833256, which is valid. The bug has been moved to the POST state and has been updated to refer to the pull request using the external bug tracker. 3 validations were run on this bug.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@@ -442,6 +442,11 @@ func clone(s *machineScope) (string, error) {

```go
resourcepool, err := s.GetSession().Finder.ResourcePoolOrDefault(s, resourcepoolPath)
if err != nil {
	var multipleFoundError *find.MultipleFoundError
```
before we return an error, as far as I can see we keep returning an error? How does this make any difference for the machine to be stuck provisioning?
was this meant to return `InvalidMachineConfiguration()`?
you are right, changed it to InvalidMachineConfiguration()
(force-pushed from b2587c4 to 0dba093)
```go
if errors.As(err, &multipleFoundError) {
	return "", machinecontroller.InvalidMachineConfiguration("multiple resource pools found, specify one in config")
}
```
can you describe user stories that make it legit to return the next line as not InvalidMachineConfiguration?
I thought about connection errors, when the controller can't reach vSphere endpoints.
is there a way to discriminate that and include it here?
(force-pushed from 0dba093 to c05bfce)
```go
}

if errors.As(err, &notFoundError) {
	return "", machinecontroller.InvalidMachineConfiguration("resource pool not found, specify valid value")
```
can we unit test these errors now? we can move this to its own function if we need to.
wouldn't this be an issue for https://github.com/openshift/machine-api-operator/pull/585/files#diff-11b6610f82e5d47fdfb8dda44b90dd1eR435-R440 as well?
Creating and deleting resource pools in the mocked environment doesn't look easy. I'll add tests in a follow-up PR.
(force-pushed from 3fa1f70 to 4c661cc)
pkg/controller/vsphere/reconciler.go (Outdated)

@@ -442,6 +442,17 @@ func clone(s *machineScope) (string, error) {

```go
resourcepool, err := s.GetSession().Finder.ResourcePoolOrDefault(s, resourcepoolPath)
if err != nil {
	// TODO: move error checks to provider spec validation
	var multipleFoundError *find.MultipleFoundError
	var notFoundError *find.NotFoundError
```
Very minor nit: I would move each var declaration down to where it's used; grouping the var declaration and its usage makes the code slightly easier to follow:

```go
var multipleFoundError *find.MultipleFoundError
if errors.As(err, &multipleFoundError) {
	return "", machinecontroller.InvalidMachineConfiguration("multiple resource pools found, specify one in config")
}
var notFoundError *find.NotFoundError
if errors.As(err, &notFoundError) {
	return "", machinecontroller.InvalidMachineConfiguration("resource pool not found, specify valid value")
}
```
(force-pushed from 4c661cc to ab2929a)
/lgtm Changes look good. Should the tests PR be under the same BZ?

I don't think so, but I'd like both PRs to get into 4.5 so we can avoid having BZs related to this kind of error.

Cool, in which case make sure you create a BZ for that one as well.

/approve

[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: enxebre. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing

/retest Please review the full test history for this PR and help us cut down flakes.
@alexander-demichev: The following test failed:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. I understand the commands that are listed here.

/retest Please review the full test history for this PR and help us cut down flakes.

@alexander-demichev: All pull requests linked via external trackers have merged: openshift/machine-api-operator#585. Bugzilla bug 1833256 has been moved to the MODIFIED state.
Our current implementation allows users to either specify `resourcePool` in the provider spec or leave it empty and fall back to the default one. When more than one resource pool is found, the lookup fails and the machine is stuck in the `Provisioning` phase. To avoid the machine getting stuck, this PR fails the machine on lookup error. I have issues with creating additional resource pools in the mocked environment, so tests will come later.