Bug 1817853: Reconcile network addresses according to VM status #53
Conversation
MOTIVATION
This fix addresses network address reconciliation problems that led to mishandled status and phase reporting and a missing machine-to-node linkage, causing the machine to appear in the wrong state. The main issue is network reconciliation, and a major refactoring was made to better address it; see [1] below.

RESULT
While the VM is not up it has no IP addresses, and this is reported as nil addresses. If the VM is up but has no addresses, an error is reported, which triggers a future reconciliation. A change in the node object, which happens for example when a VM goes down, triggers a reconciliation that updates the list of addresses and the state accordingly. When the VM boots, the reverse happens: the machine starts with no IP addresses until the node object registers a change because kubelet starts responding. That triggers a reconciliation that detects the addresses correctly; if it does not, reconciliation keeps firing until it does.

MODIFICATION
[1] Structural changes: instead of piling all changes into one or two methods, every change to the status or spec now has its own function. The high-level breakdown of the handling is now:
- create or delete the VM; no-op on the VM for update
- reconcile provider ID
- reconcile network addresses
- reconcile annotations
- reconcile providerStatus
- update machine
- update machine/status

Signed-off-by: Roy Golan <rgolan@redhat.com>
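The address-reconciliation behaviour described above can be sketched in Go. This is a minimal illustration, not the provider's actual code: the `VM`, `VMStatus`, and `reconcileAddresses` names are hypothetical stand-ins for the real machine-actuator types, and the real implementation works against the oVirt API and Kubernetes node objects rather than plain structs.

```go
package main

import (
	"errors"
	"fmt"
)

// VMStatus is a simplified stand-in for the VM state reported by oVirt.
type VMStatus string

const (
	VMStatusUp   VMStatus = "up"
	VMStatusDown VMStatus = "down"
)

// VM is a hypothetical, simplified view of a virtual machine.
type VM struct {
	Status    VMStatus
	Addresses []string
}

// errAddressesNotReady models the error case from the description: the VM
// is up but kubelet has not reported addresses yet. Returning an error
// from a reconcile loop causes the request to be requeued, so a future
// reconciliation can pick the addresses up.
var errAddressesNotReady = errors.New("VM is up but reports no addresses yet")

// reconcileAddresses mirrors the behaviour described above: a VM that is
// not up legitimately has nil addresses, while a VM that is up with no
// addresses is an error that triggers another reconciliation.
func reconcileAddresses(vm VM) ([]string, error) {
	if vm.Status != VMStatusUp {
		// Not an error: the machine simply has no IPs yet.
		return nil, nil
	}
	if len(vm.Addresses) == 0 {
		return nil, errAddressesNotReady
	}
	return vm.Addresses, nil
}

func main() {
	addrs, err := reconcileAddresses(VM{Status: VMStatusDown})
	fmt.Println(addrs, err) // nil addresses, no error: reconciliation succeeds

	_, err = reconcileAddresses(VM{Status: VMStatusUp})
	fmt.Println(err) // error: the controller requeues and retries later

	addrs, _ = reconcileAddresses(VM{Status: VMStatusUp, Addresses: []string{"10.0.0.5"}})
	fmt.Println(addrs)
}
```

The key design point, matching the commit message, is that "no addresses" means two different things depending on VM status: an expected nil result while the VM is down, and a retryable error while it is up.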
@rgolangh: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@rgolangh: This pull request references Bugzilla bug 1817853, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug
In response to this:
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: bennyz, rgolangh. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@rgolangh: All pull requests linked via external trackers have merged: openshift/cluster-api-provider-ovirt#53. Bugzilla bug 1817853 has been moved to the MODIFIED state. In response to this:
/cherry-pick release-4.5
@rgolangh: new pull request created: #54 In response to this: