UPSTREAM: <carry>: openshift: Remove mutating logic from exist and set vm state in update #62
Conversation
Let's wait until
I0723 18:15:36.972456 8674 autoscaler.go:461] [10s remaining] Waiting for cluster to reach original machine count of 6; currently have 7
I0723 18:15:40.069120 8674 autoscaler.go:461] [7s remaining] Waiting for cluster to reach original machine count of 6; currently have 7
I0723 18:15:43.166344 8674 autoscaler.go:461] [4s remaining] Waiting for cluster to reach original machine count of 6; currently have 7
I0723 18:15:46.264489 8674 autoscaler.go:461] [0s remaining] Waiting for cluster to reach original machine count of 6; currently have 7
• Failure [646.895 seconds]
[Feature:Machines][Serial] Autoscaler should
/tmp/tmp.eiofjseXFX/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/autoscaler/autoscaler.go:229
scale up and down [It]
/tmp/tmp.eiofjseXFX/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/autoscaler/autoscaler.go:230
Timed out after 540.000s.
Expected
<int>: 7
to equal
<int>: 6

/test e2e-azure-operator

/test e2e-azure-operator
@enxebre: The following test failed, say /retest to rerun them all:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
label didn't get applied
@enxebre: you cannot LGTM your own PR. In response to this:
/lgtm

/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: frobware

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
@@ -283,45 +283,58 @@ func (s *Reconciler) Update(ctx context.Context) error {
	Message: machineCreationSucceedMessage,
})

vmState := v1beta1.VMState(*vm.ProvisioningState)
The original purpose of `isVMExists` was to decide whether the VM object exists. After the VM object was successfully retrieved, `Exists` went through additional checks: it returned `true` only when the VM was in the `VMStateSucceeded` or `VMStateUpdating` state. However, once `isVMExists` returned `true`, both `s.scope.MachineStatus.VMID` and `s.scope.MachineStatus.VMState` were set. Now that your PR is merged, the workflow for setting `VMID` and `VMState` is different. `Exists` still returns `false`/`true` the same way, but `s.scope.MachineStatus.VMID` and `s.scope.MachineStatus.VMState` will not be set until the `Update` op is called. `s.scope.MachineStatus.VMState` will always be set to `Succeeded` or `Failed`, and it will be empty for the first 40s. The whole purpose of that field was to report which state a VM is in during provisioning; now that is broken.
Remove mutating logic from `Exists()`. Mutating logic in `Exists` is semantically wrong, and it leads to stale-object API update issues given the machine controller's workflow design. The machine controller workflow always heads towards "if exists, then update": any mutating logic other than status should preferably belong in `Update`, so it becomes eventually consistent.

Follow-up for #61