
Vm controller unit tests: fix return args #11322

Closed
wants to merge 3 commits

Conversation

xpivarc
Member

@xpivarc xpivarc commented Feb 21, 2024

What this PR does

This PR fixes unit tests that were returning the wrong VM from the Update operation. This revealed that some assumptions around the volume requests were wrong, so those are fixed as well.

Fixes #

Why we need it and why it was done in this way

Special notes for your reviewer

Release note

NONE

@kubevirt-bot
Contributor

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@kubevirt-bot kubevirt-bot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. release-note-none Denotes a PR that doesn't merit a release note. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. labels Feb 21, 2024
@xpivarc
Member Author

xpivarc commented Feb 21, 2024

/retest pull-kubevirt-unit-test

@kubevirt-bot
Contributor

@xpivarc: The /retest command does not accept any targets.
The following commands are available to trigger required jobs:

  • /test pull-kubevirt-apidocs
  • /test pull-kubevirt-build
  • /test pull-kubevirt-build-arm64
  • /test pull-kubevirt-check-unassigned-tests
  • /test pull-kubevirt-client-python
  • /test pull-kubevirt-code-lint
  • /test pull-kubevirt-e2e-k8s-1.27-sig-compute
  • /test pull-kubevirt-e2e-k8s-1.27-sig-network
  • /test pull-kubevirt-e2e-k8s-1.27-sig-operator
  • /test pull-kubevirt-e2e-k8s-1.27-sig-performance
  • /test pull-kubevirt-e2e-k8s-1.27-sig-storage
  • /test pull-kubevirt-e2e-k8s-1.28-sig-compute
  • /test pull-kubevirt-e2e-k8s-1.28-sig-network
  • /test pull-kubevirt-e2e-k8s-1.28-sig-operator
  • /test pull-kubevirt-e2e-k8s-1.28-sig-storage
  • /test pull-kubevirt-e2e-k8s-1.29-ipv6-sig-network
  • /test pull-kubevirt-e2e-k8s-1.29-sig-compute
  • /test pull-kubevirt-e2e-k8s-1.29-sig-compute-migrations
  • /test pull-kubevirt-e2e-k8s-1.29-sig-network
  • /test pull-kubevirt-e2e-k8s-1.29-sig-operator
  • /test pull-kubevirt-e2e-k8s-1.29-sig-storage
  • /test pull-kubevirt-e2e-kind-1.27-vgpu
  • /test pull-kubevirt-e2e-kind-sriov
  • /test pull-kubevirt-e2e-windows2016
  • /test pull-kubevirt-fossa
  • /test pull-kubevirt-generate
  • /test pull-kubevirt-manifests
  • /test pull-kubevirt-prom-rules-verify
  • /test pull-kubevirt-unit-test
  • /test pull-kubevirt-verify-go-mod

The following commands are available to trigger optional jobs:

  • /test build-kubevirt-builder
  • /test pull-kubevirt-build-s390x
  • /test pull-kubevirt-check-tests-for-flakes
  • /test pull-kubevirt-conformance-arm64
  • /test pull-kubevirt-e2e-arm64
  • /test pull-kubevirt-e2e-k8s-1.29-sig-compute-realtime
  • /test pull-kubevirt-e2e-k8s-1.29-sig-compute-root
  • /test pull-kubevirt-e2e-k8s-1.29-sig-monitoring
  • /test pull-kubevirt-e2e-k8s-1.29-sig-network-multus-v4
  • /test pull-kubevirt-e2e-k8s-1.29-sig-storage-root
  • /test pull-kubevirt-e2e-k8s-1.29-single-node
  • /test pull-kubevirt-e2e-k8s-1.29-swap-enabled
  • /test pull-kubevirt-gosec
  • /test pull-kubevirt-goveralls
  • /test pull-kubevirt-metrics-lint
  • /test pull-kubevirt-unit-test-arm64
  • /test pull-kubevirt-verify-rpms

Use /test all to run the following jobs that were automatically triggered:

  • pull-kubevirt-apidocs
  • pull-kubevirt-build
  • pull-kubevirt-build-arm64
  • pull-kubevirt-check-tests-for-flakes
  • pull-kubevirt-check-unassigned-tests
  • pull-kubevirt-client-python
  • pull-kubevirt-code-lint
  • pull-kubevirt-conformance-arm64
  • pull-kubevirt-e2e-arm64
  • pull-kubevirt-e2e-k8s-1.27-sig-performance
  • pull-kubevirt-e2e-k8s-1.29-sig-compute
  • pull-kubevirt-e2e-k8s-1.29-sig-compute-migrations
  • pull-kubevirt-e2e-k8s-1.29-sig-network
  • pull-kubevirt-e2e-k8s-1.29-sig-operator
  • pull-kubevirt-e2e-k8s-1.29-sig-storage
  • pull-kubevirt-fossa
  • pull-kubevirt-generate
  • pull-kubevirt-goveralls
  • pull-kubevirt-manifests
  • pull-kubevirt-prom-rules-verify
  • pull-kubevirt-unit-test
  • pull-kubevirt-unit-test-arm64
  • pull-kubevirt-verify-go-mod

In response to this:

/retest pull-kubevirt-unit-test

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@xpivarc
Member Author

xpivarc commented Feb 21, 2024

/cc @awels @akalenyu

@xpivarc
Member Author

xpivarc commented Feb 21, 2024

/test pull-kubevirt-unit-test

@xpivarc
Member Author

xpivarc commented Feb 21, 2024

I do wonder why the test `VirtualMachine One valid VirtualMachine controller given should hotplug a vm that is running` and friends expect to see the request. @awels @davidvossel would you be able to answer?

@awels
Member

awels commented Feb 21, 2024

So the way the mechanism works is the following:

  1. The subresource endpoint receives a hot(un)plug request.
  2. It validates the request, and if it passes, it modifies the VM status volume requests.
  3. Virt controller processes this list and modifies the VM spec to indicate the change.
  4. If the VM is active, it then modifies the VMI spec.

So you see the volume request in step 2. That looks like what is failing in the test.
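The steps above can be sketched in miniature. This is a hedged illustration with stand-in types (the real KubeVirt `virtv1` API objects and controller logic are far richer); it only shows the hand-off in step 3, where pending status volume requests are folded into the spec:

```go
package main

import "fmt"

// Stand-in types; the real virtv1 API objects are much richer.
type VolumeRequest struct{ AddVolumeName string }

type VM struct {
	SpecVolumes    []string        // step 3 writes here (VM spec)
	VolumeRequests []VolumeRequest // step 2 writes here (VM status)
}

// syncVolumeRequests sketches step 3: fold each pending status request
// into the spec, then clear the processed requests. Whether the requests
// may be cleared in the same sync is exactly the assumption debated
// later in this thread.
func syncVolumeRequests(vm *VM) {
	for _, req := range vm.VolumeRequests {
		vm.SpecVolumes = append(vm.SpecVolumes, req.AddVolumeName)
	}
	vm.VolumeRequests = nil
}

func main() {
	vm := &VM{VolumeRequests: []VolumeRequest{{AddVolumeName: "vol1"}}}
	syncVolumeRequests(vm)
	fmt.Println(vm.SpecVolumes, len(vm.VolumeRequests))
}
```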

@xpivarc
Member Author

xpivarc commented Feb 21, 2024

So the way the mechanism works is the following:

1. The subresource endpoint receives a hot(un)plug request.

2. It validates the request, and if it passes, it modifies the VM status volume requests.

3. Virt controller processes this list and modifies the VM spec to indicate the change.

4. If the VM is active, it then modifies the VMI spec.

So you see the volume request in step 2. That looks like what is failing in the test.

All the steps are performed, but there is an assumption that the volume requests are not cleared, and I wonder why. This PR exposes that the assumption was wrong from the beginning. I wonder if I can simply remove it from the unit test. WDYT?

Note: See // vol request shouldn't be cleared until update status observes the new volume change

@Barakmor1
Member

/cc

A real Kubernetes API returns the modified VM.

Signed-off-by: Luboslav Pivarc <lpivarc@redhat.com>
Based on the strategy, a VMI creation is expected

Signed-off-by: Luboslav Pivarc <lpivarc@redhat.com>
@xpivarc xpivarc force-pushed the vm-controller-unit-return-args branch from 4b92721 to 3b2c3e2 Compare February 29, 2024 12:21
@xpivarc
Member Author

xpivarc commented Feb 29, 2024

/hold
Depends on #11372

@xpivarc xpivarc marked this pull request as ready for review February 29, 2024 12:22
@kubevirt-bot kubevirt-bot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. and removed do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. labels Feb 29, 2024
Contributor

@jean-edouard jean-edouard left a comment


/approve
That's amazing, thank you! Just a couple unimportant nits in-line.
#11372 is included here (on purpose, I assume); a rebase will probably be needed once that merges.

Comment on lines +461 to +464
vmInterface.EXPECT().Update(context.Background(), gomock.Any()).DoAndReturn(func(ctx context.Context, vm *virtv1.VirtualMachine) (*virtv1.VirtualMachine, error) {
Expect(vm.Spec.Template.Spec.Volumes[0].Name).To(Equal("vol1"))
return vm, nil
})
Contributor

Nit: no need to use DoAndReturn() when no return value has to be computed. Just fixing the func arg in the Do() above would look better IMO. More instances of that are present in the PR.

Member

+1

Member Author

Do you mean using Do and Return? That doesn't work because I still want to return the updated VM, and I can only do that with DoAndReturn. Did I miss anything?
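To illustrate the distinction (a hand-rolled stand-in, not gomock itself): a plain `Return(vm, nil)` hands back whatever object was captured at setup time, while `DoAndReturn` lets the stub return the object the controller actually submitted, which is closer to what a real Update call does:

```go
package main

import "fmt"

type VM struct{ Volumes []string }

// returnFixed mimics gomock's Return(vm, nil): regardless of what the
// controller submits, the caller always gets the object captured when
// the expectation was set up.
func returnFixed(setupVM *VM) func(*VM) *VM {
	return func(_ *VM) *VM { return setupVM }
}

// echoSubmitted mimics what DoAndReturn is used for here: the caller
// gets back the VM it submitted, which is close to what a real Update
// against the API server returns (modulo server-side defaulting).
func echoSubmitted() func(*VM) *VM {
	return func(vm *VM) *VM { return vm }
}

func main() {
	setup := &VM{Volumes: []string{"stale"}}
	submitted := &VM{Volumes: []string{"vol1"}}

	fmt.Println(returnFixed(setup)(submitted).Volumes) // the stale setup object comes back
	fmt.Println(echoSubmitted()(submitted).Volumes)    // the submitted object comes back
}
```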

Contributor

Right, my bad

@kubevirt-bot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jean-edouard

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot kubevirt-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 29, 2024
Member

@0xFelix 0xFelix left a comment


Only nits, looks good to me otherwise

pkg/virt-controller/watch/vm_test.go (outdated, resolved)
Comment on lines +461 to +464
vmInterface.EXPECT().Update(context.Background(), gomock.Any()).DoAndReturn(func(ctx context.Context, vm *virtv1.VirtualMachine) (*virtv1.VirtualMachine, error) {
Expect(vm.Spec.Template.Spec.Volumes[0].Name).To(Equal("vol1"))
return vm, nil
})
Member

+1

pkg/virt-controller/watch/vm_test.go (outdated, resolved)
pkg/virt-controller/watch/vm_test.go (outdated, resolved)
@xpivarc xpivarc force-pushed the vm-controller-unit-return-args branch from 3b2c3e2 to 71f0042 Compare March 1, 2024 08:38
@xpivarc
Member Author

xpivarc commented Mar 1, 2024

/hold Depends on #11372

/hold cancel

@kubevirt-bot kubevirt-bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 1, 2024
1. Remove a wrong assumption that the requests are not
removed in the same sync that updates the spec.

2. Cover more assumptions, especially when the update or
hotplug fails.

Signed-off-by: Luboslav Pivarc <lpivarc@redhat.com>
@xpivarc xpivarc force-pushed the vm-controller-unit-return-args branch from 71f0042 to 6a0bee7 Compare March 1, 2024 12:12
@xpivarc
Member Author

xpivarc commented Mar 1, 2024

Fixed pull-kubevirt-code-lint

 }

 shouldExpectVMFinalizerRemoval := func(vm *virtv1.VirtualMachine) {
 	patch := fmt.Sprintf(`[{ "op": "test", "path": "/metadata/finalizers", "value": ["%s"] }, { "op": "replace", "path": "/metadata/finalizers", "value": [] }]`, virtv1.VirtualMachineControllerFinalizer)

-	vmInterface.EXPECT().Patch(context.Background(), vm.Name, types.JSONPatchType, []byte(patch), &metav1.PatchOptions{}).Return(vm, nil)
+	vmInterface.EXPECT().Patch(context.Background(), vm.Name, types.JSONPatchType, []byte(patch), &metav1.PatchOptions{}).DoAndReturn(
Member

Is this now needed because of the addition of sanityExecute?
Previously this function was doing the right thing as it was just expecting a specific operation and not an actual object change.

Member Author

@xpivarc xpivarc Mar 4, 2024

Well, the change should be done by the client and the modified object should be returned; that is what happens when you run the code against a real Kubernetes API server. Before, it was wrong: we were returning a pointer to some VM which, on top of that, was being modified by the sync.

Note: once we switch to the fake client, this will be handled for us.

Member

This wasn't doing any modifications; it was only expecting a patch. Yes, it did return a VM object, but the same one it got.
But now this actually modifies the object, so I'm a bit confused. Why is it the right thing to do?
I would expect the actual controller code to modify the object, not the test code.
What am I missing?

Contributor

With the previous code, not only did the caller get back a pointer to a VM object that's actively used by the test code, but it also didn't include the requested patch.
Now we actually apply the patch (easy since we already know what it is) and return a pointer to an object that we don't rely on and that won't change over time.
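A minimal sketch of that approach (stand-in types; the finalizer string is a placeholder, not the real `virtv1` constant): because the test already knows the two-op patch it expects — a `test` on the current finalizers followed by a `replace` with an empty list — the mock can verify the precondition and return a patched copy instead of the caller's own pointer:

```go
package main

import "fmt"

type VM struct {
	Name       string
	Finalizers []string
}

// applyFinalizerPatch verifies the "test" op precondition and performs
// the "replace" op on a copy, so the returned object is independent of
// the VM pointer the test code may keep mutating.
func applyFinalizerPatch(vm *VM, finalizer string) (*VM, error) {
	if len(vm.Finalizers) != 1 || vm.Finalizers[0] != finalizer {
		return nil, fmt.Errorf("test op failed: got %v, want [%s]", vm.Finalizers, finalizer)
	}
	patched := *vm                  // copy the struct
	patched.Finalizers = []string{} // the "replace" op
	return &patched, nil
}

func main() {
	const finalizer = "example.com/vm-controller" // placeholder name
	vm := &VM{Name: "testvm", Finalizers: []string{finalizer}}
	out, err := applyFinalizerPatch(vm, finalizer)
	fmt.Println(err, out.Finalizers, vm.Finalizers) // original VM stays untouched
}
```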

Member

@jean-edouard I'm still confused... the previous code only had to observe the patch, not perform it.
Now we are performing the patch in the test, while something else in the controller was supposed to apply it.

Member Author

@xpivarc xpivarc Mar 8, 2024

@jean-edouard +100.
The fact that we did nothing in the past is wrong; it does not reflect what happens when you run the code against a real API.

Note: This can still be wrong, but it is at least slightly better. We need #11297 to get as close as we can to the real behavior.

EDIT: I am happy to wait for #11297

Member

@vladikr vladikr Mar 8, 2024

perhaps it's better to wait.

The fact that we did nothing in the past is wrong, it does not reflect what happens if you run the code against real API.

Mainly because it's not a real API.
It's a unit test; all we needed to do was observe the call being made.
I would expect complex flows to be tested in a functional test.

Contributor

I would expect complex flows to be tested in a functional test.

If our "unit tests" really were unit tests (they're already much more than that) and if functional tests were cheap (they're not), then I would 100% agree with you.
However, in the current state of things, I think we should always prefer unit tests when possible, no matter how complex.
Just my opinion, not really related to this PR.

Member

I agree that controllers are difficult to test. There must be a balance between how much you can mock and the effectiveness of the test.
How do we know that the fake client behaves the same way as a real client?
Do we really need to play a catchup game between the fake client and the real one?

We are responsible for the code in our controller and this is what we should test in the unit tests (whatever they are).

A unit test or integration test will never be a substitute for a functional test that exercises the functionality altogether.
Paying "cheap" for a test will make us pay a much higher price later.

Member Author

How do we know that the fake client behaves the same way as a real client?
Do we really need to play a catchup game between the fake client and the real one?

This is standard tooling from Kubernetes. We are not going to play any catch up as long as we don't want to test specific scenarios/bugs.

We are responsible for the code in our controller and this is what we should test in the unit tests (whatever they are).

Right, but we should make sure we are testing realistic scenarios, and right now we don't. The patch should return an object adjusted only by the changes expressed in the request, but until now we would return an object that contains every change or none (depending on whether we are looking at the code before or after fixing the cache corruption).

A unit test or integration test will never be a substitute for a functional test that exercises the functionality altogether.
Paying "cheap" for a test will make us pay a much higher price later.

+100, we should not remove functional tests unless they are duplicates.

@vladikr
Member

vladikr commented Mar 7, 2024

/lgtm

@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label Mar 7, 2024
@kubevirt-commenter-bot

Required labels detected, running phase 2 presubmits:
/test pull-kubevirt-e2e-windows2016
/test pull-kubevirt-e2e-kind-1.27-vgpu
/test pull-kubevirt-e2e-kind-sriov
/test pull-kubevirt-e2e-k8s-1.29-ipv6-sig-network
/test pull-kubevirt-e2e-k8s-1.27-sig-network
/test pull-kubevirt-e2e-k8s-1.27-sig-storage
/test pull-kubevirt-e2e-k8s-1.27-sig-compute
/test pull-kubevirt-e2e-k8s-1.27-sig-operator
/test pull-kubevirt-e2e-k8s-1.28-sig-network
/test pull-kubevirt-e2e-k8s-1.28-sig-storage
/test pull-kubevirt-e2e-k8s-1.28-sig-compute
/test pull-kubevirt-e2e-k8s-1.28-sig-operator

@vladikr
Member

vladikr commented Mar 7, 2024

/hold
let me just go over this again.

@kubevirt-bot kubevirt-bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 7, 2024
@kubevirt-bot
Contributor

@xpivarc: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
pull-kubevirt-e2e-k8s-1.29-sig-storage | 6a0bee7 | link | true | /test pull-kubevirt-e2e-k8s-1.29-sig-storage
pull-kubevirt-e2e-k8s-1.27-sig-compute | 6a0bee7 | link | true | /test pull-kubevirt-e2e-k8s-1.27-sig-compute


@xpivarc
Member Author

xpivarc commented Mar 25, 2024

/hold let me just go over this again.

@vladikr I would change this to be based on #11297

@kubevirt-bot
Contributor

PR needs rebase.


@kubevirt-bot kubevirt-bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 9, 2024
@xpivarc
Member Author

xpivarc commented Apr 17, 2024

/close
#11726

@kubevirt-bot
Contributor

@xpivarc: Closed this PR.

In response to this:

/close
#11726


Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. lgtm Indicates that a PR is ready to be merged. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. release-note-none Denotes a PR that doesn't merit a release note. size/L