VirtualMachineReplicaSet to scale VMs #453
Initial simple implementation of a VirtualMachineReplicaSet.
```yaml
apiVersion: kubevirt.io/v1alpha1
kind: VirtualMachineReplicaSet
metadata:
  name: myrs
spec:
  replicas: 3
  selector:
    matchLabels:
      myselector: myselector
  template:
    metadata:
      name: test
      labels:
        myselector: myselector
    spec:
      domain:
        devices:
          consoles:
          - type: pty
        memory:
          unit: MB
          value: 64
        os:
          type:
            os: hvm
        type: qemu
```
fabiand left a comment
That is, if VMs could be "attached" to pods, I was wondering whether we could avoid implementing a custom workload type.
@rmohr did you consider such an approach?
@fabiand yes, I considered reusing it. It is not yet fully off the table, so this can also be seen as a POC for the completely layered approach. As for not having our own cluster-level object: whether we reuse ReplicaSets or not, I think we need that.
@rmohr I agree that having a VM-specific cluster object to represent the replica set makes sense. I also like the idea of calling it a VirtualMachineReplicaSet. That naming convention makes it pretty obvious what the object does within the context of k8s.
I'm still contemplating the approach of implementing our own replication logic instead of leveraging the k8s controllers to do this for us.
For the sake of argument, how would it work if we reused the k8s ReplicaSet behind the scenes? Below are my thoughts on how I'd approach it.
Basically, with this design the VMRS controller would just do these things.
Thoughts on the pros/cons of something like what I've outlined above vs. rolling our own replication logic?
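The enumerated steps are not preserved in this excerpt, but the gist of the wrapper idea can be sketched. The following is a rough, hypothetical illustration (not what the PR ultimately implemented; the type and image names are invented): the VMRS controller would own a plain k8s ReplicaSet whose pod template runs a launcher container that boots the VM, and the stock ReplicaSet controller would do the actual scaling.

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildBackingReplicaSet derives a plain k8s ReplicaSet from the desired
// VM replica spec. The VMRS controller would create and update this object
// and leave the scaling to the built-in ReplicaSet controller.
func buildBackingReplicaSet(name string, replicas int32, vmLabels map[string]string) *appsv1.ReplicaSet {
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: name + "-pods"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: vmLabels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: vmLabels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "virt-launcher",          // hypothetical launcher container
						Image: "kubevirt/virt-launcher", // hypothetical image
					}},
				},
			},
		},
	}
}
```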
Updating resources with KubeVirtClient failed because the resource name was missing. Signed-off-by: Roman Mohr <firstname.lastname@example.org>
Make sure the VirtualMachineReplicaSetList implements the List interface from k8s, to allow listing inside the k8s code. Signed-off-by: Roman Mohr <email@example.com>
VirtualMachineReplicaSet will calculate the number of missing VMs and will invoke parallel create and delete calls to the apiserver.
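A minimal sketch of that diff-and-burst logic, assuming hypothetical `createVM`/`deleteVM` client wrappers (the real controller's signatures will differ):

```go
package sketch

import "sync"

// scale compares the desired replica count with the VMs the cache knows
// about, then fires the create/delete calls concurrently. createVM and
// deleteVM stand in for the real apiserver client calls.
func scale(wantReplicas int, vms []string, createVM func(), deleteVM func(name string)) {
	var wg sync.WaitGroup
	diff := wantReplicas - len(vms)
	if diff > 0 {
		// Too few VMs: create the missing ones in parallel.
		wg.Add(diff)
		for i := 0; i < diff; i++ {
			go func() { defer wg.Done(); createVM() }()
		}
	} else if diff < 0 {
		// Too many VMs: delete the surplus in parallel.
		surplus := -diff
		wg.Add(surplus)
		for _, name := range vms[:surplus] {
			go func(n string) { defer wg.Done(); deleteVM(n) }(name)
		}
	}
	wg.Wait()
}
```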
Start with a replica count of zero, scale up and down a few times, and finally return to zero replicas.
Right now the event update handlers for the VirtualMachineReplicaSet consist almost entirely of identical code. Add a helper and reuse it in the handlers. Signed-off-by: Roman Mohr <firstname.lastname@example.org>
In order to avoid difficulties when running the functional tests, use VMs with ephemeral, independent disks for the created VM sets. Signed-off-by: Roman Mohr <email@example.com>
Live migrations are an area where the VM replica set use case diverges from the container use case.
I don't think we have to have live migration completely solved within the context of replica sets before this work can be merged, but it would be great if we had a general idea of how we plan to approach that problem. @rmohr Maybe you could put your thoughts on this topic in the proposal?
Don't set the replica count after successful CRUD operations. Instead wait for the cache to provide the right values, to avoid value inconsistencies. Signed-off-by: Roman Mohr <firstname.lastname@example.org>
Add a mock workqueue, which allows waiting for an expected number of enqueues. This allows synchronous testing of the controller. The typical pattern is:

```go
mockQueue.ExpectAdd(3)
vmSource.Add(vm)
vmSource.Add(vm1)
vmSource.Add(vm2)
mockQueue.Wait()
```

This ensures that the informer callbacks listening on vmSource enqueued an object three times. Since enqueuing is typically the last action in listener callbacks, we can assume that the wanted scenario for the controller is set up and that an execution will process this scenario. Signed-off-by: Roman Mohr <email@example.com>
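For illustration, a condensed sketch of how such a mock queue could be built on top of client-go's workqueue. This is an assumption about the helper's shape, not the exact implementation from the PR:

```go
package testutils

import (
	"sync"

	"k8s.io/client-go/util/workqueue"
)

// MockWorkQueue wraps a real work queue and lets a test block until an
// expected number of Add calls has been observed.
type MockWorkQueue struct {
	workqueue.RateLimitingInterface
	wg sync.WaitGroup
}

// NewMockWorkQueue wraps an existing queue.
func NewMockWorkQueue(q workqueue.RateLimitingInterface) *MockWorkQueue {
	return &MockWorkQueue{RateLimitingInterface: q}
}

// ExpectAdd registers how many Add calls the test is about to trigger.
func (q *MockWorkQueue) ExpectAdd(n int) {
	q.wg.Add(n)
}

// Add enqueues the item on the wrapped queue and marks one expected
// addition as done.
func (q *MockWorkQueue) Add(item interface{}) {
	q.RateLimitingInterface.Add(item)
	q.wg.Done()
}

// Wait blocks until every expected Add has happened, i.e. until all
// informer callbacks have finished enqueuing.
func (q *MockWorkQueue) Wait() {
	q.wg.Wait()
}
```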
VirtualMachineReplicaSet tests contain a mock controller which did not verify that all invocations matched the expectations. Add the final check in TearDown and adjust the tests. Signed-off-by: Roman Mohr <firstname.lastname@example.org>
Clarify that the VirtualMachineReplicaSet does not guarantee that there will never be more than the specified number of replicas in the cluster. Add a description of the readyReplicas status field. Signed-off-by: Roman Mohr <email@example.com>
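To make the documented semantics concrete, here is a hypothetical sketch of the status shape; the field names follow the k8s ReplicaSet convention, and the actual KubeVirt type may differ:

```go
package sketch

// VirtualMachineReplicaSetStatus sketches the status documented by the
// commit above.
type VirtualMachineReplicaSetStatus struct {
	// Replicas is the number of VMs currently matched by the selector.
	// Scale-down is not instantaneous, so this can temporarily exceed
	// spec.replicas: the controller converges on the target, it does not
	// guarantee an upper bound at every instant.
	Replicas int32 `json:"replicas"`

	// ReadyReplicas counts only the matched VMs that are actually up.
	ReadyReplicas int32 `json:"readyReplicas,omitempty"`
}
```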
Update the proposal with justifications for why a reimplementation was chosen over wrapping the Kubernetes ReplicaSet. Signed-off-by: Roman Mohr <firstname.lastname@example.org>
Instead of counting the number of invocations of a mock method in a callback, just specify the expected number of invocations. This removes a race condition in these tests. Signed-off-by: Roman Mohr <email@example.com>
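As an illustration of the pattern, assuming gomock and a hypothetical generated MockVMInterface, letting the framework count the invocations looks like this:

```go
package tests

import (
	"testing"

	"github.com/golang/mock/gomock"
)

func TestScaleOutCreatesMissingVMs(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish() // fails the test if any expectation was not met

	vmInterface := NewMockVMInterface(ctrl) // hypothetical generated mock

	// Let gomock enforce the invocation count instead of incrementing a
	// counter inside a Do() callback and asserting on it later (racy).
	vmInterface.EXPECT().Create(gomock.Any()).Times(3)

	// ... drive the controller here so it creates the three missing VMs ...
}
```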
@davidvossel added the explanation of how I plan for us to support live migrations for all types of controllers to the proposal. If we follow the currently laid-out architecture (independent of the new discussion on the mailing list https://groups.google.com/forum/#!topic/kubevirt-dev/4XBlPPEcIq4), the live migration story is in my opinion solid and solved, or am I missing something?
Make sure that the VirtualMachineTemplate labels are matched by the VirtualMachineReplicaSet selector. If they don't match, log it and ignore the ReplicaSet. Signed-off-by: Roman Mohr <firstname.lastname@example.org>
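The guard can be expressed with the standard apimachinery helpers; a sketch under the assumption that the controller has the spec's LabelSelector and the template labels at hand:

```go
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// templateMatchesSelector implements the guard from the commit above: a
// replica set whose selector cannot match its own template labels would
// create VMs it immediately loses track of, so it has to be skipped.
func templateMatchesSelector(sel *metav1.LabelSelector, templateLabels map[string]string) (bool, error) {
	selector, err := metav1.LabelSelectorAsSelector(sel)
	if err != nil {
		return false, err
	}
	return selector.Matches(labels.Set(templateLabels)), nil
}
```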
Really great work Roman! This is solid.
Code and testing wise, everything looks in order to me at this point. I'm going to wait until we conclude the controller naming discussion before marking "approved"
If you want to merge and continue the naming discussion that's fine - I don't know that you have to do them in order as long as you think you can resolve that concern before you get locked into supporting the API…
On Tue, Sep 26, 2017 at 1:22 PM, David Vossel wrote: @davidvossel added the tests for the expectations. Apart from the discussion of whether we should keep the name for the controller, I think everything that should be here right now is there.
davidvossel left a comment
@rmohr you rock man!
As for the naming discussion. I caught up with Roman a bit out of band. We both agree that VMRS may diverge slightly from k8s RS, but the key here is VMRS will always (for the foreseeable future) work with ephemeral VMs.
So, as long as VMRSs are managing ephemeral VMs, I feel comfortable drawing the comparison to the k8s RS.