VMs should allow being created without running #169
Comments
I would argue that only the disks of a VM are stateful, not the VM itself, just as with pods that use persistent storage: like with such pods, the disk outlives the lifetime of the VM. Long-term storage (that is how I would call it) is a purely administrative task, the same as with pod definitions, daemon sets, and so on. They all stay in the cluster only for as long as they have a task (something active) to fulfill. Anyway, if that really happens, we also have to understand how etcd would react to it in scaled environments (and check the Kubernetes bugs regarding etcd scalability), where all the active watches are probably much busier providing the data. I would really want to be sure that something which is just for convenience is not hurting us.
This is exactly the distinction between persistent and non-persistent domains in libvirt. I remember talking to Michal about it and his wish to move to persistent domains (I've looked at it from the Lago perspective). It's not only about storage, but also about persisting the configuration of the VM.
Yes, libvirt does extensive defaulting. Further, we have to support, in one way or another, all the special fields in the cluster-wide representation of the VM (so that we don't end up saving the valuable added defaults only on the host). Therefore the important bit here would be providing a nice way of exporting the information from the cluster; I don't see the need to always keep it in. One way to keep the relevant bits is to add something like the following: just as the actual Pod or Deployment configuration is valuable, a VM configuration is valuable, so making sure that it can be retrieved is very important, but saving it for convenience does not seem so important from my perspective. KubeVirt should definitely not stand in your way if you want to add something like this on top of it (if that were not the case, I would have a problem with it too). You can save it in another TPR, in a directory, in a relational database, whatever you prefer.
The benefit of having all defined VMs in the cluster (regardless of their state), as the same kind of objects, is that external components could discover all VMs. If the VM registry lives outside of the cluster (or uses non-VM objects), then it is not possible to discover the VMs just by inspecting the cluster. Previously I was considering keeping it as we have it today, but the argument of being able to discover the cluster makes sense to me.
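The discoverability argument can be illustrated with a small sketch. The `VM` type and `inventory` function below are hypothetical, not KubeVirt's actual API: the point is that when stopped VMs remain first-class cluster objects, a single list over one object kind yields the complete inventory, running or not.

```go
package main

import "fmt"

// VM is a hypothetical cluster object; Running reflects whether a
// domain is currently active for it.
type VM struct {
	Name    string
	Running bool
}

// inventory returns the names of all defined VMs, regardless of
// state, as an external component would see them with one list call.
func inventory(vms []VM) []string {
	names := make([]string, 0, len(vms))
	for _, vm := range vms {
		names = append(names, vm.Name)
	}
	return names
}

func main() {
	vms := []VM{
		{Name: "web", Running: true},
		{Name: "db-backup", Running: false}, // stopped, but still discoverable
	}
	fmt.Println(inventory(vms)) // [web db-backup]
}
```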
/cc @michalskrivanek |
@fabiand - it is an interesting thought. In practice we're talking about thousands (or even tens of thousands) of VMs in a cluster. I would really like to know about etcd scalability before going there.
@michalskrivanek I am not so worried about this number; after all, I'd expect that kube, and thus etcd, needs to handle a magnitude more containers and related objects. And there is another point besides discoverability: a user has the expectation to be able to shut down a VM and keep it shut down. But right now, IIUIC, the VM would be restarted.
@fabiand DR would assume that it is replicated to all hosts. Overall, the required CPU and bandwidth should be negligible enough not to affect the VMs. I'm not worried that bare etcd cannot handle that, but all components need to scale along with it.
From my perspective you are then not only discovering what is part of the cluster but also what might be part of the cluster if one chooses (which, in my understanding, means it is not part of the cluster). I would prefer to only work with things in the cluster which meet one of two conditions:
That is where I would draw the line. As with a replication controller (which allows a count of 0), there might be side effects of technical necessities, e.g. upgrading something, which you can 'exploit' for storing something for later use, but that is not why it is there in the first place.
@fabiand @michalskrivanek the restart should be handled by #75. It's just a limitation of the current code.
OTOH we do have entities like ConfigMaps and Secrets, which exist but don't consume resources.
@fabiand Right. They are special in the sense that they are never active and never consume resources.
Had a discussion with @michalskrivanek about this, and if I understood him right, he was more interested in having the VM definition on the node, in case a central entity can't be reached. That way virt-handler can decide on the host alone whether a VM should be restarted, and does not lose the VM definition just because the VM went down. @michalskrivanek, correct me if I am wrong.
Yes. Nothing more than a convenience in that case; for all other cases the only source of truth is elsewhere. It assumes the local definitions are complete, at least from the host perspective. They don't have to include the cluster-related data, though that might be handy for DR. Alternatively, that is not necessary either if the storage persists the whole VM spec.
@rmohr The VM definition itself does not, like a ConfigMap, consume resources; it's only the domain which consumes resources. I could imagine that we keep the domxml on the host in some cases, but this obviously opens up a range of problems related to falling out of sync and to making decisions in the absence of network connectivity.
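The node-local convenience copy discussed here could be sketched as a tiny cache kept by virt-handler. All names and types below are hypothetical, not KubeVirt's actual code; the cluster stays the single source of truth, and the cache only lets the host act when that source is unreachable.

```go
package main

import "fmt"

// specCache is a hypothetical node-local store of the last VM specs
// seen from the cluster. It is a convenience copy only; the cluster
// remains the single source of truth.
type specCache struct {
	specs map[string]string // VM name -> serialized spec
}

func newSpecCache() *specCache {
	return &specCache{specs: make(map[string]string)}
}

// Update records the spec whenever the cluster is reachable.
func (c *specCache) Update(name, spec string) {
	c.specs[name] = spec
}

// LocalSpec is consulted when the central entity cannot be reached,
// e.g. to decide on the host alone whether a VM should be restarted.
func (c *specCache) LocalSpec(name string) (string, bool) {
	spec, ok := c.specs[name]
	return spec, ok
}

func main() {
	cache := newSpecCache()
	cache.Update("testvm", "<domain>...</domain>")

	// Network partition: the host can still recover the definition.
	if spec, ok := cache.LocalSpec("testvm"); ok {
		fmt.Println("restart possible, spec found:", spec)
	}
}
```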
Closing this issue in favor of #267, which also has the relevant proposal attached.
Currently the assumption is that VMs are running as long as they are defined in the cluster. To stop them, the object needs to be removed.
This was chosen to match a pod's behavior.
But eventually it makes sense to allow stopped VMs in KubeVirt. The reason is that VMs are stateful, and thus their state outlives their life-cycle.
My suggestion: allow VMs to be stopped, and allow creating them stopped. With such a change KubeVirt could also act as a VM store.
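The suggested behavior can be sketched as a small reconciliation rule. The `VM` object and its fields below are illustrative assumptions, not KubeVirt's real API: the object carries a desired running flag, a controller compares it with the observed phase, and a VM can therefore exist in the cluster while stopped.

```go
package main

import "fmt"

// Phase is the observed state of a VM's backing domain.
type Phase string

const (
	PhaseRunning Phase = "Running"
	PhaseStopped Phase = "Stopped"
)

// VM is a hypothetical cluster object: Running records the desired
// state, Phase what the node actually reports.
type VM struct {
	Name    string
	Running bool  // desired: should a domain exist for this VM?
	Phase   Phase // observed
}

// Reconcile returns the action a controller would take, so that a VM
// object can stay defined in the cluster without a running domain.
func Reconcile(vm VM) string {
	switch {
	case vm.Running && vm.Phase != PhaseRunning:
		return "start"
	case !vm.Running && vm.Phase == PhaseRunning:
		return "stop"
	default:
		return "noop" // definition stays stored; nothing to do
	}
}

func main() {
	// A VM created stopped: the object exists, no resources are consumed.
	fmt.Println(Reconcile(VM{Name: "vm-a", Running: false, Phase: PhaseStopped})) // noop
	// The user flips the flag to start it.
	fmt.Println(Reconcile(VM{Name: "vm-a", Running: true, Phase: PhaseStopped})) // start
}
```

The "noop" branch is the interesting one: deleting the object is no longer the only way to stop a VM, so the definition survives as a stored record, which is what lets KubeVirt act as a VM store.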