CLONE_TARGET not used for non-persistent images in shared and qcow2 TM_MADs #2791
Comments
I have a feeling that #2246 (comment) is related too...
Another affected use case is live migration when there is a Ceph SYSTEM DS and an additional shared (NFS, Gluster, etc.) IMAGE DS. The live migration should work when the shared TM_MAD has
bump
Hi Anton,

Yes, CLONE_TARGET and similar parameters are not configuration parameters. That is, they will not modify the behavior of the driver in any way. These parameters are used to let oned know how a specific driver performs a given operation. For example, if CLONE_TARGET is self, oned knows that when that driver is used it needs to take space from the image datastore and not the system datastore, and so it updates datastore quotas and usage accordingly.

Now we have the hybrid modes; in this case we have CLONE_TARGET_SSH, for example. This means that when an image datastore is used with a system datastore of type SSH, instead of using CLONE_TARGET you need to use CLONE_TARGET_SSH, because in this case CLONE operations are going to take storage space from the system datastore and not the image datastore.

Hope it is clearer now. Cheers!
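The distinction between the plain and hybrid attributes shows up in oned.conf; a minimal sketch of two TM_MAD_CONF entries (attribute values here are illustrative, check the entries shipped with your own version):

```
# Sketch of TM_MAD_CONF entries in /etc/one/oned.conf (values illustrative):
TM_MAD_CONF = [
    NAME = "shared", LN_TARGET = "NONE", CLONE_TARGET = "SYSTEM", SHARED = "YES"
]

# The *_SSH attributes tell oned how the driver behaves when combined with an
# SSH system datastore: here CLONE operations consume SYSTEM DS space instead.
TM_MAD_CONF = [
    NAME = "ceph", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "YES",
    CLONE_TARGET_SSH = "SYSTEM", LN_TARGET_SSH = "SYSTEM"
]
```

As described above, none of these values change what the driver scripts do; they only describe the driver's behavior to oned for quota and usage accounting.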
Still not sure if this is a bug or a feature, or whether we can close it with the previous notes... I'll leave it open (now we have a super-duper stale bot for the catch ;)
Hi Ruben, So if the CLONE_* and LN_* variables are for accounting purposes (and for whitelisting the mixed modes), then there is some sort of inconsistency in the behaviour with the comment in
So the question is how to define the following case? I think it is not clear that, when the VM is deployed, the persistent root disk is not so persistent anymore, because it is copied to the host's filesystem. IMHO, at the same time, for
IMHO the admin should expect the TM_MAD to behave following the description: do nothing, clone/symlink, copy...
I don't see any inconsistency; the parameters determine how the driver performs the operations. These parameters, let's say, are for driver developers, so they can express how the driver works. Maybe we can change the sentence to: "ln_target: determines how the driver clones persistent images when a new VM is instantiated." Maybe it is clearer that way.
It would be deployed in the hypervisor's local storage (that's the main use case, to improve I/O performance). Note that you will have the persistent semantics of SSH: a persistent disk will be copied back from the system datastore to the image datastore. So no matter the transfer mode, a persistent disk will always be persistent, although the specific semantics may change for different drivers (e.g. when the changes are committed to the original image).
I beg your pardon, but there is no "would" in this line :) So in the case of a host failure, the VM will lose all data when it is rescheduled on another host, because the image will be copied from the "shared" datastore, reverting the VM to the state when the image was last saved (last undeploy).

IMO the current implementation does not allow the admin to have a truly persistent image on the shared storage. Yes, it is slower than a local SSD, but the data is persistent and current, even on host disk failures. Currently the description of ln_target in this case is "ln_target: determines how the space is accounted by OpenNebula". I think the code should be refactored to at least do a symlink by default and do a copy on admin/user consent, just like with the ceph tm-mad.

A slightly different but similar case is with the non-persistent images... In general, I as an admin would like the driver to follow CLONE_TARGET as it is described in the oned configuration: when self, do a copy on the shared storage and a symlink in the SYSTEM DS; when system, copy the image to the SYSTEM DS; when none, in the case of shared it should raise an "unsupported configuration" error. Similarly with LN_TARGET for the persistent images: when I set none, do a symlink; when self, raise an "unsupported by shared tm_mad" error; and only when system, copy the image to the system datastore.

Hope this clears up my concern on the subject :) Cheers,
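The mapping proposed in the comment above can be sketched as a small dispatch table. This is a hypothetical illustration of the *requested* semantics, not code from any OpenNebula driver; the function names and action strings are made up for the sketch:

```shell
#!/bin/sh
# Hypothetical sketch of the behaviour proposed above for a shared TM_MAD.
# Maps CLONE_TARGET (non-persistent) / LN_TARGET (persistent) values to actions.

clone_action() {            # $1 = CLONE_TARGET value
    case "$1" in
        self)   echo "copy-on-image-ds+symlink" ;;   # clone in IMAGE DS, link in SYSTEM DS
        system) echo "copy-to-system-ds" ;;          # full copy into SYSTEM DS
        none)   echo "error: unsupported configuration" ;;
        *)      echo "error: unknown CLONE_TARGET '$1'" ;;
    esac
}

ln_action() {               # $1 = LN_TARGET value
    case "$1" in
        none)   echo "symlink" ;;                    # just link the persistent image
        self)   echo "error: unsupported by shared tm_mad" ;;
        system) echo "copy-to-system-ds" ;;
        *)      echo "error: unknown LN_TARGET '$1'" ;;
    esac
}

clone_action self    # prints: copy-on-image-ds+symlink
ln_action none       # prints: symlink
```

The point of the sketch is that the driver would *consult* the configured targets at run time, instead of the targets merely describing a fixed behaviour to oned.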
I understand, but the parameter works the other way around: it does not change the driver behavior, it states how the driver works. When a shared storage is combined with the ssh drivers, the ssh drivers will be used, and the ssh driver works by copying the files to the remote storage. The ssh driver doesn't assume a shared storage, and that must remain the same way. If you need other behavior, then we need a different driver set, an fs_ssh for example (like fs_lvm, which works in a similar way to what you described, i.e. assuming a shared storage).
So it is not a bug, it's a feature :) Closing the ticket. |
:) |
Labels are now natively displayed without applying any text transforms to them. (cherry picked from commit e4a26f9)
Description
Following the oned.conf documentation, there are three modes described as follows:
To Reproduce
There are different and more complex use cases, but I think the following is the simplest one:
disk.X.snap/0
in the SYSTEM datastore)

Expected behavior
A symlink to an image clone for the given datastore created in the IMAGE datastore should be created in the SYSTEM datastore.
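One quick way to see which behaviour you actually got is to check whether the disk in the SYSTEM datastore is a symlink or a full copy. A minimal sketch, where the helper name and the example datastore/VM path are assumptions to be adjusted for your deployment:

```shell
#!/bin/sh
# Report whether a VM disk in the SYSTEM DS is a symlink (image cloned in the
# IMAGE DS and linked) or a regular file (copied into the SYSTEM DS).

disk_mode() {               # $1 = path to disk.X in the SYSTEM datastore
    if [ -L "$1" ]; then
        echo "symlink -> $(readlink "$1")"
    elif [ -f "$1" ]; then
        echo "copy"
    else
        echo "missing"
    fi
}

# Example path on a typical front-end (assumption, adjust DS and VM IDs):
# disk_mode /var/lib/one/datastores/0/42/disk.0
```

With the expected behavior described above, the check should print `symlink -> ...`; with the reported behavior it prints `copy`.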
Details
Additional context
Looks like the behavior for the persistent images (LN_TARGET) is also hard-coded to always do a symlink (and to always do a copy if the SYSTEM TM_MAD=ssh).
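In essence, the reported hard-coding amounts to something like the following. This is a simplified sketch of a TM "ln" action, not the actual OpenNebula script; the function name and arguments are invented for illustration:

```shell
#!/bin/sh
# Simplified sketch of the reported hard-coded behaviour of a shared TM "ln"
# action: LN_TARGET is never consulted; the link/copy choice depends only on
# whether the SYSTEM datastore uses the ssh TM_MAD.

tm_ln() {                   # $1 = source image path, $2 = destination path,
                            # $3 = SYSTEM datastore TM_MAD
    if [ "$3" = "ssh" ]; then
        cp "$1" "$2"        # always copy when the SYSTEM DS is ssh
    else
        ln -s "$1" "$2"     # always symlink otherwise
    fi
}
```

Note that nothing in the sketch reads LN_TARGET, which is the inconsistency being reported: the configured value has no effect on what the driver does.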
The description of "Shared & Qcow2 Transfer Modes" in the Open Cloud Storage Setup guide is also not clear about when a symlink is created and when a file copy is made, for both persistent and non-persistent images.
I am not sure if this is a bug or expected behavior that needs clarification.
Progress Status