Process all VMs from a pool #52
It would be good to process all VMs from a specified pool. For example, something like this:

cv4pve-autosnap --vmid="@Poolname"

Comments
Hi, […] Best regards
Well, actually it is possible to have a VM named "pool-1", so such a prefix does not seem like a good idea.
Hi, for compatibility we can maintain the old format and introduce a new format: --vmid=@Poolname
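As a sketch, the two formats side by side might look like this (host, credentials, label, and keep count are made up; the snap command and its options follow the project's README examples):

    cv4pve-autosnap --host=pve1.local --username=root@pam --password=secret --vmid=100,101 snap --label=daily --keep=3
    cv4pve-autosnap --host=pve1.local --username=root@pam --password=secret --vmid=@Poolname snap --label=daily --keep=3

The first line is the old format (explicit VM ids); the second would select every VM in the named pool.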
Looks nice. :-)
The pool is unique in the cluster, so it is not necessary to specify the host.
...but the node is not unique to the pool. You might want to do the backup on one node only (all VMs on that node, in a particular pool). Actually this is reasonable: if you run cv4pve-autosnap in a cron job on one node, everything goes fine until that node fails. Then your snapshots are no longer created, even though the other nodes are still up. It is a better idea to run cv4pve-autosnap separately on every node, so that every node takes care of its own VMs only. So you need to "snapshot all VMs in a given pool, on a specific node only".
It is not necessary to install cv4pve-autosnap inside a node; it works externally through the API. Installation outside the cluster is preferred.
Good point. :-)
When you execute cv4pve-autosnap, it is not important which node you run it on, because it will snapshot the VMs specified in the --vmid parameter wherever they are in the cluster (even if there is only one node). What you want is perhaps something different. Give me some examples. Best regards
Execution is successful as long as it is executed at all. Imagine there are two nodes (node1, node2). cv4pve-autosnap is installed on node1 and automatically creates snapshots of all VMs in a pool; the snapshots are created on all nodes, and everything works. Now node1 goes down. node2 still works, but snapshots are not created anymore. Solution: install cv4pve-autosnap on all nodes, each creating snapshots on its own node only. This way snapshots on node2 are not affected by a failure of node1.
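A sketch of that per-node setup with cron (hostnames and credentials are made up; @all-<node> is the README's selector for all VMs on one host, while the pool-plus-node filter requested here would slot in the same way):

    # /etc/cron.d/cv4pve-autosnap on node1; node2 gets the same line with its own names
    0 * * * * root cv4pve-autosnap --host=node1.local --username=root@pam --password=secret --vmid=@all-node1 snap --label=hourly --keep=24

If node1 fails, node2's cron job keeps snapshotting node2's VMs, untouched.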
Installing cv4pve-autosnap outside the cluster, the problem does not exist. However, what happens if, under HA, a VM is moved from one node to another? You would no longer snapshot it.
Yes, but that requires "the outside" not to fail. So we come back to the initial problem: one failure stops snapshots in the whole cluster.
Why? After migration the VM will stay in the same pool, so no problem: snapshots will automatically be made on the new node. (If each node makes snapshots of "all my own VMs in the pool", then the newly migrated VM will also be snapshotted. And this is the crux of the idea.)
In the latest version you can specify the pool using --vmid=@pool-Poolname. Best regards
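For reference, a full invocation in the style of the README examples (host, credentials, and pool name are placeholders):

    cv4pve-autosnap --host=pve1.local --username=root@pam --password=secret --vmid=@pool-Poolname snap --label=daily --keep=7

This snapshots every VM in the pool and keeps the last 7 snapshots labelled "daily" for each.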