Visually represent DISK IMAGE SNAPSHOT dependency for Ceph datastores which "ALLOW_ORPHANS" #2052
Comments
+1
juanmont added the Status: Pending, Category: CLI, Community, Category: Drivers - Storage, Category: Sunstone and Type: Bug labels on May 3, 2018
vholer added this to the Release 5.6.2 milestone on Sep 27, 2018
vholer added the Sponsored and Status: Accepted labels and removed the Community and Status: Pending labels on Sep 27, 2018
I can see a problem. It looks like having either a tree-only or a flat (list-only) snapshot inventory in ONE doesn't match how Ceph snapshots are managed. They are a mix of both; e.g., if I have 5 snapshots, revert to snapshot ID=2, and take another 5 snapshots, the hierarchy might look like:
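(a sketch, assuming the five new snapshots get IDs 5-9 and all depend on snapshot 2, while snapshots 3 and 4 remain from before the revert)

0
1
2
 \_ 5
 \_ 6
 \_ 7
 \_ 8
 \_ 9
3
4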
Rescheduling the issue.
rsmontero modified the milestones: Release 5.6.2 → Release 5.6.3 on Oct 16, 2018
added a commit to christian7007/one that referenced this issue on Dec 13, 2018
MedicMomcilo commented on Dec 14, 2018
Please also note that this means that if you ever revert to an earlier snapshot, it becomes unremovable.
added a commit to christian7007/one that referenced this issue on Dec 14, 2018
christian7007 self-assigned this on Dec 20, 2018
added a commit that referenced this issue on Dec 24, 2018
OpenNebulaSupport closed this on Jan 3, 2019
OpenNebulaSupport modified the milestones: Release 5.6.3 → Release 5.8 on Jan 3, 2019
jsfrerot commented on Jan 15, 2019
To reply to MedicMomcilo: not being able to remove a snapshot you reverted to prevents you from resizing the volume for that VM. I ran into this problem, and the only way I was able to fix it was to manually flatten the working volume with Ceph commands; after that I could remove the snapshot and then resize the disk with OpenNebula. Should I report a new bug with this information?
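The exact Ceph commands are not spelled out above; a minimal sketch of such a manual flatten with standard rbd commands, assuming the pool and image names from the reproduction below, could be:

$ rbd flatten BITED-152641-TESTCLOUD/one-596              # copy all parent data into the clone, detaching it
$ rbd snap unprotect BITED-152641-TESTCLOUD/one-596-0@0   # the snapshot no longer has children
$ rbd snap rm BITED-152641-TESTCLOUD/one-596-0@0          # now it can be removed

With the dependency gone, the snapshot delete and the disk resize should succeed from OpenNebula.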
hydro-b commented on May 2, 2018
Bug Report
Version of OpenNebula
Component
Description
With storage systems that allow "ALLOW_ORPHANS=YES", like Ceph, there is no indication for the user that there are dependencies between snapshots, as there is with "qcow2" for example. As long as you never "revert" to a VM DISK IMAGE snapshot, there is indeed no dependency. But if at some point you revert to a snapshot and later take new snapshots based on it, there are dependencies.
Expected Behavior
Visually represent snapshot dependencies when they occur, with a "tree"-wise representation like the one used for qcow2 snapshots.
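For instance (a hypothetical rendering based on the reproduction output below), the PARENT column could point at the snapshot the current chain depends on instead of -1:

VM DISK SNAPSHOTS
AC ID DISK PARENT DATE SIZE NAME
 0 0 -1 05/01 16:13:24 -/10G snap1
=> 1 0 0 05/01 16:20:02 -/10G snap3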
Actual Behavior
Snapshots seem to be independent, but in fact are not.
How to reproduce
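Each step below shows the state first from the ONE CLI and then from Ceph (presumably "rbd ls -l", given the size/parent/lock columns shown); the VM ID is 596 and the disk ID is 0. The commands sketched between the steps are assumptions based on the standard onevm disk-snapshot-* subcommands.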
ONE CLI POINT OF VIEW
VM DISKS
ID DATASTORE TARGET IMAGE SIZE TYPE SAVE
0 BITED-1526 sda stefan-snapshot-issue-test01 -/10G rbd YES
1 - hda CONTEXT -/- - -
CEPH "rbd ls" POINT OF VIEW
one-596 10240M 2 excl
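Take the first snapshot, presumably:
$ onevm disk-snapshot-create 596 0 snap1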
ONE CLI POINT OF VIEW
VM DISKS
ID DATASTORE TARGET IMAGE SIZE TYPE SAVE
0 BITED-1526 sda stefan-snapshot-issue-test01 -/10G rbd YES
1 - hda CONTEXT -/- - -
VM DISK SNAPSHOTS
AC ID DISK PARENT DATE SIZE NAME
=> 0 0 -1 05/01 16:13:24 -/10G snap1
CEPH "rbd ls" POINT OF VIEW
one-596 10240M 2 excl
one-596@0 10240M 2 yes
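Take a second snapshot, presumably:
$ onevm disk-snapshot-create 596 0 snap2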
ONE CLI POINT OF VIEW
VM DISKS
ID DATASTORE TARGET IMAGE SIZE TYPE SAVE
0 BITED-1526 sda stefan-snapshot-issue-test01 -/10G rbd YES
1 - hda CONTEXT -/- - -
VM DISK SNAPSHOTS
AC ID DISK PARENT DATE SIZE NAME
0 0 -1 05/01 16:13:24 -/10G snap1
=> 1 0 -1 05/01 16:14:53 -/10G snap2
CEPH "rbd ls" POINT OF VIEW
one-596 10240M 2 excl
one-596@0 10240M 2 yes
one-596@1 10240M 2 yes
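Revert to snap1 (snapshot ID 0), presumably:
$ onevm disk-snapshot-revert 596 0 0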
ONE CLI POINT OF VIEW
VM DISKS
ID DATASTORE TARGET IMAGE SIZE TYPE SAVE
0 BITED-1526 sda stefan-snapshot-issue-test01 -/10G rbd YES
1 - hda CONTEXT -/- - -
VM DISK SNAPSHOTS
AC ID DISK PARENT DATE SIZE NAME
=> 0 0 -1 05/01 16:13:24 -/10G snap1
1 0 -1 05/01 16:14:53 -/10G snap2
CEPH "rbd ls" POINT OF VIEW
one-596 10240M BITED-152641-TESTCLOUD/one-596-0:1@0 2
one-596-0:1 10240M 2
one-596-0:1@0 10240M 2 yes
one-596-0:1@1 10240M 2 yes
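Delete snap2 (snapshot ID 1), presumably:
$ onevm disk-snapshot-delete 596 0 1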
ONE CLI POINT OF VIEW
VM DISKS
ID DATASTORE TARGET IMAGE SIZE TYPE SAVE
0 BITED-1526 sda stefan-snapshot-issue-test01 -/10G rbd YES
1 - hda CONTEXT -/- - -
VM DISK SNAPSHOTS
AC ID DISK PARENT DATE SIZE NAME
=> 0 0 -1 05/01 16:13:24 -/10G snap1
CEPH "rbd ls" POINT OF VIEW
one-596 10240M BITED-152641-TESTCLOUD/one-596-0@0 2 excl
one-596-0 10240M 2
one-596-0@0 10240M 2 yes
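Take a third snapshot on top of the reverted state, presumably:
$ onevm disk-snapshot-create 596 0 snap3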
ONE CLI POINT OF VIEW
VM DISKS
ID DATASTORE TARGET IMAGE SIZE TYPE SAVE
0 BITED-1526 sda stefan-snapshot-issue-test01 -/10G rbd YES
1 - hda CONTEXT -/- - -
VM DISK SNAPSHOTS
AC ID DISK PARENT DATE SIZE NAME
0 0 -1 05/01 16:13:24 -/10G snap1
=> 1 0 -1 05/01 16:20:02 -/10G snap3
CEPH "rbd ls" POINT OF VIEW
one-596 10240M BITED-152641-TESTCLOUD/one-596-0@0 2 excl
one-596@1 10240M BITED-152641-TESTCLOUD/one-596-0@0 2 yes
one-596-0 10240M 2
one-596-0@0 10240M 2 yes
So at this point there is a "dependency" between the snapshots which is not visible in the CLI/Sunstone. No error is logged when trying to delete the snapshot (other than "Error" in the VM template).
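The dependency can be confirmed from the Ceph side (a sketch using the names from the output above) by listing the children of the protected snapshot:

$ rbd children BITED-152641-TESTCLOUD/one-596-0@0

which should report BITED-152641-TESTCLOUD/one-596. As long as that child exists, Ceph refuses to remove the snapshot, which is what makes the ONE snapshot delete fail.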