Visually represent DISK IMAGE SNAPSHOT dependency for Ceph datastores which "ALLOW_ORPHANS" #2052

Closed
hydro-b opened this Issue May 2, 2018 · 4 comments

Comments

@hydro-b
Contributor

hydro-b commented May 2, 2018

Bug Report

Version of OpenNebula

  • 5.4.11

Component

  • Command Line Interface (CLI)
  • Storage & Images
  • Sunstone

Description

With storage systems that support "ALLOW_ORPHANS=YES", such as Ceph, there is no indication to the user that dependencies between snapshots can exist, as there is with "qcow2" for example. As long as you never "revert" to a VM DISK IMAGE snapshot there is indeed no dependency. But if at some point you revert to a snapshot and later take new snapshots based on it, dependencies do arise.
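
On the Ceph side such a dependency is visible directly on the images; a minimal sketch of how to check it (pool, image and snapshot names are placeholders):

$ rbd info <pool>/<image>              # a "parent:" line means this image is a clone of a snapshot
$ rbd children <pool>/<image>@<snap>   # lists the clones that depend on this snapshot
$ rbd snap ls <pool>/<image>           # lists the snapshots of an image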

Expected Behavior

Visually represent snapshot dependencies when they occur, using a tree-style representation like the one used for qcow2 snapshots.

Actual Behavior

Snapshots seem to be independent, but in fact are not.

How to reproduce

ONE CLI POINT OF VIEW

VM DISKS
ID DATASTORE TARGET IMAGE SIZE TYPE SAVE
0 BITED-1526 sda stefan-snapshot-issue-test01 -/10G rbd YES
1 - hda CONTEXT -/- - -

CEPH "rbd ls" POINT OF VIEW

one-596 10240M 2 excl
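
The snapshot operations in the steps below map to the OpenNebula CLI roughly as follows (a sketch; VM ID 596 and disk ID 0 are inferred from the rbd image name and the disk listing above):

$ onevm disk-snapshot-create 596 0 snap1   # take snapshot "snap1"
$ onevm disk-snapshot-create 596 0 snap2   # take snapshot "snap2"
$ onevm disk-snapshot-revert 596 0 0       # revert to snapshot ID 0 ("snap1")
$ onevm disk-snapshot-delete 596 0 1       # delete snapshot ID 1 ("snap2")
$ onevm disk-snapshot-create 596 0 snap3   # take snapshot "snap3" on top of the reverted state
$ onevm disk-snapshot-delete 596 0 0       # try to delete snapshot ID 0 ("snap1")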

  • take snapshot "snap1"

ONE CLI POINT OF VIEW

VM DISKS
ID DATASTORE TARGET IMAGE SIZE TYPE SAVE
0 BITED-1526 sda stefan-snapshot-issue-test01 -/10G rbd YES
1 - hda CONTEXT -/- - -

VM DISK SNAPSHOTS
AC ID DISK PARENT DATE SIZE NAME
=> 0 0 -1 05/01 16:13:24 -/10G snap1

CEPH "rbd ls" POINT OF VIEW
one-596 10240M 2 excl
one-596@0 10240M 2 yes

  • take snapshot "snap2"

ONE CLI POINT OF VIEW
VM DISKS
ID DATASTORE TARGET IMAGE SIZE TYPE SAVE
0 BITED-1526 sda stefan-snapshot-issue-test01 -/10G rbd YES
1 - hda CONTEXT -/- - -

VM DISK SNAPSHOTS
AC ID DISK PARENT DATE SIZE NAME
0 0 -1 05/01 16:13:24 -/10G snap1
=> 1 0 -1 05/01 16:14:53 -/10G snap2

CEPH "rbd ls" POINT OF VIEW
one-596 10240M 2 excl
one-596@0 10240M 2 yes
one-596@1 10240M 2 yes

  • revert to snapshot "snap1"

ONE CLI POINT OF VIEW
VM DISKS
ID DATASTORE TARGET IMAGE SIZE TYPE SAVE
0 BITED-1526 sda stefan-snapshot-issue-test01 -/10G rbd YES
1 - hda CONTEXT -/- - -

VM DISK SNAPSHOTS
AC ID DISK PARENT DATE SIZE NAME
=> 0 0 -1 05/01 16:13:24 -/10G snap1
1 0 -1 05/01 16:14:53 -/10G snap2
CEPH "rbd ls" POINT OF VIEW
one-596 10240M BITED-152641-TESTCLOUD/one-596-0:1@0 2
one-596-0:1 10240M 2
one-596-0:1@0 10240M 2 yes
one-596-0:1@1 10240M 2 yes

  • delete snapshot "snap2"

ONE CLI POINT OF VIEW
VM DISKS
ID DATASTORE TARGET IMAGE SIZE TYPE SAVE
0 BITED-1526 sda stefan-snapshot-issue-test01 -/10G rbd YES
1 - hda CONTEXT -/- - -

VM DISK SNAPSHOTS
AC ID DISK PARENT DATE SIZE NAME
=> 0 0 -1 05/01 16:13:24 -/10G snap1

CEPH "rbd ls" POINT OF VIEW
one-596 10240M BITED-152641-TESTCLOUD/one-596-0@0 2 excl
one-596-0 10240M 2
one-596-0@0 10240M 2 yes

  • create snapshot "snap3" (based on reverted snapshot "snap1")

ONE CLI POINT OF VIEW
VM DISKS
ID DATASTORE TARGET IMAGE SIZE TYPE SAVE
0 BITED-1526 sda stefan-snapshot-issue-test01 -/10G rbd YES
1 - hda CONTEXT -/- - -

VM DISK SNAPSHOTS
AC ID DISK PARENT DATE SIZE NAME
0 0 -1 05/01 16:13:24 -/10G snap1
=> 1 0 -1 05/01 16:20:02 -/10G snap3

CEPH "rbd ls" POINT OF VIEW
one-596 10240M BITED-152641-TESTCLOUD/one-596-0@0 2 excl
one-596@1 10240M BITED-152641-TESTCLOUD/one-596-0@0 2 yes
one-596-0 10240M 2
one-596-0@0 10240M 2 yes

  • Try to delete "snap1": Error message: rbd: snapshot '0' is protected from removal.

So at this point there is a "dependency" between the snapshots which is not visible in the CLI or Sunstone. No error is logged when trying to delete the snapshot (other than "Error" in the VM template).
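
The hidden dependency can be confirmed on the Ceph side; a sketch using the pool and image names from the rbd listings above:

$ rbd children BITED-152641-TESTCLOUD/one-596-0@0   # lists one-596, the working volume cloned from snapshot "0"
$ rbd snap rm BITED-152641-TESTCLOUD/one-596-0@0    # fails: snapshot '0' is protected from removal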

@atodorov-storpool

Contributor

atodorov-storpool commented May 2, 2018

+1
IMO there should be a configuration variable (probably in TM_MAD_CONF?) to trigger the tree-view visualization.
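
A minimal sketch of what that could look like in oned.conf, based on the stock ceph TM_MAD_CONF entry; SNAPSHOT_TREE_VIEW is purely hypothetical and does not exist today:

TM_MAD_CONF = [
    NAME = "ceph", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "YES",
    ALLOW_ORPHANS = "yes",
    SNAPSHOT_TREE_VIEW = "yes"    # hypothetical toggle for the tree-style visualization
]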

@vholer

Member

vholer commented Sep 27, 2018

I can see a problem. It looks like ONE's snapshot inventories, which are either tree-only or flat (list) only, don't match how CEPH snapshots are managed. They are a kind of mix of both, e.g. if I have 5 snapshots, revert to snapshot ID=2, and take another 5 snapshots, the hierarchy might look like:

 +
 |- 0
 |- 1
 |- 2 -+- 5
 |     |- 6
 |     |- 7
 |     |- 8
 |     \- 9
 |- 3
 \- 4

Rescheduling the issue.
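
Using the PARENT column already present in the VM DISK SNAPSHOTS listings above (which currently always reports -1), such a hierarchy could be encoded roughly as follows (a sketch of the idea, not current behavior):

ID PARENT
 0     -1
 1     -1
 2     -1
 3     -1
 4     -1
 5      2
 6      2
 7      2
 8      2
 9      2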

christian7007 added a commit to christian7007/one that referenced this issue Dec 13, 2018

xorel referenced this issue Dec 14, 2018

Closed

Can't remove CEPH snapshot #2723

@MedicMomcilo


MedicMomcilo commented Dec 14, 2018

Please also note that this means that if you ever revert to an earlier snapshot, it becomes unremovable.
So, even with a single snapshot, you wouldn't be able to remove it if you ever reverted to it.

christian7007 added a commit to christian7007/one that referenced this issue Dec 14, 2018

christian7007 self-assigned this Dec 20, 2018

rsmontero added a commit that referenced this issue Dec 24, 2018

B #2052: Add mixed mode for ALLOW_ORPHANS to accommodate Ceph snapshot dependencies
Co-authored-by: Christian González <cgonzalez@opennebula.systems>
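
Going by that commit message, the new mode would presumably be selected through the same ALLOW_ORPHANS attribute in the ceph TM_MAD_CONF entry sketched earlier, along the lines of (the exact value string is an assumption):

    ALLOW_ORPHANS = "mixed"    # new mode from the fix; value string assumed from the commit message
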
@jsfrerot


jsfrerot commented Jan 15, 2019

To reply to MedicMomcilo: not being able to remove a snapshot you reverted to also prevents you from resizing the volume for that VM. I ran into this problem, and the only way I was able to fix it was to manually flatten the working volume with Ceph commands; after that I could remove the snapshot and then resize the disk with OpenNebula.

Should I report a new bug with this information?
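
A sketch of that workaround, using the image names from the reproduction above (the exact sequence is an assumption, and flattening copies the parent data into the clone, so it costs time and space):

$ rbd flatten BITED-152641-TESTCLOUD/one-596   # detach the working volume from the snapshot it was cloned from
$ onevm disk-snapshot-delete 596 0 0           # the snapshot can now be deleted through OpenNebula
$ onevm disk-resize 596 0 <NEW_SIZE>           # ...and the disk resized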
