
storage: rework storage pool + volume updating #3835

Merged
merged 7 commits into lxc:master from brauner:2017-09-22/storage_api_property_update on Sep 23, 2017

Conversation

@brauner
Member

brauner commented Sep 22, 2017

Closes #3834.

Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>

@brauner brauner force-pushed the brauner:2017-09-22/storage_api_property_update branch 2 times, most recently from 851ecfd to 6af61d7 Sep 22, 2017
@brauner

Member Author

brauner commented Sep 22, 2017

The ceph implementation for changing volume.block.filesystem in @albertodonato's branch is missing some logic. ceph requires that the image is kept around while it has dependent snapshots. So when you force an ImageDelete() on container creation after a volume.block.filesystem update and the original container still exists, the image is only marked as deleted, and the next ImageCreate() request will restore that marked-as-deleted image, which still carries the filesystem from before the volume.block.filesystem change. That's why the ceph part of this patch still fails.

@brauner brauner force-pushed the brauner:2017-09-22/storage_api_property_update branch from 6af61d7 to 22b3228 Sep 22, 2017
@brauner

Member Author

brauner commented Sep 22, 2017

ceph: handle volume.block.filesystem update
The ceph implementation for changing volume.block.filesystem in the cherry-picked
commits is missing some logic. ceph requires that the image is kept around while
it has dependent snapshots. So when you force an ImageDelete() on container
creation after a volume.block.filesystem update and the original container still
exists, the image is only marked as deleted, and the next ImageCreate() request
will restore that marked-as-deleted image, which still carries the filesystem
from before the volume.block.filesystem change. That's why the ceph part of this
patch still fails.

To make my point a little more concrete:

lxc storage create pool ceph volume.block.filesystem=ext4
  - creates "pool/image_<fp>" + "pool/image_<fp>@readonly"
lxc launch images:alpine/edge a1 -s pool
  - clones "pool/image_<fp>@readonly"

lxc storage set pool volume.block.filesystem xfs
lxc launch images:alpine/edge a2 -s pool
  - zombifies "pool/image_<fp>" to "pool/zombie_image_<fp>"
  - tries to create xfs version of image with fp <fp>
    - checks for the existence of any zombified images
      - will detect that "pool/zombie_image_<fp>" exists
        - resurrects "pool/zombie_image_<fp>" to "pool/image_<fp>"
  - clones new container from "pool/image_<fp>"
  - calls xfs_admin for UUID generation
    - fails because clone is ext4
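
The failure in that trace boils down to the resurrect check matching on the fingerprint alone: any zombie image gets resurrected, no matter which filesystem it was formatted with. A minimal Go sketch of that old decision, with hypothetical helper names standing in for the pool's RBD operations (this is not LXD's actual API):

```go
package main

import "fmt"

// Hypothetical stand-ins for the pool's RBD operations; they only print
// what a real implementation would do.
func rbdExists(pool, vol string) bool { fmt.Printf("exists? %s/%s\n", pool, vol); return true }
func rbdRename(pool, from, to string) error {
	fmt.Printf("rename %s/%s -> %s/%s\n", pool, from, pool, to)
	return nil
}
func rbdCreate(pool, vol, fs string) error {
	fmt.Printf("create %s/%s with %s\n", pool, vol, fs)
	return nil
}

// Old behaviour: any zombie with a matching fingerprint is resurrected,
// regardless of the filesystem it carries, so an ext4 zombie comes back
// even though the pool now asks for xfs.
func imageCreate(pool, fingerprint, wantedFS string) error {
	zombie := "zombie_image_" + fingerprint
	if rbdExists(pool, zombie) {
		return rbdRename(pool, zombie, "image_"+fingerprint)
	}
	return rbdCreate(pool, "image_"+fingerprint, wantedFS)
}

func main() {
	// Resurrects the ext4 zombie even though xfs was requested.
	_ = imageCreate("pool", "<fp>", "xfs")
}
```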

My new logic changes the image deletion routine and the container-create-from-image
routine to:

ImageDelete()
- if (!has_dependent_snapshots())
	wipe_image_from_disk()
  else
  	zombify_image() {
		rename "pool/image_<fp>" to "pool/image_<fp>_<fstype>"
	}

ContainerCreateFromImage()
- if (zombified_image_with_fstype_exists())
	resurrect()

Backwards compatibility:
This change is fully backwards compatible. Images that were zombified before
this change will no longer be resurrected, since the new checks also require
the fs type suffix to be present (the reworked flow is sketched below).

Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
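
To make the reworked flow concrete, here is a minimal Go sketch against a tiny in-memory fake of the pool. The helper names are hypothetical, not LXD's actual API; the "image_<fp>_<fstype>" naming simply follows the pseudocode above:

```go
package main

import "fmt"

// Tiny in-memory fake of the pool's RBD volumes; the helpers below are
// hypothetical stand-ins, not LXD's actual functions.
var volumes = map[string]bool{}   // volume name -> exists
var snapshots = map[string]bool{} // volume name -> has dependent snapshots

func rbdExists(vol string) bool       { return volumes[vol] }
func rbdHasSnapshots(vol string) bool { return snapshots[vol] }
func rbdCreate(vol, fs string)        { volumes[vol] = true; fmt.Printf("create %s (%s)\n", vol, fs) }
func rbdDelete(vol string)            { delete(volumes, vol); fmt.Printf("delete %s\n", vol) }
func rbdRename(from, to string) {
	delete(volumes, from)
	volumes[to] = true
	fmt.Printf("rename %s -> %s\n", from, to)
}

// imageDelete wipes the image unless dependent snapshots force it to be
// kept; in that case it is zombified under a name recording its filesystem.
func imageDelete(fingerprint, fsType string) {
	image := "image_" + fingerprint
	if !rbdHasSnapshots(image) {
		rbdDelete(image)
		return
	}
	rbdRename(image, image+"_"+fsType)
}

// containerCreateFromImage only resurrects a zombie whose recorded
// filesystem matches the one the pool currently wants; otherwise it
// creates a fresh image with the wanted filesystem.
func containerCreateFromImage(fingerprint, wantedFS string) {
	image := "image_" + fingerprint
	if rbdExists(image + "_" + wantedFS) {
		rbdRename(image+"_"+wantedFS, image)
		return
	}
	rbdCreate(image, wantedFS)
}

func main() {
	// An ext4 image exists and a container still depends on it.
	rbdCreate("image_<fp>", "ext4")
	snapshots["image_<fp>"] = true

	imageDelete("<fp>", "ext4")             // kept around as image_<fp>_ext4
	containerCreateFromImage("<fp>", "xfs") // no xfs zombie, so a fresh xfs image is created
}
```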

@brauner brauner force-pushed the brauner:2017-09-22/storage_api_property_update branch 2 times, most recently from 7b4afba to 0ada7e9 Sep 22, 2017
@brauner

Member Author

brauner commented Sep 22, 2017

@stgraber, ready for review/merge.

@brauner brauner force-pushed the brauner:2017-09-22/storage_api_property_update branch from 0ada7e9 to d0c063c Sep 22, 2017
@brauner brauner changed the title from "storage: rework storage pool updating" to "storage: rework storage pool + volume updating" Sep 22, 2017
brauner and others added 7 commits Sep 22, 2017
Closes #3834.

Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
…stem has changed

Signed-off-by: Alberto Donato <alberto.donato@canonical.com>
Signed-off-by: Alberto Donato <alberto.donato@canonical.com>
ceph: handle volume.block.filesystem update

Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
Otherwise we will not be able to delete missing storage volumes.

Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
@brauner brauner force-pushed the brauner:2017-09-22/storage_api_property_update branch from d0c063c to 13bea7e Sep 22, 2017
@brauner

Member Author

brauner commented Sep 22, 2017

Test run finished all green. The Jenkins status shown here is wrong; here's the correct URL:

https://jenkins.linuxcontainers.org/view/LXD/view/LXD%20builds/job/lxd-github-pull-test/3164/

@stgraber stgraber merged commit 5497b1a into lxc:master Sep 23, 2017
4 of 5 checks passed:
- Testsuite: Test pending
- Branch target: Branch target is correct
- Signed-off-by: All commits signed-off
- continuous-integration/appveyor/pr: AppVeyor build succeeded
- continuous-integration/travis-ci/pr: The Travis CI build passed