
doc: restructure bluestore migration instructions #17603

Merged
liewegas merged 1 commit into ceph:master from liewegas:wip-migration-twiddle on Sep 8, 2017

@liewegas (Member) commented Sep 8, 2017:

Signed-off-by: Sage Weil <sage@redhat.com>


while ! ceph osd safe-to-destroy $(ceph osd ls-tree $NEWHOST); do sleep 60 ; done
If everything looks good, jump directly to the "Wait for data
migration to complete" step below and proceed from there to clean up
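
(For context on how that quoted loop composes: `ceph osd ls-tree` expands to the ids of the OSDs under the named CRUSH node, and `ceph osd safe-to-destroy` succeeds only once all of them can be removed without risking data. A hedged sketch of the surrounding step; the `ceph osd out` line is an assumption about the preceding context, not part of the quoted hunk:)

```bash
# Mark every OSD on the host "out" so its data drains to other hosts
# (assumed preceding step, not shown in the quoted hunk).
ceph osd out $(ceph osd ls-tree $NEWHOST)

# Block until all of those OSDs report safe-to-destroy, i.e. their PGs
# are fully replicated elsewhere; re-check once a minute.
while ! ceph osd safe-to-destroy $(ceph osd ls-tree $NEWHOST); do
    sleep 60
done
```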

@liewegas (Author, Member) commented Sep 8, 2017:

@alfredodeza is rst smart enough to let us make a link directly to the 5th item on the list below (vs. a section heading or whatever, like we usually do)?
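
(For what it's worth, a hedged sketch of one way Sphinx can do this; the label name and step text here are made up. A label can be placed inside the list item itself and referenced with `:ref:` as long as the link text is given explicitly, since Sphinx can only auto-generate link text for labels that precede a section title:)

```rst
#. .. _wait-for-data-migration:

   Wait for data migration to complete.

.. elsewhere on the page:

jump directly to :ref:`the data-migration step <wait-for-data-migration>`
```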

@@ -116,30 +116,52 @@ to evacuate an entire host in order to use it as a spare, then the
 conversion can be done on a host-by-host basis with each stored copy of
 the data migrating only once.

-#. Identify an empty host. Ideally the host should have roughly the
+First, you need an empty host that has no data. There are two ways to do this:
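
(A quick hedged aside on verifying that precondition, using a command already quoted elsewhere in this PR; `$HOST` is a placeholder:)

```console
$ ceph osd ls-tree $HOST    # lists the host's OSD ids; empty output means
                            # no OSDs are deployed there (assumes the host
                            # already exists in the CRUSH map)
```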

@djgalloway (Contributor) commented Sep 8, 2017:

How would you feel about making this bold or adding a header like "Identifying an empty host"?


Otherwise, for a new host, we can start the conversion process from

@djgalloway (Contributor) commented Sep 8, 2017:

And adding another header above this. Maybe "Bluestore OSD creation"

@liewegas liewegas force-pushed the liewegas:wip-migration-twiddle branch from c2f9afd to 25900cc Sep 8, 2017

@liewegas (Member, Author) commented Sep 8, 2017:

How about this?

@djgalloway (Contributor) commented Sep 8, 2017:

Maybe I'm being pedantic, but I'd prefer bold over italics.

doc: restructure bluestore migration instructions
Signed-off-by: Sage Weil <sage@redhat.com>

@liewegas liewegas force-pushed the liewegas:wip-migration-twiddle branch from 25900cc to 9fa4901 Sep 8, 2017

@liewegas (Member, Author) commented Sep 8, 2017:

yeah me too, updated again

@liewegas liewegas merged commit 46ba645 into ceph:master Sep 8, 2017

2 of 5 checks passed:

- Docs: build check: building (in progress)
- make check: running make check (in progress)
- make check (arm64): running make check (in progress)
- Signed-off-by: all commits in this PR are signed (passed)
- Unmodified Submodules: submodules for project are unmodified (passed)

@liewegas liewegas deleted the liewegas:wip-migration-twiddle branch Sep 8, 2017


If you would like to use an existing host
that is already part of the cluster, and there is sufficient free

@vasukulkarni (Member) commented Sep 8, 2017:

Does this mean the host can be any non-OSD host (mon, client, etc.)? And is it still applicable if the host already has some OSD ids and one wants to add additional OSDs?

@djgalloway (Contributor) commented Sep 8, 2017:

I don't see why not. But if there are existing OSDs and unused disks, and you want to use the unused disks as bluestore OSDs, one would have to decide whether to keep the existing filestore OSDs or convert them to bluestore.
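
(For the unused-disk case, a hedged example of creating a bluestore OSD with Luminous-era tooling; the device path is a placeholder:)

```console
# Provision one unused disk as a new bluestore OSD, leaving the host's
# existing filestore OSDs untouched:
$ ceph-volume lvm create --bluestore --data /dev/sdX
```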

@vasukulkarni (Member) commented Sep 8, 2017:

Rereading, I don't understand why "whole host replacement" should use "existing host" instead of "new host". Since this doc is about bluestore migration, there can only be two cases: (a) convert existing OSDs, or (b) add new hosts with OSDs. So even "existing host" should fall under (b).

@liewegas (Author, Member) commented Sep 8, 2017:

If you have a 10-node cluster and no extra hardware, the best strategy is probably to (1) evacuate one host, then (2) do this host-by-host conversion for the remaining 9 hosts, and finally (3) add the now-empty host back in.
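
(A hedged sketch, not from the PR or the doc, of that rotation; the host names and the reprovisioning step are placeholders:)

```bash
SPARE=host10                  # evacuated up front, so it starts empty
for HOST in host1 host2 host3 host4 host5 host6 host7 host8 host9; do
    # Reprovision $SPARE's disks as bluestore OSDs here (not shown),
    # then drain $HOST by marking its OSDs out:
    ceph osd out $(ceph osd ls-tree $HOST)
    # Wait until every OSD on $HOST is safe to remove:
    while ! ceph osd safe-to-destroy $(ceph osd ls-tree $HOST); do
        sleep 60
    done
    SPARE=$HOST               # the drained host becomes the next spare
done
# Finally, reprovision the last $SPARE and bring it back into the cluster.
```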
