
Implement the ceph osd destroy feature in ceph-deploy. #254

Closed

Conversation

Vicente-Cheng
Contributor

Feature:
    1. handle an OSD that is already out
    2. handle an OSD that does not belong to any hostname
    3. handle an OSD that is not in the acting set

Usage: ceph-deploy osd destroy <hostname> --osd-id <osd_id>

I have tested this on CentOS 6/7 and Ubuntu 12.04/14.04, and it works
normally. This feature helps remove an OSD quickly, which may be helpful
for newcomers to Ceph (see the sketch of the removal sequence below).

If you have any questions, feel free to let me know.
Thanks!

Signed-off-by: Vicente Cheng <freeze.bilsted@gmail.com>
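For reference, here is a minimal sketch (not the PR's actual code) of the manual removal sequence that "ceph-deploy osd destroy" automates, following the standard OSD removal steps from the Ceph documentation; the destroy_osd helper and the subprocess-based approach are illustrative only:

    # Illustrative sketch: the manual steps "ceph-deploy osd destroy"
    # is meant to automate, per the standard Ceph OSD removal procedure.
    import subprocess

    def destroy_osd(osd_id):
        """Take an OSD out of the cluster and remove all traces of it."""
        osd_name = 'osd.%d' % osd_id
        # Mark the OSD out so its data is rebalanced to other OSDs.
        subprocess.check_call(['ceph', 'osd', 'out', str(osd_id)])
        # Stop the daemon on its host (sysvinit shown; init systems vary).
        subprocess.check_call(['service', 'ceph', 'stop', osd_name])
        # Remove it from the CRUSH map, delete its auth key, and
        # finally remove the OSD itself from the cluster.
        subprocess.check_call(['ceph', 'osd', 'crush', 'remove', osd_name])
        subprocess.check_call(['ceph', 'auth', 'del', osd_name])
        subprocess.check_call(['ceph', 'osd', 'rm', str(osd_id)])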

@ceph-jenkins
Collaborator

Can one of the admins OK this pull request so I can run a build?

@codenrhoden
Contributor

I'll be looking at this during the next week. Definitely want to work with you to get this feature added.

@Vicente-Cheng
Contributor Author

@trhoden
That sounds good.
I look forward to working together to complete this feature!

'osd',
'tree',
'--format=json',
]
Contributor

After looking at this for a bit, I have a few suggestions/requests. Starting here, I don't think we need to call ceph osd tree explicitly. This osd.py source file already contains an osd_tree() function that can be reused.

I also think it would be worthwhile to look at the code in osd_list(), specifically the part about gathering the osd tree from a monitor node. This code, as written, goes to each server hosting an OSD and calls "ceph osd tree". However, it is possible that OSD nodes do not have the cephx admin key that allows that command to work. It would be better to gather the osd tree once from a monitor node up in destroy(), before going into the loop.
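To illustrate the suggestion, a hedged sketch of destroy() fetching the tree once from a monitor before the per-host loop; osd_tree() is the existing helper named above, but its exact signature is assumed here, and get_first_mon, get_connection, find_osd_in_tree, and remove_osd are hypothetical stand-ins:

    # Sketch only: gather the osd tree once, from a monitor node that
    # holds the cephx admin key, before iterating over the OSD hosts.
    def destroy(args, cfg):
        mon_host = get_first_mon(args)       # hypothetical helper
        conn = get_connection(mon_host)      # hypothetical helper
        # osd_tree() is the existing helper in this osd.py; its exact
        # signature is assumed here.
        tree = osd_tree(conn, args.cluster)
        conn.exit()

        for hostname, osd_id in args.osds:   # hypothetical argument shape
            # Look each OSD up in the already-fetched tree instead of
            # running "ceph osd tree" on the OSD host, which may lack
            # the admin key.
            node = find_osd_in_tree(tree, osd_id)  # hypothetical helper
            remove_osd(hostname, osd_id, node)     # hypothetical helper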

Contributor Author

I completely agree with your opinion.
It sounds more sensible to use the osd_tree() function.

OK, I will check osd_list() and osd_tree() first.
So the better way is to gather the osd tree once from a monitor node, via osd_list()/osd_tree(), when the destroy() function is called?

@codenrhoden
Contributor

@Vicente-Cheng I wanted to point you to http://tracker.ceph.com/issues/7454, which has some details about how we ultimately want to roll this capability into Ceph.

A lot of the logic contained in this PR can be in ceph-disk instead. ceph-deploy uses ceph-disk where it can, so it doesn't have to concern itself with too many low-level disk details. We would like to remove OSDs via ceph-disk as well, then ultimately have ceph-deploy use the relevant ceph-disk command instead.

This also makes a lot of the logic required to remove an OSD re-usable by other tools besides ceph-deploy.
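As an illustration of that direction, a sketch of ceph-deploy delegating removal to a then-proposed ceph-disk subcommand on the remote host. The "destroy" subcommand and its --osd-id flag are the proposal under discussion, not an existing interface, and hosts.get()/process.run() are used here as the connection helpers ceph-deploy generally relies on:

    # Sketch only: ceph-deploy delegating OSD removal to a proposed
    # "ceph-disk destroy" on the remote host. The ceph-disk subcommand
    # and its --osd-id flag did not exist yet at the time of this
    # discussion; they are illustrative.
    from ceph_deploy import hosts
    from remoto import process

    def destroy_via_ceph_disk(hostname, osd_id, username=None):
        # Connect to the remote host and run the (proposed) command there,
        # keeping the low-level disk logic inside ceph-disk itself.
        distro = hosts.get(hostname, username=username)
        process.run(
            distro.conn,
            ['ceph-disk', 'destroy', '--osd-id', str(osd_id)],
        )
        distro.conn.exit()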

@Vicente-Cheng
Contributor Author

@trhoden
OK, I will study this issue and its discussion (with related issues) and think about how to implement this feature via ceph-disk.

@Vicente-Cheng
Contributor Author

@trhoden
After reading the mailing-list discussion about this feature, do I need to rework it?

The split into the two subcommands, deactivate and destroy, makes sense to me.
I will implement deactivate first.

    Use the osd_tree() function to gather the osd tree instead of
    calling "ceph osd tree" directly.

Signed-off-by: Vicente Cheng <freeze.bilsted@gmail.com>
@Vicente-Cheng
Contributor Author

@trhoden
I removed the component that called "ceph osd tree" manually; the osd tree is now gathered via the osd_tree() function.

As we discussed, I plan to give "ceph-deploy osd" two subcommands, "deactivate" and "destroy".
"destroy" will depend on "deactivate". What do you think? (See the parser sketch after this comment.)

While you work on "deactivate", I am going to work on ceph-disk "destroy".
When these two subcommands are complete, I will modify ceph-deploy to use them.
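A minimal argparse sketch (not ceph-deploy's actual parser wiring) of the proposed two-subcommand split, with destroy understood to imply a prior deactivate; option names mirror the usage string above but are illustrative:

    # Standalone sketch of the proposed "deactivate"/"destroy" split.
    import argparse

    def make_parser():
        parser = argparse.ArgumentParser(prog='ceph-deploy osd')
        sub = parser.add_subparsers(dest='subcommand')

        deactivate = sub.add_parser(
            'deactivate', help='stop an OSD and mark it out of the cluster')
        deactivate.add_argument('host')
        deactivate.add_argument('--osd-id', type=int, required=True)

        # "destroy" depends on "deactivate": it would run the deactivate
        # steps first, then remove the OSD from the cluster permanently.
        destroy = sub.add_parser(
            'destroy', help='deactivate an OSD, then remove it permanently')
        destroy.add_argument('host')
        destroy.add_argument('--osd-id', type=int, required=True)
        return parser

    if __name__ == '__main__':
        print(make_parser().parse_args())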

@ghost

ghost commented Oct 5, 2015

@Vicente-Cheng closing because I'm assuming your latest contribution to ceph-disk addresses the same problem. Feel free to re-open if I'm mistaken.

@ghost closed this Oct 5, 2015
@Vicente-Cheng
Contributor Author

@dachary Sure, I will complete the ceph-disk feature first.
Then ceph-deploy can use the related ceph-disk functionality to finish this.
