Implement the ceph osd destroy feature in ceph-deploy. #254
Conversation
Vicente-Cheng commented Dec 4, 2014
Feature:
1. handle an OSD that is already out
2. handle an OSD that does not belong to any hostname
3. handle an OSD that is not in the acting set

Usage: ceph-deploy osd destroy <hostname> --osd-id <osd_id>

I have tested this on CentOS 6/7 and Ubuntu 12.04/14.04, and it works normally. This feature helps remove an OSD quickly, and it may be helpful for newcomers to Ceph. If you have any questions, feel free to let me know. Thanks!

Signed-off-by: Vicente Cheng <freeze.bilsted@gmail.com>
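For context, the manual procedure this command is intended to automate looks roughly like the following standard OSD removal steps (osd.0 is an example id; the exact service command varies by distribution and init system):

```shell
# Mark the OSD out so data migrates off it
ceph osd out 0

# Stop the OSD daemon on its host (init-system dependent)
sudo service ceph stop osd.0

# Remove it from the CRUSH map, delete its auth key, and remove the OSD
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd rm 0
```

These commands require a live cluster and admin keyring, which is why wrapping them in a single ceph-deploy subcommand is attractive.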
Can one of the admins OK this pull request so I can run a build?
I'll be looking at this during the next week. Definitely want to work with you to get this feature added.
@trhoden
'osd',
'tree',
'--format=json',
]
After looking at this for a bit, I have a few suggestions/requests. Starting here, I don't think we need to call ceph osd tree explicitly; this osd.py source file already contains an osd_tree() function that can be re-used.
I also think it would be worthwhile to look at the code in osd_list(), specifically the part that gathers the osd tree from a monitor node. The code as written goes to each server hosting an OSD and calls "ceph osd tree". However, it is possible that OSD nodes do not have the cephx admin key that allows that command to work. It would be better to gather the osd tree once from a monitor node up in destroy(), before going into the loop.
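Once the tree has been gathered a single time from a monitor, the per-OSD lookups can be done locally on the parsed JSON. A minimal sketch of that idea (SAMPLE_TREE and find_osd_host are illustrative, not the actual ceph-deploy API; real `ceph osd tree --format=json` output carries more fields):

```python
import json

# Hypothetical, trimmed-down snippet of `ceph osd tree --format=json` output.
SAMPLE_TREE = json.dumps({
    "nodes": [
        {"id": -2, "name": "node1", "type": "host", "children": [0]},
        {"id": 0, "name": "osd.0", "type": "osd", "status": "up"},
    ]
})

def find_osd_host(tree_json, osd_id):
    """Return (hostname, status) for the given OSD id, or (None, None)
    if the OSD does not belong to any host in the tree."""
    tree = json.loads(tree_json)
    nodes_by_id = {n["id"]: n for n in tree["nodes"]}
    for node in tree["nodes"]:
        if node.get("type") != "host":
            continue
        for child_id in node.get("children", []):
            child = nodes_by_id.get(child_id)
            if child and child.get("type") == "osd" and child["id"] == osd_id:
                return node["name"], child.get("status")
    return None, None
```

A lookup that returns (None, None) corresponds to the "osd did not belong to any hostname" case the PR description mentions.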
I completely agree with your opinion. It sounds more sensible to use the osd_tree() function.
OK, I will check osd_list() and osd_tree() first.
So the better approach is to gather the osd tree once from a monitor node when destroy() is called, via osd_list()/osd_tree()?
@Vicente-Cheng I wanted to point you to http://tracker.ceph.com/issues/7454, which has some details about how we ultimately want to roll this capability into Ceph. A lot of the logic contained in this PR can live in ceph-disk instead. ceph-deploy uses ceph-disk where it can, so it doesn't have to concern itself with too many low-level disk details. We would like to remove OSDs via ceph-disk as well, then ultimately have ceph-deploy use the relevant ceph-disk command instead. This also makes a lot of the logic required to remove an OSD re-usable by tools other than ceph-deploy.
@trhoden
@trhoden The two subcommands, deactivate and destroy, make sense to me.
Using the osd_tree() function to complete my goal. Signed-off-by: Vicente Cheng <freeze.bilsted@gmail.com>
@trhoden I plan to give "ceph-deploy osd" the two subcommands from our discussion, "deactivate" and "destroy". While you work on "deactivate", I am going to work on ceph-disk "destroy".
@Vicente-Cheng Closing because I'm assuming your latest contribution to ceph-disk addresses the same problem. Feel free to re-open if I'm mistaken.
@dachary Sure, I will complete the ceph-disk feature first.