ceph-disk: ability to use a different cluster name with dmcrypt #11786
Conversation
```diff
@@ -2377,13 +2379,14 @@ def set_or_create_partition(self):
                 self.args.lockbox)
             self.partition = self.create_partition()

-    def create_key(self):
+    def create_key(self, cluster):
```
There is no need to require `cluster` here; you can rely on `self.args`, since that will have the arguments passed in: `self.args.cluster`.
```diff
         key_size = CryptHelpers.get_dmcrypt_keysize(self.args)
         key = open('/dev/urandom', 'rb').read(key_size / 8)
         base64_key = base64.b64encode(key)
         command_check_call(
             [
                 'ceph',
+                '--cluster', cluster,
```
You would be able to just use `self.args.cluster` here, which already defaults to `ceph` if not explicitly defined.
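The reviewer's suggestion could be sketched like this (a simplified stand-in, not the real ceph-disk code: the `Prepare` class, the fixed key size, and the skipped `command_check_call` are placeholders):

```python
# Hedged sketch of the reviewer's suggestion: instead of threading a
# `cluster` parameter through create_key(), rely on self.args.cluster,
# which argparse already defaults to 'ceph'.
import argparse
import base64
import os


class Prepare(object):
    """Stand-in for the real ceph-disk class; only create_key() is sketched."""

    def __init__(self, args):
        self.args = args

    def create_key(self):
        # real code: key_size = CryptHelpers.get_dmcrypt_keysize(self.args)
        key_size = 256
        key = os.urandom(key_size // 8)
        base64_key = base64.b64encode(key)
        # real code shells out via
        # command_check_call(['ceph', '--cluster', self.args.cluster, ...])
        return self.args.cluster, base64_key


parser = argparse.ArgumentParser()
parser.add_argument('--cluster', metavar='NAME', default='ceph',
                    help='ceph cluster name (default: ceph)')

cluster, _ = Prepare(parser.parse_args([])).create_key()  # no flag given
```

With no `--cluster` flag on the command line, `cluster` resolves to `'ceph'`; passing `['--cluster', 'foo']` would yield `'foo'` without any signature change.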
Would you please file a ticket at http://tracker.ceph.com for this, and add the URL of the ticket to the commit log, so we can track this and ensure it is backported to Jewel eventually?
@ktdreyer sure, see: http://tracker.ceph.com/issues/17821
I'll update the PR with a definitive fix today.
@leseb could you put the line of
@tchaikov just did, thanks!
Please remove "DNM" from the title when this PR is ready for review and merging.
@ErwanAliasr1 and I reworked the patch, ready for review now :)
WDYT about this patch series, guys?
Would you please rebase this onto the latest master to ensure that Jenkins is happy with it?
@ktdreyer done
this needs rebasing (and the test fails)
```python
    metavar='NAME',
    default='ceph',
    help='ceph cluster name (default: ceph)',
)
```
Activate does not need a --cluster argument, because the cluster name can be determined with the find_cluster_by_uuid() function.
Alright, let me try again without this; I forgot the error that made us think we needed it.
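The lookup mentioned above can be illustrated roughly like this (a hypothetical sketch, not the actual ceph-disk implementation: it assumes each cluster has a `<confdir>/<name>.conf` whose `[global]` section declares the fsid):

```python
# Hypothetical sketch of how activate can recover the cluster name without a
# --cluster flag: match the OSD's fsid against the conf files on disk.
# Not the actual ceph-disk implementation.
import configparser
import glob
import os


def find_cluster_by_uuid(fsid, confdir='/etc/ceph'):
    """Return the cluster name whose <name>.conf declares the given fsid."""
    for conf in glob.glob(os.path.join(confdir, '*.conf')):
        parser = configparser.ConfigParser()
        parser.read(conf)
        if parser.has_option('global', 'fsid') and \
                parser.get('global', 'fsid') == fsid:
            # the cluster name is the conf file basename without '.conf'
            return os.path.splitext(os.path.basename(conf))[0]
    return None
```

If `/etc/ceph/mycluster.conf` carries the matching fsid, activate can derive `mycluster` on its own, which is why an explicit option is unnecessary there.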
OK, testing is blocked by #12033; waiting for the backport to re-validate.
@dachary I don't have permission to push a branch to that repo, that's why I created a PR...
@leseb you have permission to push; all Ceph developers have permission and you're in the group. It must be something on your side that's not configured right.
@dachary please see:
Am I doing something wrong?
@dachary seems to be working now, branch pushed :)
@leseb nice :-) What was the problem? Do you have access to the sepia lab?
@dachary well, looks like I wasn't part of the Ceph team; then I got an email saying you added me and it was good :).
Oh, amazing. I was looking to verify if you were listed and mistook the "add" box for the "search" box... happy ending :-) Welcome to the Ceph team, always nice to have new members...
@leseb @dachary @GregMeno wip-yuri-testing2_2017_2_16
Will this be backported to Jewel by any chance?
@neurodrone this is the tracker: http://tracker.ceph.com/issues/17821. It was marked for backport to jewel and I have changed it to pending backport. It will get backported to Jewel.
With recent changes in ceph (ceph/ceph#11786), this change will allow ceph-deploy osd activate to complete without errors. Follow-on fix for http://tracker.ceph.com/issues/17821 Signed-off-by: Ganesh Mahalingam <ganesh.mahalingam@intel.com>
@dachary it would have been nice to default the cluster to 'ceph' here; we got a patch for ceph-deploy to add the cluster option there. It would also be nice to run the ceph-deploy suite for ceph-disk changes as well.
@dachary @vasukulkarni Should have updated both threads. #13527 is the patch submitted to ceph-disk to add the default.
@leseb I was wrong merging this pull request. I misread #11786 (comment) and thought the tests were successful. Instead they failed and, as a result, ceph-disk is now broken in master. Could you please fix this ASAP?
@leseb of course, entirely my fault, please accept my apologies
@dachary @leseb Thanks for the update. I believe that, with this, ceph/ceph-deploy#430 will be needed for ceph-deploy to work. @vasukulkarni
@dachary No worries :)
@GregMeno ceph-disk activate does not have a --cluster argument, but the function implementing it expects one. It does not show in the ceph-disk suite output because the part that fails is run via either systemd or udev.
Prior to this commit we were not able to configure an OSD using dmcrypt
on a cluster with a name other than 'ceph'. Add the --cluster command
line option to fix this.
Signed-off-by: Sébastien Han seb@redhat.com