FEATURE: DRBD_META_SIZE configurable at cluster level and at instance creation time #1641
I remember we discussed this problem during the last GanetiCon. Simply changing the value to something bigger was deemed the right solution. However, I recall it was unclear how to handle the upgrade path (or more specifically: how to handle a possible downgrade path, once existing devices have been resized during the upgrade). Does anyone have any ideas on how to handle that gracefully?
Since we modify the DRBD_META_SIZE constant regularly on our cluster, I can share our experience: our production cluster, in service for 10 years, currently has many different DRBD metadata disk sizes, because we regularly add disks with different metadata sizes. So I think we don't need a specific upgrade/downgrade path; the points to take into consideration are detailed in my next comment.

This is the script we use on our cluster, if you want to run some lab tests: gnt-drbd-meta-disk-size
In practice, once a DRBD volume is created with a specific metadata size, that size is kept for the whole lifetime of the volume, until the DRBD device is removed. So if you need to change the DRBD metadata size of an existing volume, you need to recreate the device, as sketched below.
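A hedged sketch of what that recreation could look like with standard Ganeti commands, assuming DRBD_META_SIZE has already been raised on all nodes (the subcommands are real, but using them as a metadata-resize procedure is an assumption on my part, and `my.instance` is a placeholder):

```bash
# Sketch (assumption): recreate each side of the DRBD pair so the
# new metadata size is applied.

# Recreate the disks on the secondary node; the recreated volume
# gets the new metadata size:
gnt-instance replace-disks -s my.instance

# Swap the roles so the old primary becomes the secondary:
gnt-instance migrate my.instance

# Recreate the disks on the new secondary (the former primary):
gnt-instance replace-disks -s my.instance
```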
So in practice there is no problem with the upgrade/downgrade path. Ideally, the DRBD metadata disk size needs to be configurable at cluster level and at instance creation time. That could be managed with Ganeti options like the hypothetical ones below.
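A hypothetical sketch of what such options could look like (none of these flags exist today; `--drbd-meta-size` and `--drbd-max-size` are invented names for the proposal, not real Ganeti options):

```bash
# Hypothetical: cluster-wide default, set at init or modify time
gnt-cluster modify --drbd-meta-size 257

# Hypothetical alternative: configure the maximum disk size instead,
# and let Ganeti derive the metadata size as 1 + max_size / 32768
gnt-cluster modify --drbd-max-size 8T
```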
Note: the real metadata disk size needs to be calculated from the maximum disk size, which is limited to 32T for DRBD 8.4 (and to the PB range for DRBD 9).
The current value of `DRBD_META_SIZE = 128` in `lib/_constants.py` permits a maximum disk size of about 4 TB. On our cluster we change this limit to be able to create disks bigger than 4 TB, and especially to be able to grow disks beyond the 4 TB hard limit.
A few years ago we used a hook script that calculated an adapted DRBD_META_SIZE depending on the DRBD disk size we needed, and then replaced the value directly in `_constants.py`. We later understood that DRBD_META_SIZE has to be a globally fixed value in order to be able to grow disks later. So nowadays we use a bash script that modifies DRBD_META_SIZE on all nodes, run once on the whole cluster and again at node creation (a sketch is shown below).
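A minimal sketch of that kind of script, under stated assumptions (the path to the generated `_constants.py` differs between installations, and the `ganeti` service name matches Debian packaging; both are assumptions here, and the value 257 is just an example):

```bash
#!/bin/bash
# Sketch: set DRBD_META_SIZE to the same fixed value on every node.
NEW_SIZE=257                                              # metadata size in MB
CONSTANTS=/usr/share/ganeti/default/ganeti/_constants.py  # assumed path

for node in $(gnt-node list --no-headers -o name); do
    ssh "$node" "sed -i 's/^DRBD_META_SIZE = .*/DRBD_META_SIZE = ${NEW_SIZE}/' ${CONSTANTS} \
        && systemctl restart ganeti"
done
```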
The hardcoded 128M value of DRBD_META_SIZE is clearly a strong limitation for a Ganeti cluster. We manage volumes of up to 32 TB on our cluster, so I think this limit should be configurable at cluster level.
The maximum DRBD disk size depends on the DRBD metadata size, so it could be more intuitive to have a configurable maximum DRBD size and then calculate the corresponding DRBD metadata size:

```
DRBD_META_SIZE = 1 + DRBD_MAX_SIZE / 32768
```
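For illustration, a small helper implementing this formula (sizes in MB; the function name is mine, not part of Ganeti):

```bash
# Compute the external DRBD metadata size (in MB) needed for a given
# maximum disk size (in MB): 1 MB of metadata per 32768 MB of data,
# plus 1 MB of overhead.
drbd_meta_size_mb() {
    local max_size_mb=$1
    echo $(( max_size_mb / 32768 + 1 ))
}

drbd_meta_size_mb $(( 4 * 1024 * 1024 ))   # 4 TB  -> 129 (just over the 128 default)
drbd_meta_size_mb $(( 8 * 1024 * 1024 ))   # 8 TB  -> 257
drbd_meta_size_mb $(( 32 * 1024 * 1024 ))  # 32 TB -> 1025
```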
A good addition (or alternative) would be the ability to specify a maximum DRBD disk size (and hence the DRBD metadata size) per disk at instance creation time, for example with a new `maxsize` disk parameter:

```
gnt-instance add -o image -t drbd --disk 0:size=20G --disk 1:size=2T,maxsize=8T -n node1:node2 my.instance
```
With the formula above, disk 1 of this instance would get:

```
disk_drbd_meta_size = 8 * 1024^2 / 32768 + 1 = 257 MB
```
Being able to modify the DRBD metadata size at cluster level and at instance creation time would allow Ganeti to be more scalable at the DRBD disk level.