Currently, if we try to add a base_bdev to an existing raid1 bdev, the following error occurs:
[2024-04-22 22:19:55.277481] bdev_raid.c:3292:raid_bdev_add_base_device: *ERROR*: no empty slot found in raid bdev 'test' for new base bdev 'local3'
It would be a nice addition to have the freedom to increase the number of available slots when a base_bdev is added and no empty slot is available, or to allow pre-creating some empty slots at raid1 creation. The current workaround is to create the raid with some temporary volumes of the same size and then remove them to leave empty slots behind, but that is neither practical nor efficient.
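For illustration, the current workaround might look roughly like this with SPDK's rpc.py (bdev names, sizes, and the exact option spelling are assumptions to be checked against the SPDK version in use; this is a sketch of the workaround, not a recommended procedure):

```shell
# Create a throwaway bdev of the same size as the real member
# (name "tmp0" and the 1024 MiB / 512 B geometry are hypothetical).
./scripts/rpc.py bdev_malloc_create -b tmp0 1024 512

# Create the raid1 with the real member plus the temporary one.
./scripts/rpc.py bdev_raid_create -n test -r raid1 -b "local1 tmp0"

# Remove the temporary member to leave an empty slot behind;
# that slot can later take the real new member.
./scripts/rpc.py bdev_raid_remove_base_bdev tmp0
```

The point of the feature request is to make the `bdev_malloc_create`/`bdev_raid_remove_base_bdev` detour unnecessary.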
Example use cases
In the context of a Kubernetes CSI driver using SPDK raid1 and NVMe-oF over RDMA to replicate data/lvols across SPDK storage nodes, I would like to be able to:
Migrate a raid1 replica between SPDK storage nodes (for maintenance) by creating a new replica on another node, waiting for the rebuild process to finish, then removing the old replica (on the node that is going offline for maintenance).
Create a snapshot system that uses a clone (mirror) created from an empty raid1 base_bdev slot (this avoids blobstore snapshots, which degrade random I/O on the parent disk due to copy-on-write):
Add a base_bdev to the raid1
Wait for the rebuild
Remove the base_bdev from the raid1 (this detached copy is the snapshot)
Clone (on demand) => (dd) using an offset to skip the superblock, into a new raid1 bdev
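Assuming empty slots were available, the snapshot flow above might be scripted roughly as follows (the rpc.py sub-commands exist in SPDK, but the rebuild-polling logic, bdev names, and the superblock offset are assumptions that would need checking against the actual SPDK version):

```shell
# 1. Attach the future snapshot bdev to a free raid1 slot
#    (raid name "test" and bdev name "snap0" are hypothetical).
./scripts/rpc.py bdev_raid_add_base_bdev test snap0

# 2. Poll until the rebuild has finished (the exact JSON fields
#    reported by bdev_raid_get_bdevs may differ between versions).
while ./scripts/rpc.py bdev_raid_get_bdevs online | grep -q rebuild; do
    sleep 5
done

# 3. Detach it again: the detached bdev now holds a point-in-time copy.
./scripts/rpc.py bdev_raid_remove_base_bdev snap0

# 4. On demand, clone the data into a new bdev, skipping the raid
#    superblock at the start of the device ($SB_BYTES is a placeholder
#    for the superblock size, which is not specified here).
dd if=/dev/snap0 of=/dev/clone0 bs=1M skip="$SB_BYTES" iflag=skip_bytes
```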
It would be a nice addition to have the freedom to increase the number of available slots when a base_bdev is added and no empty slot is available, or to allow pre-creating some empty slots at raid1 creation.
These are two very different functionalities. Growing an array is simple for raid1 (and maybe concat), but for raid levels with striping it will require changing the data layout. I would prefer not to make a special implementation only for raid1, because sooner or later we will probably want to support it for the other levels too, and that may require considerable effort.
On the other hand, allowing a raid bdev to be created degraded (with some slots empty) should be much simpler. If that is good enough for you, I think we should start with this feature.
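If the degraded-creation route is taken, the creation RPC could conceivably grow an option along these lines (entirely hypothetical syntax, shown only to illustrate the idea of reserving empty slots; no such flag exists today):

```shell
# Hypothetical: create a raid1 sized for two members but supply only one,
# leaving one empty slot for a future bdev_raid_add_base_bdev call.
./scripts/rpc.py bdev_raid_create -n test -r raid1 --num-base-bdevs 2 -b "local1"
```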
Having some empty slots provisioned in advance, ready to be used on demand, would fill my needs and allow me to proceed with the use cases I described, so I am good with it!
Being able to "bootstrap" an online raid1 with only one replica and some available empty slots would also be nice.
Thanks for the fast feedback on this feature request!