How to set up a RAID1 block device with SPDK
In this wiki we will describe how to set up a block device representing 2 logical volumes combined in a RAID1 mirror using SPDK. One of these volumes is local, i.e. stored on the same node as the block device, and the other one is remote.
The SPDK app can be controlled via JSON-RPC with the Python script rpc.py, as described in this doc. rpc.py can send commands to a remote SPDK instance too, but the SPDK app listens only on a Unix socket. So, when we have to control both the local and the remote node, we have 2 options:
- send commands to each SPDK app from its own node
- send commands to both instances from the same node, using a socket redirection with socat, for example like this:
socat TCP4-LISTEN:8000,fork,reuseaddr UNIX-CONNECT:/var/tmp/spdk.sock
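Under the hood, rpc.py frames each command as a JSON-RPC 2.0 request and writes it to the socket. As a minimal sketch (the envelope follows the JSON-RPC 2.0 spec that SPDK implements; bdev_get_bdevs is a standard SPDK method used here only as an example), building such a request looks like this:

```python
import json

def build_rpc_request(method, params=None, req_id=1):
    """Build the JSON-RPC 2.0 envelope that rpc.py sends over the socket."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

# The request behind `rpc.py bdev_get_bdevs`
payload = build_rpc_request("bdev_get_bdevs")
print(payload)
```

This is the same payload whether it travels over the local Unix socket or through the socat TCP bridge above; only the transport changes.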
The SPDK app can also take commands at startup via a JSON configuration file. This can be useful, for example, to make the SPDK app attach to the disk that will be used as the base bdev for the Blobstore.
First of all we have to execute the operations described in this getting_started guide (maybe this system_configuration page can be helpful too). The repo we will work on is this one, the branch is longhorn.
The SPDK app can be started with the following command:
./build/bin/spdk_tgt --json disk.json
where disk.json has this content:
{
"subsystems": [
{
"subsystem": "bdev",
"config": [
{
"method": "bdev_aio_create",
"params": {
"block_size": 4096,
"name": "Aio1",
"filename": "/dev/sdc"
}
}
]
}
]
}
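A JSON syntax slip in this file will make startup fail, so it can help to generate it programmatically instead of writing it by hand. A small sketch (file name and device path taken from the example above):

```python
import json

# Startup configuration: create an aio bdev on top of the physical disk.
config = {
    "subsystems": [
        {
            "subsystem": "bdev",
            "config": [
                {
                    "method": "bdev_aio_create",
                    "params": {
                        "block_size": 4096,
                        "name": "Aio1",
                        "filename": "/dev/sdc",  # path to the physical disk
                    },
                }
            ],
        }
    ]
}

with open("disk.json", "w") as f:
    json.dump(config, f, indent=2)
```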
/dev/sdc in this example is the path to the block device of the physical disk.
All the commands described in this section are JSON-RPC commands to be sent as parameters to the script rpc.py.
We call this node node1 (the remote node).
First of all we have to create a Logical Volume Store based on the aio bdev created at startup:
bdev_lvol_create_lvstore Aio1 lvstore1
and then we create a logical volume in this store (this one has a size of 10 MiB; the -t flag makes it thin provisioned):
bdev_lvol_create -l lvstore1 -t lvol1 10
To export the volume via NVMe-oF, first we have to create the transport for the fabric, in this case TCP:
nvmf_create_transport -t tcp
then we create an NVMe-oF subsystem:
nvmf_create_subsystem nqn.2023-01.io.spdk:cnode1 -a -s SPDK00000000000021 -d SPDK_Controller
and we attach a namespace to this subsystem with:
nvmf_subsystem_add_ns nqn.2023-01.io.spdk:cnode1 lvstore1/lvol1
To complete the operation we create a listener for this subsystem:
nvmf_subsystem_add_listener nqn.2023-01.io.spdk:cnode1 -t tcp -a <node1_ipaddr> -s 4420
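When driving both nodes from one machine through the socat bridge described earlier, it helps to build each rpc.py invocation with a small helper that targets the right instance. A sketch, assuming rpc.py's -s (server address) and -p (port) options; the remote address 192.0.2.10 is a placeholder for <node1_ipaddr>:

```python
def rpc_argv(rpc_py, *args, server=None, port=None):
    """Build the argv for an rpc.py call, optionally targeting a TCP-redirected socket."""
    argv = [rpc_py]
    if server is not None:
        argv += ["-s", server]          # rpc.py server address (IP -> TCP)
    if port is not None:
        argv += ["-p", str(port)]       # rpc.py server port
    argv += list(args)
    return argv

# Local SPDK instance (default Unix socket):
local_cmd = rpc_argv("./scripts/rpc.py",
                     "bdev_lvol_create_lvstore", "Aio1", "lvstore1")

# Remote instance reached through the socat TCP bridge on port 8000:
remote_cmd = rpc_argv("./scripts/rpc.py",
                      "nvmf_create_transport", "-t", "tcp",
                      server="192.0.2.10", port=8000)
```

Each argv can then be run with subprocess.run(cmd, check=True), so the whole remote-node sequence can be scripted from one place.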
We call this node node0 (the local node).
On the local node too we create a logical volume, in the same way as on the remote node (here Aio0 is the aio bdev created at startup on this node):
bdev_lvol_create_lvstore Aio0 lvstore0
bdev_lvol_create -l lvstore0 -t lvol0 10
We have to create a bdev that represents the logical volume created on the remote node. That volume is exported via NVMe-oF, so we attach to its controller:
bdev_nvme_attach_controller -b Nvme1 -t tcp -a <node1_ipaddr> -n nqn.2023-01.io.spdk:cnode1 -s 4420 -f ipv4
At this point we can create the RAID1 bdev, using as base bdevs the local logical volume and the remote one attached via NVMe-oF:
bdev_raid_create -n raid1-2 -r raid1 -b "lvstore0/lvol0 Nvme1n1"
Now we have to export the RAID1 bdev via NVMe-oF, in the same way we exported the logical volume on the remote node:
nvmf_create_transport -t tcp
nvmf_create_subsystem nqn.2023-01.io.spdk:cnode0 -a -s SPDK00000000000020 -d SPDK_Controller
nvmf_subsystem_add_ns nqn.2023-01.io.spdk:cnode0 raid1-2
nvmf_subsystem_add_listener nqn.2023-01.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4422
Finally, we can use the Linux NVMe driver to attach to our RAID1 exported via NVMe-oF. First of all we have to load the kernel module:
modprobe nvme-tcp
and install nvme-cli:
apt-get install nvme-cli
then we can discover the subsystem:
nvme discover -t tcp -a 127.0.0.1 -s 4422
connect to it:
nvme connect -t tcp -a 127.0.0.1 -s 4422 --nqn nqn.2023-01.io.spdk:cnode0
and finally get the block device with:
nvme list
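To pick out the SPDK device programmatically rather than reading the table by eye, nvme-cli can also emit JSON (nvme list -o json), which we can filter by the model string set with -d when creating the subsystem. A sketch; the sample output below is hypothetical, and the exact field names can vary between nvme-cli versions:

```python
import json

# Hypothetical `nvme list -o json` output; field names may differ by nvme-cli version.
sample = """
{
  "Devices": [
    {"DevicePath": "/dev/nvme0n1", "ModelNumber": "SPDK_Controller"}
  ]
}
"""

def find_spdk_devices(listing, model_substr="SPDK"):
    """Return device paths whose model string mentions SPDK."""
    data = json.loads(listing)
    return [d["DevicePath"] for d in data.get("Devices", [])
            if model_substr in d.get("ModelNumber", "")]

print(find_spdk_devices(sample))
```

In a real script, listing would come from running `nvme list -o json` with subprocess and capturing its stdout.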