
How to set up a RAID1 block device with SPDK


Summary

In this wiki we will describe how to set up a block device backed by two logical volumes combined in a RAID1 mirror using SPDK. One of these volumes is local, i.e. it is stored on the same node as the block device, and the other one is remote.

Introduction

The SPDK app can be controlled via JSON-RPC with the Python script rpc.py, as described in this doc. rpc.py can send commands to a remote SPDK instance too, but the SPDK app listens only on a Unix socket. So, when we have to control both the local and the remote node, we have two options:

  • send commands to each SPDK app from its own node
  • send commands to both instances from the same node using socket redirection with socat, for example like this:
socat TCP4-LISTEN:8000,fork,reuseaddr UNIX-CONNECT:/var/tmp/spdk.sock
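
With this redirection running on node1, rpc.py can reach the remote instance over TCP: rpc.py accepts an IP address with -s and a port with -p. A quick sanity check from node0, assuming port 8000 as in the socat example above:

./scripts/rpc.py -s <node1_ipaddr> -p 8000 bdev_get_bdevs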

The SPDK app can also take commands at startup via a JSON configuration file. This can be useful, for example, to make the SPDK app attach at startup to the disk that will be used as the base bdev for the Blobstore.

App start

First of all we have to perform the operations described in this getting_started guide (this system_configuration page may be helpful too). The repo we will work on is this one, and the branch is longhorn.

The SPDK app can be started with the following command:

./build/bin/spdk_tgt --json disk.json

where disk.json has this content:

{
    "subsystems": [
        {
            "subsystem": "bdev",
            "config": [
                {
                    "method": "bdev_aio_create",
                    "params": {
                        "block_size": 4096,
                        "name":"Aio1",
                        "filename": "/dev/sdc"
                    }
                }
            ]
        }
    ]
}

/dev/sdc in this example is the path to the block device of the physical disk.
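
Once the app is running, we can check that the aio bdev was actually created (here the full rpc.py invocation is shown; the later commands list only the method and its parameters):

./scripts/rpc.py bdev_get_bdevs -b Aio1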

SPDK RAID1 setup

All the commands described in this section are JSON-RPC methods to be invoked through the rpc.py script.

Remote node configuration

We call this node node1.

Volume creation

First of all we have to create a Logical Volume Store based on the aio bdev created at startup:

bdev_lvol_create_lvstore Aio1 lvstore1

and then we create a logical volume in this store (this one has a size of 10 MiB; the -t flag makes it thin-provisioned)

bdev_lvol_create -l lvstore1 -t lvol1 10
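
The new logical volume shows up as a bdev whose alias is lvstore1/lvol1. If something looks wrong, both the store and the volume can be inspected with:

bdev_lvol_get_lvstores
bdev_get_bdevs -b lvstore1/lvol1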

Volume export via NVMe-oF

First of all we have to create the transport for the fabric, in this case TCP

nvmf_create_transport -t tcp

then we create an NVMe-oF subsystem

nvmf_create_subsystem nqn.2023-01.io.spdk:cnode1 -a -s SPDK00000000000021 -d SPDK_Controller

and we attach a namespace to this subsystem with

nvmf_subsystem_add_ns nqn.2023-01.io.spdk:cnode1 lvstore1/lvol1

To complete the operation we create a listener for this subsystem

nvmf_subsystem_add_listener nqn.2023-01.io.spdk:cnode1 -t tcp -a <node1_ipaddr> -s 4420
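
The whole configuration of node1 can now be verified with nvmf_get_subsystems, whose output should list cnode1 together with its namespace and the TCP listener:

nvmf_get_subsystems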

Local node configuration

We call this node node0

Volume creation

On the local node too we create a logical volume, in the same way as on the remote node:

bdev_lvol_create_lvstore Aio0 lvstore0
bdev_lvol_create -l lvstore0 -t lvol0 10
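
This assumes that the SPDK app on node0 was started with a disk.json analogous to the one shown above, creating an aio bdev named Aio0; a sketch, with /dev/sdb as a placeholder for node0's physical disk:

{
    "subsystems": [
        {
            "subsystem": "bdev",
            "config": [
                {
                    "method": "bdev_aio_create",
                    "params": {
                        "block_size": 4096,
                        "name": "Aio0",
                        "filename": "/dev/sdb"
                    }
                }
            ]
        }
    ]
}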

Attach to remote NVMe-oF subsystem

We have to create a bdev that represents the logical volume created on the remote node. That volume is exported via NVMe-oF, so we attach to its controller:

bdev_nvme_attach_controller -b Nvme1 -t tcp -a <node1_ipaddr> -n nqn.2023-01.io.spdk:cnode1 -s 4420 -f ipv4
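
bdev_nvme_attach_controller creates one bdev per namespace of the remote subsystem, named <controller_name>n<nsid>; our single namespace therefore shows up as the bdev Nvme1n1, which can be confirmed with:

bdev_get_bdevs -b Nvme1n1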

RAID1 creation

At this point we can create the RAID1 bdev, using as base bdevs the local logical volume and the remote one attached via NVMe-oF:

bdev_raid_create -n raid1-2 -r raid1 -b "lvstore0/lvol0 Nvme1n1"
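
We can verify that the array was assembled correctly, i.e. that both base bdevs are present and the raid is online, with:

bdev_raid_get_bdevs all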

RAID1 export

Now we have to export the RAID1 bdev via NVMe-oF, in the same way we exported the logical volume on the remote node:

nvmf_create_transport -t tcp
nvmf_create_subsystem nqn.2023-01.io.spdk:cnode0 -a -s SPDK00000000000020 -d SPDK_Controller
nvmf_subsystem_add_ns nqn.2023-01.io.spdk:cnode0 raid1-2
nvmf_subsystem_add_listener nqn.2023-01.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4422
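
As a check, the listener we just added should appear in the output of:

nvmf_subsystem_get_listeners nqn.2023-01.io.spdk:cnode0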

Block device creation

Finally, we can use the Linux NVMe driver to attach to our RAID1 bdev exported via NVMe-oF. First of all we have to load the kernel module

modprobe nvme-tcp

and install nvme-cli

apt-get install nvme-cli

then we can discover the subsystem

nvme discover -t tcp -a 127.0.0.1 -s 4422

connect to it

nvme connect -t tcp -a 127.0.0.1 -s 4422 --nqn nqn.2023-01.io.spdk:cnode0

and finally get the block device with

nvme list
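
nvme list should now show a new block device, for example /dev/nvme0n1 (the actual name depends on the system). Assuming that name, a quick smoke test that writes one block through the mirror and reads it back:

dd if=/dev/zero of=/dev/nvme0n1 bs=4096 count=1 oflag=direct
dd if=/dev/nvme0n1 of=/dev/null bs=4096 count=1 iflag=direct

When done, the subsystem can be detached with nvme disconnect --nqn nqn.2023-01.io.spdk:cnode0.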