A daemon that handles the userspace side of the NBD (Network Block Device) backstore.
A CLI utility that makes backstore creation/deletion/mapping/unmapping/listing easy.
nbd-runner nbd-cli
+----------------------+ +---------------------+
| | | |
| | | |
+-----------+ | CONTROL HOST IP | RPC control route | create/delete |
| | | listen on <---------------------> map/unmap/list, etc |
| Gluster <-----> TCP/24110 port | | + |
| | | | | | |
+-----------+ | | | | |
| | | | MAP will |
| | | | setup |
| | | | the NBD |
| | | | devices |
| | | | |
+-----------+ | | | | |
| | | | | | |
| Ceph <-----> | | | READ |
| | | IO HOST IP | MAPPED NBD(IO) route| v WRITE |
+-----------+ | listen on <-------------------->+ /dev/nbdXX FLUSH |
| TCP/24111 port | | TRIM |
| | | ... |
| | | |
+----------------------+ +---------------------+
NOTE: The 'CONTROL HOST IP' and the 'IO HOST IP' can be the same or different, and 'nbd-runner' and 'nbd-cli' can run on the same node or on different nodes; both are up to your use case.
nbd-runner is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.
# git clone https://github.com/gluster/nbd-runner.git
# cd nbd-runner/
# dnf install autoconf automake libtool glusterfs-api-devel kmod-devel libnl3-devel libevent-devel glib2-devel json-c-devel
# dnf install libtirpc-devel rpcgen   # only needed in Fedora or other distributions where the glibc version >= 2.26
# ./autogen.sh
# ./configure   # '--with-tirpc=no' uses the legacy glibc RPC code instead of libtirpc (the default); '--with-gfapi6' uses GFAPI version >= 6.0
# make -j
# make install
NOTE: Glibc removed the RPC functions in the 2.26 release. Instead of relying on glibc to provide them, the modern libtirpc library should be used. For older glibc versions, or on some Linux distributions, we still use glibc to provide the RPC library.
Prerequisites: this guide assumes that the following are already present
- The kernel or the nbd.ko module must be new enough to support the netlink feature
- Open the 24110 and 24111 (nbd-runner) and 111 (rpcbind) ports in your firewall
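For example, on a Fedora/RHEL-style host using firewalld, the prerequisites above can be checked and opened roughly as follows (a sketch only; the exact commands depend on your distribution and firewall tooling):

# modprobe nbd                                           # load the nbd kernel module
# modinfo nbd                                            # confirm nbd.ko is available and check its version
# firewall-cmd --permanent --add-port=24110-24111/tcp    # nbd-runner control and IO ports
# firewall-cmd --permanent --add-port=111/tcp            # rpcbind
# firewall-cmd --reload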
Daemon: run nbd-runner on a node where you can access the gluster volumes through gfapi
# nbd-runner help
Usage:
nbd-runner [<args>]
Commands:
help
Display help for nbd-runner command
threads <NUMBER>
Specify the number of IO threads for each mapped backstore, 1 as default
rpchost <CONTROL_HOST>
Specify the listening IP on which the nbd-runner server receives and replies to the control
commands (create/delete/map/unmap/list, etc.) from nbd-cli, INADDR_ANY as default
iohost <IO_HOST>
Specify the listening IP on which the nbd-runner server receives and replies to the NBD device's
IO operations (WRITE/READ/FLUSH/TRIM, etc.), INADDR_ANY as default
version
Show version info and exit.
NOTE:
The CONTROL_HOST and the IO_HOST are useful if you'd like the control-command route to
differ from the IO route by using different NICs; otherwise just omit them to use the defaults
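For example, to run the daemon with the control route and the IO route split across two NICs (the IP addresses below are placeholders for addresses on your own host):

# nbd-runner threads 4 rpchost 192.168.1.10 iohost 192.168.2.10

Omitting both rpchost and iohost makes the daemon listen on INADDR_ANY for both routes, which is fine when a single NIC carries all the traffic.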
CLI: you can run nbd-cli from any node where a new enough nbd.ko module is available
# nbd-cli help
Usage:
gluster help
Display help for gluster commands
ceph help [TODO]
Display help for ceph commands
global help [TODO]
Display help for global commands
version
Display the version of nbd-cli
Gluster is a well known scale-out distributed storage system, flexible in its design and easy to use. One of its key goals is to provide high availability of data. Gluster is very easy to set up and use, and addition and removal of storage servers from a Gluster cluster is intuitive. These capabilities, along with the other data services that Gluster provides, make it a reliable software-defined storage platform.
A unique distributed storage solution built on traditional filesystems
Prerequisites: this guide assumes that the following are already present
- A gluster volume must be created and started first
- Open the 24007 port (for glusterd) and allow the glusterfs service in your firewall
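For example, a minimal test volume can be created with the standard gluster CLI (HOST and the brick path are placeholders; 'force' is only needed when the brick sits on the root partition):

# gluster volume create testvol HOST:/bricks/testvol force
# gluster volume start testvol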
- Create a volume in the gluster storage cluster.
- Run the nbd-runner daemon on any of the gluster storage cluster nodes, or on any other node that can access the gluster volume via the gfapi library.
# nbd-runner [<args>]
- Create a file in the volume using the gluster CLI tools, or just use the 'nbd-cli gluster create' command.
# mount.glusterfs HOST:/VOLUME /mnt && fallocate -l 1G /mnt/FILEPATH
or
# nbd-cli gluster create <VOLUME@GLUSTER_HOST:/FILEPATH> [prealloc] <size SIZE> <host CONTROL_HOST>
- Map the file created in the backstore gluster volume to an NBD device (on the local host). You can specify an unmapped /dev/nbdXX, or just omit it and the NBD kernel module will allocate one for you.
# nbd-cli gluster map <VOLUME@GLUSTER_HOST:/FILEPATH> [nbd-device] [timeout TIME] [readonly] <host CONTROL_HOST>
- You will see the mapped NBD device returned and displayed, or you can check the mapped device info with:
# nbd-cli gluster list [map|unmap|create|dead|live|all] <host CONTROL_HOST>
- With nbd-runner, the file in the gluster volume is exposed as an NBD device, exporting the target file as a block device via /dev/nbdXX.
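Putting the steps above together, a typical session against a volume named 'testvol' might look like this (the hostname, size, file path and device name are examples only; the actual /dev/nbdXX is whatever the kernel allocates):

# nbd-runner
# nbd-cli gluster create testvol@gluster1.example.com:/block0.img prealloc size 1G host gluster1.example.com
# nbd-cli gluster map testvol@gluster1.example.com:/block0.img timeout 30 host gluster1.example.com
# nbd-cli gluster list map host gluster1.example.com
# mkfs.xfs /dev/nbd0 && mount /dev/nbd0 /mnt

And to tear it down again:

# umount /mnt
# nbd-cli gluster unmap /dev/nbd0 host gluster1.example.com
# nbd-cli gluster delete testvol@gluster1.example.com:/block0.img host gluster1.example.com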
Gluster CLI: the gluster-specific cli commands
# nbd-cli gluster help
Usage:
gluster help
Display help for gluster commands
gluster create <VOLUME@GLUSTER_HOST:/FILEPATH> [prealloc] <size SIZE> <host CONTROL_HOST>
Create FILEPATH in the VOLUME; prealloc is false by default, and SIZE accepts the
suffixes B, K(iB), M(iB), G(iB), T(iB), P(iB), E(iB), Z(iB), Y(iB)
gluster delete <VOLUME@GLUSTER_HOST:/FILEPATH> <host CONTROL_HOST>
Delete FILEPATH from the VOLUME
gluster map <VOLUME@GLUSTER_HOST:/FILEPATH> [nbd-device] [timeout TIME] [readonly] <host CONTROL_HOST>
Map FILEPATH to an nbd device; by default the timeout is 0 and the mapping is not readonly
gluster unmap <nbd-device> <host CONTROL_HOST>
Unmap the nbd device
gluster list [map|unmap|create|dead|live|all] <host CONTROL_HOST>
List the mapped/unmapped NBD devices or the created/dead/live backstores, 'all' as
default. 'create' means the backstore has just been created or is unmapped. 'dead' means
the IO connection is lost, mainly because the nbd-runner service was restarted
without unmapping. 'live' means both the mapping and the IO connection are okay.