LINSTOR is a powerful component in the DRBD SDS stack, but it is still an optional component. If you prefer to manage your resources manually, please check the section on manual administration: [ch-admin-manual].

Common administration

LINSTOR is a configuration management system for storage on Linux systems. It manages LVM logical volumes and/or ZFS ZVOLs on a cluster of nodes. It leverages DRBD for replication between different nodes and to provide block storage devices to users and applications. It manages snapshots, encryption and caching of HDD backed data in SSDs via bcache.

This chapter outlines typical administrative tasks encountered during day-to-day operations. It does not cover troubleshooting tasks; these are covered in detail in [ch-troubleshooting].

Concepts and Terms

A LINSTOR setup has exactly one active controller and multiple satellites. The linstor-controller contains the database that holds all configuration information for the whole cluster. It makes all decisions that need a view of the whole cluster. The controller is typically deployed as an HA service using Pacemaker and DRBD, as it is a crucial part of the system.

The linstor-satellite runs on each node where LINSTOR consumes local storage or provides storage to services. It is stateless; it receives all the information it needs from the controller. It runs programs like lvcreate and drbdadm. It acts like a node agent.

The linstor-client is a command line utility that you use to issue commands to the system and to investigate the status of the system.

Broader Context

While LINSTOR might be used to make the management of DRBD more convenient, it is often integrated with software stacks higher up. Such integrations exist already for Kubernetes, and are in progress for OpenStack, OpenNebula, and Proxmox.

The southbound drivers used by LINSTOR are LVM, thinLVM and ZFS with support for Swordfish in progress.


LINSTOR is packaged in both the .rpm and the .deb variants:

  1. linstor-client contains the command line client program. It only depends on Python, which is usually already installed.

  2. linstor-controller contains the controller and linstor-satellite the satellite. These packages also provide systemd unit files for the services. Both depend on a Java runtime environment (JRE) version 1.8 (headless) or higher, which might pull in about 100MB of dependencies.
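
For example, on a .deb based system the packages could be installed like this (a sketch; it assumes the package repository providing LINSTOR is already configured):

# apt install linstor-controller linstor-satellite linstor-client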

Initializing your cluster

We assume that the following steps are accomplished on all cluster nodes:

  1. The DRBD9 kernel module is installed and loaded

  2. drbd-utils are installed

  3. LVM tools are installed

  4. linstor-controller and/or linstor-satellite and their dependencies are installed

Start the linstor-controller service:

# systemctl start linstor-controller
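
If the controller should also come up automatically after a reboot, enable the unit as usual:

# systemctl enable linstor-controller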

Migrating resources from DRBDManage

The LINSTOR client contains a sub-command that can generate a migration script that adds existing DRBDManage nodes and resources to a LINSTOR cluster. Migration can be done without downtime. If you do not plan to migrate existing resources, continue with the next section.

The first thing to check is whether the DRBDManage cluster is in a healthy state. If the output of drbdmanage assignments looks good, you can export the existing cluster database via drbdmanage export-ctrlvol > ctrlvol.json. You can then use that file as input for the LINSTOR client.

The client does not immediately migrate your resources; it just generates a shell script. Therefore, you can run the migration assistant multiple times and review or modify the generated shell script before actually executing it. Migration script generation is started via linstor dm-migrate ctrlvol.json. The script will ask a few questions and then generate the shell script.

After carefully reading the script, you can shut down DRBDManage and rename the following files. If you do not rename them, the lower-level drbd-utils pick up both kinds of resource files: the ones from DRBDManage and the ones from LINSTOR.

Obviously, you need the linstor-controller service started on one node and the linstor-satellite service on all nodes.

# drbdmanage shutdown -qc # on all nodes
# mv /etc/drbd.d/drbdctrl.res{,.dis} # on all nodes
# mv /etc/drbd.d/drbdmanage-resources.res{,.dis} # on all nodes
# bash

Using the LINSTOR client

Whenever you run the LINSTOR command line client, it needs to know where your linstor-controller runs. If you do not specify it, it will try to reach a locally running linstor-controller listening on IP port 3376.

# linstor node list

should give you an empty list and not an error message.

You can use the linstor command on any other machine, but then you need to tell the client how to find the linstor-controller. As shown below, this can be specified as a command line option, via an environment variable, or in a global file:

# linstor --controllers=alice node list
# LS_CONTROLLERS=alice linstor node list
# FIXME add info about /etc/file...

FIXME describe how to specify multiple controllers

Adding nodes to your cluster

The next step is to add nodes to your LINSTOR cluster. You need to provide:

  1. A node name which must match the output of uname -n

  2. The IP address of the node.

# linstor node create bravo 10.43.70.3

When you use linstor node list you will see that the new node is marked as offline. Now start the linstor-satellite on that node with:

# systemctl start linstor-satellite

About 10 seconds later you will see the status in linstor node list become online. Of course, the satellite process may also be started before the controller knows about the existence of the satellite node.

If the node hosting your controller should also contribute storage to the LINSTOR cluster, you have to add it as a node and start the linstor-satellite service on it as well.
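
For example, assuming the controller runs on a node named 'alpha' (the node name and address here are only illustrative):

# linstor node create alpha 10.43.70.2
# systemctl start linstor-satellite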

Storage pools

Storage pools identify storage in the context of LINSTOR. To group storage pools from multiple nodes, simply use the same name on each node. For example, one valid approach is to give all SSDs one name and all HDDs another.

On each host contributing storage, you need to create either an LVM VG or a ZFS zPool. The VGs and zPools identified with one LINSTOR storage pool name may have different VG or zPool names on the hosts, but do yourself a favor and use the same VG or zPool name on all nodes.

# vgcreate vg_ssd /dev/nvme0n1 /dev/nvme1n1 [...]

These then need to be registered with LINSTOR:

# linstor storage-pool create lvm alpha pool_ssd vg_ssd
# linstor storage-pool create lvm bravo pool_ssd vg_ssd

The storage pool name and common metadata are referred to as a storage pool definition. The listed commands create a storage pool definition implicitly. You can see that by using linstor storage-pool-definition list. Creating storage pool definitions explicitly is possible but not necessary.
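
If a node backs its storage with ZFS instead of LVM, registration works analogously with the zfs driver — a sketch, assuming a zpool named zfs_ssd already exists on node charlie:

# linstor storage-pool create zfs charlie pool_ssd zfs_ssd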

A storage pool per backend device

In clusters where you have only one kind of storage and the capability to hot-repair storage devices, you may choose a model where you create one storage pool per physical backing device. The advantage of this model is to confine failure domains to a single storage device.
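
A minimal sketch of that model, with one VG and one storage pool per NVMe device (all names are only illustrative):

# vgcreate vg_nvme0 /dev/nvme0n1
# vgcreate vg_nvme1 /dev/nvme1n1
# linstor storage-pool create lvm alpha pool_nvme0 vg_nvme0
# linstor storage-pool create lvm alpha pool_nvme1 vg_nvme1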

Cluster configuration


Available storage plugins

LINSTOR has three supported storage plugins as of this writing:

  • Thick LVM

  • Thin LVM with a single thin pool

  • ZFS
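
The plugin is selected by the driver argument of storage-pool create. A hedged sketch of registering one pool of each kind (node, pool, and backing device names are assumptions):

# linstor storage-pool create lvm alpha pool_thick vg_thick
# linstor storage-pool create lvmthin alpha pool_thin vg_thin/thinpool
# linstor storage-pool create zfs alpha pool_zfs zfs_pool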


Creating and deploying resources/volumes

In the following scenario we assume that the goal is to create a resource 'backups' with a size of '500 GB' that is replicated among three cluster nodes.

First, we create a new resource definition:

# linstor resource-definition create backups

Second, we create a new volume definition within that resource definition:

# linstor volume-definition create backups 500G

So far we have only created objects in LINSTOR’s database, not a single LV was created on the storage nodes. Now you have the choice of delegating the task of placement to LINSTOR or doing it yourself.

Manual placement

With the resource create command you may assign a resource definition to named nodes explicitly.

# linstor resource create alpha backups --storage-pool pool_hdd
# linstor resource create bravo backups --storage-pool pool_hdd
# linstor resource create charlie backups --storage-pool pool_hdd


Autoplace

The value after --auto-place tells LINSTOR how many replicas you want to have. The --storage-pool option should be obvious.

# linstor resource create backups --auto-place 3 --storage-pool pool_hdd

Maybe not so obvious is that you may omit the --storage-pool option; in that case, LINSTOR may select a storage pool on its own. The selection follows these rules:

  • Ignore all nodes and storage pools the current user has no access to

  • Ignore all diskless storage pools

  • Ignore all storage pools not having enough free space

From the remaining storage pools, LINSTOR currently chooses the one with the most available free space.
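
For example, a sketch of letting LINSTOR pick the storage pools entirely on its own:

# linstor resource create backups --auto-place 3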

DRBD clients

By using the --diskless option instead of --storage-pool you can have a permanently diskless DRBD device on a node.

# linstor resource create delta backups --diskless

Volumes of one resource to different Storage-Pools

This can be achieved by setting the StorPoolName property on the volume definitions before the resource is deployed to the nodes:

# linstor resource-definition create backups
# linstor volume-definition create backups 500G
# linstor volume-definition create backups 100G
# linstor volume-definition set-property backups 0 StorPoolName pool_hdd
# linstor volume-definition set-property backups 1 StorPoolName pool_ssd
# linstor resource create alpha backups
# linstor resource create bravo backups
# linstor resource create charlie backups

Since the volume-definition create command is used without the --vlmnr option, LINSTOR assigns volume numbers starting at 0. In the following two lines, the 0 and 1 refer to these automatically assigned volume numbers.

Here the 'resource create' commands do not need a --storage-pool option. In this case, LINSTOR uses a 'fallback' storage pool. To find that storage pool, LINSTOR queries the properties of the following objects in the following order:

  • Volume definition

  • Resource

  • Resource definition

  • Node

If none of those objects contain a StorPoolName property, the controller falls back to a hardcoded 'DfltStorPool' string as a storage pool.

This also means that if you forgot to define a storage pool prior to deploying a resource, you will get an error message that LINSTOR could not find the storage pool named 'DfltStorPool'.
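
For example, the fallback pool could be set on the resource definition or on a node instead of on each volume definition — a sketch, assuming pool_hdd exists on the nodes involved:

# linstor resource-definition set-property backups StorPoolName pool_hdd
# linstor node set-property alpha StorPoolName pool_hdd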

Managing Network Interface Cards

LINSTOR can deal with multiple network interface cards (NICs) in a machine; they are called netif in LINSTOR speak.

When a satellite node is created, a first netif gets created implicitly with the name default. Using the --interface-name option of the node create command, you can give it a different name.

Additional NICs are created like this:

# linstor node interface create alpha 100G_nic 192.168.43.221
# linstor node interface create alpha 10G_nic 192.168.43.231

NICs are identified by the IP address only; the name is arbitrary and is not related to the interface name used by Linux. The NICs can be assigned to storage pools so that whenever a resource is created in such a storage pool, the DRBD traffic will be routed through the specified NIC.

# linstor storage-pool set-property alpha pool_hdd PrefNic 10G_nic
# linstor storage-pool set-property alpha pool_ssd PrefNic 100G_nic

FIXME describe how to route the controller <-> client communication through a specific netif.

Encrypted volumes

LINSTOR can handle transparent encryption of DRBD volumes. dm-crypt is used to encrypt the storage provided by the backing storage device.

Basic steps to use encryption:

  1. Disable user security on the controller (this will be obsolete once authentication works)

  2. Create a master passphrase

  3. Create a volume definition with the --encrypt option

  4. Don’t forget to re-enter the master passphrase after a controller restart.

Disable user security

Disabling user security on the LINSTOR controller is a one-time operation; the setting is persisted afterwards.

  1. Stop the running linstor-controller via systemd: systemctl stop linstor-controller

  2. Start a linstor-controller in debug mode: /usr/share/linstor-server/bin/Controller -c /etc/linstor -d

  3. In the debug console enter: setSecLvl secLvl(NO_SECURITY)

  4. Stop linstor-controller with the debug shutdown command: shutdown

  5. Start the controller again with systemd: systemctl start linstor-controller

Encrypt commands

Below are details about the commands.

Before LINSTOR can encrypt any volume a master passphrase needs to be created. This can be done with the linstor-client.

# linstor encryption create-passphrase

encryption create-passphrase will wait for the user to input the initial master passphrase (as all other encryption commands do when called without arguments).

If you ever want to change the master passphrase this can be done with:

# linstor encryption modify-passphrase

To mark which volumes should be encrypted, add the --encrypt flag while creating the volume definition, e.g.:

# linstor volume-definition create crypt_rsc 1G --encrypt

To enter the master passphrase (after controller restart) use the following command:

# linstor encryption enter-passphrase

Whenever the linstor-controller is restarted, the user has to send the master passphrase to the controller; otherwise LINSTOR is unable to reopen or create encrypted volumes.
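
Putting the steps together, a minimal sketch of creating an encrypted, replicated volume (node and storage pool names are assumptions):

# linstor encryption create-passphrase
# linstor resource-definition create crypt_rsc
# linstor volume-definition create crypt_rsc 1G --encrypt
# linstor resource create crypt_rsc --auto-place 2 --storage-pool pool_ssd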

Managing snapshots

Snapshots are supported with thin LVM and ZFS storage pools.

Creating a snapshot

Assuming a resource definition named 'resource1' which has been placed on some nodes, a snapshot can be created as follows:

# linstor snapshot create resource1 snap1

This will create snapshots on all nodes where the resource is present. LINSTOR will ensure that consistent snapshots are taken even when the resource is in active use.

Restoring a snapshot

The following steps restore a snapshot to a new resource. This is possible even when the original resource has been removed from the nodes where the snapshots were taken.

First define the new resource with volumes matching those from the snapshot:

# linstor resource-definition create resource2
# linstor snapshot volume-definition restore --from-resource resource1 --from-snapshot snap1 --to-resource resource2

At this point, additional configuration can be applied if necessary. Then, when ready, create resources based on the snapshots:

# linstor snapshot resource restore --from-resource resource1 --from-snapshot snap1 --to-resource resource2

This will place the new resource on all nodes where the snapshot is present. The nodes on which to place the resource can also be selected explicitly; see the help (linstor snapshot resource restore -h).

Removing a snapshot

An existing snapshot can be removed as follows:

# linstor snapshot delete resource1 snap1

Checking the state of your cluster

LINSTOR provides various commands to check the state of your cluster. These commands start with a 'list-' prefix and provide various filtering and sorting options. The '--groupby' option can be used to group and sort the output in multiple dimensions.

# linstor node list
# linstor storage-pool list --groupby Size

Setting options for resources

DRBD options are set using LINSTOR commands. Configuration in files such as /etc/drbd.d/global_common.conf that are not managed by LINSTOR will be ignored. The following commands show the usage and available options:

# linstor controller drbd-options -h
# linstor resource-definition drbd-options -h
# linstor volume-definition drbd-options -h
# linstor resource drbd-peer-options -h

For instance, it is easy to set the DRBD protocol for a resource named backups:

# linstor resource-definition drbd-options --protocol C backups

Rebalancing data with LINSTOR


External database

It is possible to have LINSTOR work with an external database provider like PostgreSQL or MariaDB. To use an external database, a few additional configuration steps are needed.

  1. The JDBC database driver for your database needs to be downloaded and installed to the LINSTOR library directory.

  2. The /etc/linstor/database.cfg configuration file needs to be edited for your database setup.


The PostgreSQL JDBC driver can be downloaded here:

Afterwards, copy it to: /usr/share/linstor-server/lib/
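
For example (the jar file name is only a placeholder; it depends on the driver version you downloaded):

# cp postgresql-42.2.jar /usr/share/linstor-server/lib/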

A sample PostgreSQL database.cfg looks like this:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
  <comment>LinStor PostgreSQL configuration</comment>
  <entry key="user">linstor</entry>
  <entry key="password">linstor</entry>
  <entry key="connection-url">jdbc:postgresql://localhost/linstor</entry>
</properties>


The MariaDB JDBC driver can be downloaded here:

Afterwards, copy it to: /usr/share/linstor-server/lib/

A sample MariaDB database.cfg looks like this:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
  <comment>LinStor MariaDB configuration</comment>
  <entry key="user">linstor</entry>
  <entry key="password">linstor</entry>
  <entry key="connection-url">jdbc:mariadb://localhost/LINSTOR?createDatabaseIfNotExist=true</entry>
</properties>

The LINSTOR schema/database is created as LINSTOR, so make sure the MariaDB connection string refers to the LINSTOR schema, as in the example above.

Getting help


A quick way to list available commands on the command line is to type linstor.

Further information on subcommands (e.g., list-nodes) can be retrieved in two ways:

# linstor node list -h
# linstor help node list

Using the 'help' subcommand is especially helpful when LINSTOR is executed in interactive mode (linstor interactive).

One of the most helpful features of LINSTOR is its rich tab-completion, which can be used to complete basically every object LINSTOR knows about (e.g., node names, IP addresses, resource names, …​). In the following examples, we show some possible completions, and their results:

# linstor node create alpha 1<tab> # completes the IP address if hostname can be resolved
# linstor resource create b<tab> c<tab> # linstor resource create backups charlie

If tab-completion does not work out of the box, please try to source the appropriate file:

# source /etc/bash_completion.d/linstor # or
# source /usr/share/bash_completion/completions/linstor

For zsh shell users, linstor-client can generate a zsh completion file, which has basic support for command and argument completion.

# linstor gen-zsh-completer > /usr/share/zsh/functions/Completion/Linux/_linstor