sesdev is a CLI tool to deploy Ceph clusters (both the upstream and SUSE downstream versions).
This tool uses Vagrant behind the scenes to create the VMs and run the deployment scripts.
First, you need both QEMU and Libvirt installed on the machine that will host the VMs created by sesdev.
Installable packages for various Linux distributions like Fedora or openSUSE can be found on the openSUSE Build Service (OBS).
$ sudo zypper -n install patterns-openSUSE-kvm_server \
patterns-server-kvm_tools bridge-utils
$ sudo systemctl enable libvirtd
$ sudo systemctl restart libvirtd
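To confirm that the libvirt daemon is up and reachable before going further, a quick check (systemctl and virsh are standard tools; the domain list will be empty on a fresh host):
$ sudo systemctl status libvirtd
$ sudo virsh list --all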
If you are running libvirt on the same machine where you installed sesdev, add your user to the "libvirt" group to avoid "no polkit agent available" errors when vagrant attempts to connect to the libvirt daemon:
$ sudo groupadd libvirt
groupadd: group 'libvirt' already exists
$ sudo usermod -a -G libvirt $USER
Log out, and then log back in. You should now be a member of the "libvirt" group.
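You can verify that the membership took effect with the standard id command; the output should include libvirt:
$ id -nG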
sesdev needs Vagrant to work.
$ sudo zypper ar https://download.opensuse.org/repositories/Virtualization:/vagrant/<repo> vagrant_repo
$ sudo zypper ref
$ sudo zypper -n install vagrant vagrant-libvirt
Where <repo> can be openSUSE_Leap_15.1 or openSUSE_Tumbleweed.
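Once installed, a quick way to verify that Vagrant is available on the PATH:
$ vagrant --version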
sesdev itself can be installed either from package or from source. If you prefer to install from package, follow the instructions in this section. If you prefer to install from source, skip down to the "Install sesdev from source" section.
$ sudo zypper ar https://download.opensuse.org/repositories/filesystems:/ceph/<repo> filesystems_ceph
$ sudo zypper ref
$ sudo zypper install sesdev
Where <repo> can be openSUSE_Leap_15.1, openSUSE_Leap_15.2, or openSUSE_Tumbleweed.
At this point, sesdev should be installed and ready to use: refer to the "Usage" chapter, below, for further information.
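As a quick smoke test, you can ask sesdev for its help output:
$ sesdev --help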
$ sudo dnf install qemu-common qemu-kvm libvirt-daemon-kvm \
libvirt-daemon libvirt-daemon-driver-qemu vagrant-libvirt
$ sudo systemctl enable libvirtd
$ sudo systemctl restart libvirtd
$ sudo dnf config-manager --add-repo \
https://download.opensuse.org/repositories/filesystems:/ceph/<distro>/filesystems:ceph.repo
$ sudo dnf install sesdev
Where <distro> can be either Fedora_29 or Fedora_30.
At this point, sesdev should be installed and ready to use: refer to the "Usage" chapter, below, for further information.
sesdev itself can be installed either from package or from source. If you prefer to install from source, follow the instructions in this section. If you prefer to install from package, scroll up to the "Install sesdev from package" section for your operating system.
sesdev uses the libvirt API Python bindings, and these cannot be installed via pip unless the RPM packages "gcc", "python3-devel", and "libvirt-devel" are installed, first. Also, in order to clone the sesdev git repo, the "git-core" package is needed. So, before proceeding, make sure that all of these packages are installed in the system:
$ sudo zypper -n install gcc git-core libvirt-devel python3-devel
Now you can proceed to clone the sesdev source code repo, create and activate a virtualenv, and install sesdev's Python dependencies in it:
$ git clone https://github.com/SUSE/sesdev.git
$ cd sesdev
$ virtualenv venv
$ source venv/bin/activate
$ pip install --editable .
Remember to re-run pip install --editable . after each git pull.
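A typical update of a source installation might therefore look like this (assuming the virtualenv created above):
$ cd sesdev
$ git pull
$ source venv/bin/activate
$ pip install --editable .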
At this point, sesdev should be installed and ready to use: refer to the "Usage" chapter, below, for further information.
If you are preparing a code change for submission and would like to run it through the linter, install the "tox" and "pylint" packages in your system, first:
$ sudo zypper -n install python3-tox python3-pylint
Then, execute the following command in the top-level of your local git clone:
$ tox -elint
Run sesdev --help or sesdev <command> --help to see the available options and a description of the commands.
To create a single node Ceph cluster based on nautilus/leap-15.1 on your local system, run the following command:
$ sesdev create nautilus --single-node mini
The mini argument is the ID of the deployment. You can create many deployments by giving them different IDs.
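For example, a second single-node deployment can coexist with the first under a different ID (mini2 is an arbitrary name):
$ sesdev create nautilus --single-node mini2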
If you would like to start the cluster VMs on a remote server via libvirt/SSH, create a configuration file $HOME/.sesdev/config.yaml with the following content:
libvirt_use_ssh: true
libvirt_user: <ssh_user>
libvirt_private_key_file: <private_key_file> # defaults to $HOME/.ssh/id_rsa
libvirt_host: <hostname|ip address>
Note that passwordless SSH access to this user@host combination needs to be configured and enabled.
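One common way to set this up is ssh-copy-id (a standard OpenSSH tool); substitute the user and host from your config.yaml:
$ ssh-copy-id <ssh_user>@<libvirt_host>
$ ssh <ssh_user>@<libvirt_host> true
The second command should return without prompting for a password.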
To create a multi-node Ceph cluster, you can specify the nodes and their roles using the --roles option.
The roles of each node are grouped in square brackets and separated by commas; the node groups themselves are also separated by commas.
The following roles can be assigned:
- admin - The admin node, running management components like the Salt master or openATTIC (SES5 only)
- client - Various Ceph client utilities
- ganesha - NFS Ganesha service
- grafana - Grafana metrics visualization (requires Prometheus)
- igw - iSCSI target gateway
- mds - CephFS MDS
- mgr - Ceph Manager instance
- mon - Ceph Monitor instance
- prometheus - Prometheus monitoring
- rgw - Ceph Object Gateway
- storage - OSD storage daemon
- suma - SUSE Manager (octopus only)
The following example will generate a cluster with 4 nodes: an admin node running the Salt master and a MON; two storage nodes, each also running a MON, a MGR, and an MDS; and a fourth node running an iSCSI gateway, an NFS Ganesha gateway, and an RGW gateway.
$ sesdev create nautilus --roles="[admin, mon], [storage, mon, mgr, mds], \
[storage, mon, mgr, mds], [igw, ganesha, rgw]" big_cluster
If you have the URL(s) of custom zypper repo(s) that you would like to add to all the nodes of the cluster prior to deployment, add one or more --repo options to the command line, e.g.:
$ sesdev create nautilus --single-node --repo [URL_OF_REPO] mini
By default, the custom repo(s) will be added with an elevated priority, to ensure that packages from these repos will be installed even if higher RPM versions of those packages exist. If this behavior is not desired, add --no-repo-priority to disable it.
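For example, a sketch combining the options described above:
$ sesdev create nautilus --single-node --repo [URL_OF_REPO] --no-repo-priority mini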
To list all existing deployments, run:
$ sesdev list
$ sesdev ssh <deployment_id> [NODE]
Spawns an SSH shell to the admin node, or to node NODE if explicitly specified. You can check the existing node names with the following command:
$ sesdev show <deployment_id>
sesdev provides a subset of scp functionality. For details, see:
$ sesdev scp --help
It's possible to use an SSH tunnel to enable TCP port-forwarding for a service running in the cluster. Currently, the following services can be forwarded:
- dashboard - The Ceph Dashboard (nautilus and above)
- grafana - Grafana metrics dashboard
- openattic - openATTIC Ceph management UI (ses5 only)
- suma - SUSE Manager (octopus only)
$ sesdev tunnel <deployment_id> dashboard
The command will output the URL that you can use to access the dashboard.
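Other services from the list above are forwarded the same way, e.g. Grafana:
$ sesdev tunnel <deployment_id> grafana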
A running cluster can be stopped by running the following command:
$ sesdev stop <deployment_id>
To remove a cluster (both the deployed VMs and the configuration), use the following command:
$ sesdev destroy <deployment_id>
This section describes some common pitfalls and how to resolve them.
After deleting the ~/.sesdev directory, sesdev create fails because Vagrant throws an error message containing the words "domain about to create is already taken".
This typically occurs when the ~/.sesdev directory is deleted. The libvirt environment still has the domains, etc., whose metadata was deleted, and Vagrant does not recognize the existing VM as one it created, even though the name is identical.
This can be resolved by manually deleting all the domains (VMs) and volumes associated with the old deployment:
$ sudo virsh list --all
$ # see the names of the "offending" machines. For each, do:
$ sudo virsh destroy <THE_MACHINE>
$ sudo virsh undefine <THE_MACHINE>
$ sudo virsh vol-list default
$ # For each of the volumes associated with one of the deleted machines, do:
$ sudo virsh vol-delete --pool default <THE_VOLUME>
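If many domains were left behind, a small shell loop can automate the per-machine steps. This is only a sketch: it assumes the old deployment's domain names share the deployment ID as a prefix (e.g. "mini_"), which you should verify with virsh list --all first:
$ # Hypothetical bulk cleanup; adjust the "mini_" prefix to match your deployment ID
$ for dom in $(sudo virsh list --all --name | grep '^mini_'); do \
      sudo virsh destroy "$dom" 2>/dev/null; sudo virsh undefine "$dom"; done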