Home
Welcome to the ceph-tutorial wiki!
4 nodes:
- ceph-node-0: deploy, mon, mds, rgw
- ceph-node-[1-3]: osd (device /dev/vdb)
On each node:
- edit the file /etc/hosts:
x.x.x.x ceph-node-1
y.y.y.y ceph-node-2
...
- create the user ceph-deploy
- configure passwordless login for the user ceph-deploy
On every cluster node create the "ceph-deploy" user and set a password for it:
sudo useradd -d /home/ceph-deploy -m ceph-deploy
sudo passwd ceph-deploy
To provide full privileges to the user, on every cluster node add the following to /etc/sudoers.d/ceph:
echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
And change permissions in this way:
sudo chmod 0440 /etc/sudoers.d/ceph
Configure your admin node with password-less SSH access to each node running Ceph daemons (leave the passphrase empty). On your admin node ceph-node-0, become the ceph-deploy user and generate the SSH key:
su - ceph-deploy
ssh-keygen -t rsa
You will have output like this:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph-deploy/.ssh/id_rsa):
Created directory '/home/ceph-deploy/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ceph-deploy/.ssh/id_rsa.
Your public key has been saved in /home/ceph-deploy/.ssh/id_rsa.pub.
Copy the key to each cluster node and test the password-less access
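For example, from the admin node (using the node names above):
for i in 1 2 3; do ssh-copy-id ceph-deploy@ceph-node-$i; done
ssh ceph-node-1 hostname # should print the hostname without asking for a password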
On the admin node:
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb http://download.ceph.com/debian-infernalis/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
echo deb http://download.ceph.com/debian-giant/ $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/ceph.list
sudo apt-get -qqy update && sudo apt-get install -qqy ntp ceph-deploy
Note: the giant repo has been added as well because ceph-deploy is missing from the hammer and infernalis repositories.
Create a working directory and initialize the new cluster, where <mon> is the initial monitor host:
mkdir cluster-ceph
cd cluster-ceph
ceph-deploy new <mon>
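In our case the initial monitor is ceph-node-0:
ceph-deploy new ceph-node-0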
Add the following line to the [global] section of ceph.conf:
osd pool default size = 3
Install Ceph on all the nodes:
for i in {0..3}; do ceph-deploy install --release infernalis ceph-node-$i; done
Add the initial monitor(s) and gather the keys:
ceph-deploy mon create-initial
After this step the working directory contains the gathered keyrings:
ceph-deploy@ceph-node-0:~/cluster-ceph$ ll
total 188
drwxrwxr-x 2 ceph-deploy ceph-deploy 4096 Dec 7 23:23 ./
drwxr-xr-x 5 ceph-deploy ceph-deploy 4096 Dec 7 22:41 ../
-rw------- 1 ceph-deploy ceph-deploy 71 Dec 7 23:23 ceph.bootstrap-mds.keyring
-rw------- 1 ceph-deploy ceph-deploy 71 Dec 7 23:23 ceph.bootstrap-osd.keyring
-rw------- 1 ceph-deploy ceph-deploy 71 Dec 7 23:23 ceph.bootstrap-rgw.keyring
-rw------- 1 ceph-deploy ceph-deploy 63 Dec 7 23:21 ceph.client.admin.keyring
-rw-rw-r-- 1 ceph-deploy ceph-deploy 260 Dec 7 22:29 ceph.conf
-rw-rw-r-- 1 ceph-deploy ceph-deploy 151132 Dec 7 23:23 ceph.log
-rw------- 1 ceph-deploy ceph-deploy 73 Dec 7 22:28 ceph.mon.keyring
-rw-r--r-- 1 root root 1645 Dec 7 22:30 release.asc
List the available disks, create the OSDs on /dev/vdb, and push the configuration and admin key to all nodes:
for i in {1..3}; do ceph-deploy disk list ceph-node-$i; done
for i in {1..3}; do ceph-deploy osd create ceph-node-$i:vdb; done
for i in {0..3}; do ceph-deploy admin ceph-node-$i; done
Ensure that you have the correct permissions for the ceph.client.admin.keyring.
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
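You can now check the cluster state; with all three OSDs up and in, it should eventually report HEALTH_OK:
ceph -s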
Add the RADOS GW:
ceph-deploy rgw create ceph-node-0
Add the metadata server:
ceph-deploy mds create ceph-node-0
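Note: before CephFS can be mounted (see below), a filesystem must exist. A minimal sketch, where the pool names and placement-group counts are our own choice:
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data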
Try to store a file into the "data" pool (create the pool first if it does not exist) using a command like this:
rados put {object-name} {file-path} --pool=data
Run this command to check that the file has been stored into the pool "data":
rados ls -p data
You can identify the object location with:
ceph osd map {pool-name} {object-name}
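A complete example, using a hypothetical object name and local file:
ceph osd pool create data 64
rados put test-object testfile.txt --pool=data
rados ls -p data
ceph osd map data test-object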
To mount CephFS with FUSE, first install ceph-fuse with the command
sudo apt-get install ceph-fuse
and then mount it, running:
ceph-fuse -m {monitor_hostname}:6789 {mount_point_path}
Note that the mount_point_path must exist before you can mount the Ceph filesystem. In our case the mount point is the directory /ceph-fs, which we create with:
sudo mkdir /ceph-fs
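With our monitor running on ceph-node-0, the mount command becomes:
sudo ceph-fuse -m ceph-node-0:6789 /ceph-fs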
To integrate Ceph with OpenStack, first install the release key and the repo on the OpenStack nodes:
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb http://download.ceph.com/debian-infernalis/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update
On the glance-api node:
sudo apt-get install python-rbd
On the nova-compute, cinder-backup and cinder-volume nodes:
sudo apt-get install ceph-common
On the nodes running glance-api, cinder-volume and nova-compute, copy the cluster configuration file to /etc/ceph/ceph.conf.
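One way to do this from the admin node ({openstack-node} is a placeholder for each target host):
ssh {openstack-node} sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
Then create the pools used by Glance, Cinder and Nova: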
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create backups 128
ceph osd pool create vms 128
Set up cephx authentication for the new clients:
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
Add the keyrings for client.cinder, client.glance, and client.cinder-backup to the appropriate nodes and change their ownership:
# on the cinder-volume and nova-compute servers:
ceph auth get-or-create client.cinder | sudo tee /etc/ceph/ceph.client.cinder.keyring
sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
# on the glance-api server:
ceph auth get-or-create client.glance | sudo tee /etc/ceph/ceph.client.glance.keyring
sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
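The client.cinder-backup keyring created above can be installed in the same way (assuming cinder-backup runs under the cinder user):
# on the cinder-backup server:
ceph auth get-or-create client.cinder-backup | sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring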
On the nova-compute nodes you need to store the secret key of the client.cinder user in libvirt:
ceph auth get-key client.cinder | tee client.cinder.key
Then, on the compute nodes, add the secret key to libvirt and remove the temporary copy of the key:
uuidgen
457eb676-33da-42ec-9a8c-9293d545c337
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key)
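Finally, remove the temporary copies of the key material:
rm client.cinder.key secret.xml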
On every compute node, edit /etc/nova/nova.conf and add the following lines in the [DEFAULT] section:
rbd_user=cinder
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
Restart nova-compute:
service nova-compute restart
OpenStack requires a driver to interact with Ceph block devices, and you must specify the pool name for the block device. Edit /etc/cinder/cinder.conf, enabling the new rbddriver backend in the [DEFAULT] section:
enabled_backends=...,rbddriver
[rbddriver]
volume_backend_name=RBD
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
rbd_max_clone_depth=5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user=cinder
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
Using crudini:
crudini --set /etc/cinder/cinder.conf rbddriver volume_backend_name RBD
crudini --set /etc/cinder/cinder.conf rbddriver volume_driver cinder.volume.drivers.rbd.RBDDriver
crudini --set /etc/cinder/cinder.conf rbddriver rbd_pool volumes
crudini --set /etc/cinder/cinder.conf rbddriver rbd_ceph_conf /etc/ceph/ceph.conf
crudini --set /etc/cinder/cinder.conf rbddriver rbd_flatten_volume_from_snapshot false
crudini --set /etc/cinder/cinder.conf rbddriver rbd_max_clone_depth 5
crudini --set /etc/cinder/cinder.conf rbddriver rbd_store_chunk_size 4
crudini --set /etc/cinder/cinder.conf rbddriver rados_connect_timeout -1
crudini --set /etc/cinder/cinder.conf rbddriver glance_api_version 2
crudini --set /etc/cinder/cinder.conf rbddriver rbd_user cinder
crudini --set /etc/cinder/cinder.conf rbddriver rbd_secret_uuid 457eb676-33da-42ec-9a8c-9293d545c337
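Then restart cinder-volume so the new backend is loaded (service name as packaged on Ubuntu):
sudo service cinder-volume restart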
In order to boot virtual machines directly from Ceph, you must configure the ephemeral backend for Nova.
Edit /etc/nova/nova.conf
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
disk_cachemodes="network=writeback"
hw_disk_discard = unmap # enable discard support
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
Using crudini:
export RBD_SECRET_UUID=457eb676-33da-42ec-9a8c-9293d545c337 # the UUID generated above
crudini --set /etc/nova/nova.conf libvirt images_type rbd
crudini --set /etc/nova/nova.conf libvirt images_rbd_pool vms
crudini --set /etc/nova/nova.conf libvirt images_rbd_ceph_conf /etc/ceph/ceph.conf
crudini --set /etc/nova/nova.conf libvirt rbd_user cinder
crudini --set /etc/nova/nova.conf libvirt rbd_secret_uuid $RBD_SECRET_UUID
crudini --set /etc/nova/nova.conf libvirt disk_cachemodes "network=writeback"
crudini --set /etc/nova/nova.conf libvirt inject_password false
crudini --set /etc/nova/nova.conf libvirt inject_key false
crudini --set /etc/nova/nova.conf libvirt inject_partition -2
crudini --set /etc/nova/nova.conf libvirt hw_disk_discard unmap
crudini --set /etc/nova/nova.conf libvirt live_migration_flag "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
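As before, restart nova-compute so the [libvirt] settings take effect:
sudo service nova-compute restart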
Edit /etc/glance/glance-api.conf:
[DEFAULT]
...
default_store = rbd
...
[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
Using crudini:
crudini --set /etc/glance/glance-api.conf glance_store default_store rbd
crudini --set /etc/glance/glance-api.conf glance_store stores rbd
crudini --set /etc/glance/glance-api.conf glance_store rbd_store_pool images
crudini --set /etc/glance/glance-api.conf glance_store rbd_store_user glance
crudini --set /etc/glance/glance-api.conf glance_store rbd_store_ceph_conf /etc/ceph/ceph.conf
crudini --set /etc/glance/glance-api.conf glance_store rbd_store_chunk_size 8
If you want to enable copy-on-write cloning of images, also add under the [DEFAULT] section:
show_image_direct_url = True
Disable the Glance cache management to avoid images being cached under /var/lib/glance/image-cache/. Assuming your configuration file has flavor = keystone+cachemanagement, change it to:
[paste_deploy]
flavor = keystone
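Then restart glance-api:
sudo service glance-api restart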
Recommended properties for Glance images:
- hw_scsi_model=virtio-scsi: add the virtio-scsi controller for better performance and discard support
- hw_disk_bus=scsi: connect every Cinder block device to that controller
- hw_qemu_guest_agent=yes: enable the QEMU guest agent
- os_require_quiesce=yes: send fs-freeze/thaw calls through the QEMU guest agent
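These properties can be set per image; a sketch with the glance client of that era, where {image-id} is a placeholder:
glance image-update --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi --property hw_qemu_guest_agent=yes --property os_require_quiesce=yes {image-id}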