
Setup local ceph cluster

  1. Create a new project with all settings left at their defaults. Choose a descriptive name such as ceph-cluster-demo.
  2. Inside your new project, create a storage pool to be used by the ceph cluster.
    1. Set the name of the storage pool to ceph-cluster-pool
    2. Select ZFS for the driver option
    3. Set storage pool size to 100GiB
  3. Create 3 custom storage volumes inside the storage pool you just created.
    1. Set the name of the volume to remote1
    2. Set the volume size to 20GiB
    3. Set the content type to block
    4. Accept default for all other settings
    5. Create the volume, then repeat steps 1 to 4 to create two more custom storage volumes, remote2 and remote3
  4. Create a managed network for communication between the ceph cluster nodes
    1. Set the network type to Bridge
    2. Set the network name to ceph-network
    3. Accept default for all other settings
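    If you prefer the command line, steps 1 to 4 above can also be performed with the lxc CLI. The following is a rough sketch using the same names as above; depending on your project's feature settings you may need to add --project ceph-cluster-demo to the volume and network commands:
      # Create the project, the ZFS storage pool, the three block volumes and the bridge network
      lxc project create ceph-cluster-demo
      lxc storage create ceph-cluster-pool zfs size=100GiB
      lxc storage volume create ceph-cluster-pool remote1 --type=block size=20GiB
      lxc storage volume create ceph-cluster-pool remote2 --type=block size=20GiB
      lxc storage volume create ceph-cluster-pool remote3 --type=block size=20GiB
      lxc network create ceph-network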
  5. Create 3 LXD VM instances to host the ceph cluster. For each instance:
    1. Set the instance name to ceph-node-[instance-number] i.e. ceph-node-1, ceph-node-2, ceph-node-3
    2. Select the Ubuntu 22.04 LTS (jammy) base image for the instance (non-minimal)
    3. Set the instance type to VM
    4. Inside the Disk devices advanced settings tab
      1. Choose ceph-cluster-pool as the root storage for the instance. Leave the Size input empty.
      2. Attach a disk device. Select a custom storage volume from ceph-cluster-pool that you just created. e.g. remote1 volume for ceph-node-1 instance
    5. Inside the Network devices advanced settings tab, create a network device.
      1. Set the Network to ceph-network
      2. Set the device name to eth0
    6. Inside the Resource limits advanced settings tab
      1. Set the Exposed CPU limit to 2
      2. Set memory limit to 2GB
    7. Create the instance without starting it. Repeat steps 1 to 6 for ceph-node-2 and ceph-node-3.
  6. Start all 3 instances created in step 5.
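    Steps 5 and 6 can likewise be scripted with the lxc CLI. Below is a minimal sketch for the first node, assuming the standard ubuntu: image remote and that the instances live in the ceph-cluster-demo project; repeat with remote2/ceph-node-2 and remote3/ceph-node-3:
      # Create the VM with its root disk on ceph-cluster-pool and the resource limits from step 5
      lxc init ubuntu:22.04 ceph-node-1 --vm --project ceph-cluster-demo \
        --storage ceph-cluster-pool -c limits.cpu=2 -c limits.memory=2GB
      # Attach the custom block volume and the bridge network device
      lxc storage volume attach ceph-cluster-pool remote1 ceph-node-1 --project ceph-cluster-demo
      lxc config device add ceph-node-1 eth0 nic network=ceph-network --project ceph-cluster-demo
      # Start the instance once all three nodes have been created
      lxc start ceph-node-1 --project ceph-cluster-demo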
  7. Inside each VM instance, install the microceph snap, which will be used later to deploy the ceph cluster. For each instance:
    1. Start a terminal session for that instance
    2. Inside the terminal, enter snap install microceph and wait for the installation to complete
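    If you would rather not open three terminal sessions for this step, the same installation can be run from the host with lxc exec, for example:
      # Install the microceph snap inside each VM from the host
      for node in ceph-node-1 ceph-node-2 ceph-node-3; do
        lxc exec "$node" --project ceph-cluster-demo -- snap install microceph
      done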
  8. Set up the ceph cluster with the following steps, taking care to enter commands in the correct instance terminal sessions:
    1. Inside the terminal session for the ceph-node-1 instance, enter microceph init.
      1. Accept default for the listening address on ceph-node-1
      2. Enter yes to create a new ceph cluster
      3. Accept default for the system name i.e. ceph-node-1
      4. Enter yes to add additional servers to the ceph cluster
      5. Enter ceph-node-2 for the name of the additional server. This will return a token; take note of it, as you will use it later when setting up the ceph cluster on the ceph-node-2 instance.
      6. Enter yes again to add another server.
      7. Enter ceph-node-3 for the name of the additional server. This will return a token; take note of it, as you will use it later when setting up the ceph cluster on the ceph-node-3 instance.
      8. Press Enter without entering a value to continue the setup process.
      9. Accept default to add a local disk. This will result in the following terminal output:
      Available unpartitioned disks on this system:
      +---------------+----------+------+--------------------------------------------------------+
      |     MODEL     | CAPACITY | TYPE |                          PATH                          |
      +---------------+----------+------+--------------------------------------------------------+
      | QEMU HARDDISK | 20.00GiB | scsi | /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_volume--1 |
      +---------------+----------+------+--------------------------------------------------------+
      
      10. Enter the detected PATH of the disk from the output above and confirm.
      11. Accept default to not wipe the local disk, as we just created the storage volume in step 3.
      12. Accept default to not encrypt the local disk.
      13. Press Enter without entering a value to complete the ceph setup on ceph-node-1
    2. Inside the terminal session for the ceph-node-2 instance, enter microceph init.
      1. Accept default for the listening address on ceph-node-2
      2. Accept default to not create a new cluster
      3. Get the token generated from step 8.1.5 and paste into the terminal to add ceph-node-2 to the ceph cluster
      4. Accept default to add a local disk. This will result in the following terminal output:
      Available unpartitioned disks on this system:
      +---------------+----------+------+--------------------------------------------------------+
      |     MODEL     | CAPACITY | TYPE |                          PATH                          |
      +---------------+----------+------+--------------------------------------------------------+
      | QEMU HARDDISK | 20.00GiB | scsi | /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_volume--1 |
      +---------------+----------+------+--------------------------------------------------------+
      
      5. Enter the detected PATH of the disk from the output above and confirm.
      6. Accept default to not wipe the local disk, as we just created the storage volume in step 3.
      7. Accept default to not encrypt the local disk.
      8. Press Enter without entering a value to complete the ceph setup on ceph-node-2
    3. Inside the terminal session for the ceph-node-3 instance, enter microceph init.
      1. Accept default for the listening address on ceph-node-3
      2. Accept default to not create a new cluster
      3. Get the token generated from step 8.1.7 and paste into the terminal to add ceph-node-3 to the ceph cluster
      4. Accept default to add a local disk. This will result in the following terminal output:
      Available unpartitioned disks on this system:
      +---------------+----------+------+--------------------------------------------------------+
      |     MODEL     | CAPACITY | TYPE |                          PATH                          |
      +---------------+----------+------+--------------------------------------------------------+
      | QEMU HARDDISK | 20.00GiB | scsi | /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_volume--1 |
      +---------------+----------+------+--------------------------------------------------------+
      
      5. Enter the detected PATH of the disk from the output above and confirm.
      6. Accept default to not wipe the local disk, as we just created the storage volume in step 3.
      7. Accept default to not encrypt the local disk.
      8. Press Enter without entering a value to complete the ceph setup on ceph-node-3
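    MicroCeph also exposes the same operations as non-interactive subcommands. A rough equivalent of step 8, assuming the microceph cluster and microceph disk subcommands available in current MicroCeph releases, looks like this:
      # On ceph-node-1: bootstrap the cluster and generate join tokens for the other nodes
      microceph cluster bootstrap
      microceph cluster add ceph-node-2   # prints a join token for ceph-node-2
      microceph cluster add ceph-node-3   # prints a join token for ceph-node-3
      microceph disk add /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_volume--1
      # On ceph-node-2 and ceph-node-3: join with the matching token, then add the local disk
      microceph cluster join <token>
      microceph disk add /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_volume--1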
  9. Confirm that the ceph cluster is now up and running. In the terminal session for ceph-node-1, enter microceph.ceph status. This should display output similar to that shown below:
root@ceph-node-1:~# microceph.ceph status
  cluster:
    id:     ea999901-40d2-4cf8-9e8c-b3ae2db8cba4
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-node-1,ceph-node-2,ceph-node-3 (age 99s)
    mgr: ceph-node-1(active, since 49m), standbys: ceph-node-2, ceph-node-3
    osd: 3 osds: 3 up (since 31s), 3 in (since 33s)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   66 MiB used, 60 GiB / 60 GiB avail
    pgs:     1 active+clean
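  In addition to the ceph status output above, MicroCeph provides its own summary view. Running the following on any node should list all three nodes, their services and one disk each:
    # Quick MicroCeph-level view of cluster membership, services and disks
    microceph status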
  10. Now that the ceph cluster is up and running, the last task is to integrate it with the LXD server running on your host machine.
    1. In the terminal session for ceph-node-1, run the following command to show the content for the ceph.conf file:
      cat /var/snap/microceph/current/conf/ceph.conf
      
      The output should look similar to that shown below:
      root@ceph-node-1:~# cat /var/snap/microceph/current/conf/ceph.conf
      # # Generated by MicroCeph, DO NOT EDIT.
      [global]
      run dir = /var/snap/microceph/793/run
      fsid = ea999901-40d2-4cf8-9e8c-b3ae2db8cba4
      mon host = 10.73.14.211,10.73.14.54,10.73.14.30
      auth allow insecure global id reclaim = false
      public addr = 10.73.14.211
      ms bind ipv4 = true
      ms bind ipv6 = false

      [client]
      
    2. On your host machine, create the /etc/ceph directory and, inside it, create the ceph.conf file. Copy the terminal output from the previous step and paste it into this file.
    3. In the terminal session for ceph-node-1, run the following command to show the content for the ceph.client.admin.keyring file:
      cat /var/snap/microceph/current/conf/ceph.client.admin.keyring
      
      The output should look similar to that shown below:
      root@ceph-node-1:~# cat /var/snap/microceph/current/conf/ceph.client.admin.keyring
      [client.admin]
              key = AQD7EKVlHhzUFRAAncX4LpBlu8iICPIiXqTQ/g==
              caps mds = "allow *"
              caps mgr = "allow *"
              caps mon = "allow *"
              caps osd = "allow *"
      
    4. On your host machine, create the ceph.client.admin.keyring file inside /etc/ceph. Copy the terminal output generated from the previous step and paste it into this file.
    5. Give LXD permission to access the /etc/ceph directory by running the following command:
      sudo chgrp lxd -R /etc/ceph
      
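    Instead of copying and pasting the two files by hand (steps 2 and 4), you can pull them from the VM with lxc file pull. A minimal sketch, using the paths shown above:
      # Copy the MicroCeph config and keyring from ceph-node-1 to the host's /etc/ceph
      sudo mkdir -p /etc/ceph
      lxc file pull ceph-node-1/var/snap/microceph/current/conf/ceph.conf . --project ceph-cluster-demo
      lxc file pull ceph-node-1/var/snap/microceph/current/conf/ceph.client.admin.keyring . --project ceph-cluster-demo
      sudo mv ceph.conf ceph.client.admin.keyring /etc/ceph/
      sudo chgrp -R lxd /etc/ceph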
  11. Lastly, confirm that you can create a storage pool backed by the ceph cluster, either with the CLI command lxc storage create [pool-name] ceph or by using the UI to create a storage pool with the ceph driver type.
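  A minimal check from the CLI could look like this (the pool name is just an example):
    # Create a pool backed by the ceph cluster and inspect it
    lxc storage create demo-ceph-pool ceph
    lxc storage info demo-ceph-pool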