Ansible: gluster-brick-create switch from community modules to commands #46

Merged 7 commits on Mar 2, 2022
@@ -7,32 +7,45 @@ Given a set of physical devices, this role creates a volume group, a thin pool a
Role Variables
--------------

| Parameters | Required | Default | Choices | Description |
| ---------- | -------- | ------- | ------- | ----------- |
|disks |yes | | | List of physical devices on server. For example /dev/sdc
|disktype |yes | |raid10, raid6, raid5, jbod | Type of the disk configuration
|diskcount |no | 1 | |Number of data disks in RAID configuration. Required only in case of RAID disk type.
|stripesize |no | 256 | |Stripe size configured at RAID controller. Value should be in KB. Required only in case of RAID disk type.
|vgname |yes | | | Name of the volume group that the disk is added to. The Volume Group will be created if not already present
|size |yes | | | Size of thinpool to be created on the volume group. Size should contain the units. For example, 100GiB
|lvname |yes | | |Name of the Logical volume created using the physical disk(s).
|ssd |yes | | |Name of the ssd device.
|cache_lvname |yes | | |Name of the Logical Volume to be used for cache.
|cache_lvsize |yes | | |Size of the cache logical volume
|mntpath |yes | | |Path to mount the filesystem.
|wipefs |no | yes |yes/no |Whether to wipe the filesystem labels if present.
|fstype |no |xfs | |Type of filesystem to create.
| Parameters | Required | Default | Choices | Description |
| ----------------- | -------- | ------- | -------------------------------- | ----------- |
| disks | yes | | | List of physical devices on server. For example /dev/sdc
| disktype | yes | | raid10, raid6, raid5, raid0, jbod | Type of the disk configuration
| diskcount | no | 1 | | Number of data disks in RAID configuration. Required only in case of RAID disk type.
| stripesize | no | 256 | | Stripe size configured at RAID controller. Value should be in KB. Required only in case of RAID disk type.
| vgname | yes | | | Name of the volume group that the disk is added to. The Volume Group will be created if not already present
| size | yes | | | Size of thinpool to be created on the volume group. Size should contain the units. For example, 100G
| lvname | yes | | | Name of the Logical volume created using the physical disk(s).
| ssd | yes | | | Name of the ssd device.
| cache_lvname | yes | | | Name of the Logical Volume to be used for cache.
| cache_lvsize | yes | | | Size of the cache logical volume
| mntpath | yes | | | Path to mount the filesystem.
| pool_metadatasize | yes      |         |                                  | Size of the pool's metadata; should be between 2M and 16G (Example: 24M).
Member

Just make sure it shouldn't be 2MiB & 16GiB.
| wipefs | no | yes | yes/no | Whether to wipe the filesystem labels if present.
| fstype | no | xfs | | Type of filesystem to create.



Example Playbook to call the role
---------------------------------

```yaml
- hosts: servers
  remote_user: root
  roles:
    - glusterfs-brick-create
- hosts: servers
  remote_user: root
  vars:
    disks:
      - /dev/sdb
    disktype: none
    vgname: test_vg
    size: 3G
    lvname: lv_name
    ssd: /dev/sdc
    cache_lvname: cache
    cache_lvsize: 100M
    mntpath: /root/mount_test
    pool_metadatasize: 24M
  roles:
    - glusterfs-brick-create
```
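Assuming the playbook above is saved as brick.yml (an illustrative name, not part of the role) next to an inventory that defines the servers group, it can be run in the usual way:

```sh
# Hypothetical invocation; adjust the inventory path and playbook name to your layout.
ansible-playbook -i inventory brick.yml
```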

License
@@ -1,36 +1,24 @@
---
- name: Merging cache disk with other list of disks
  set_fact:
    disklist: "{{ (disks|from_yaml + [ssd]) | join(',') }}"

# rc 5 = Physical volume '/dev/name' is already in volume group
- name: Setup SSD for caching | Extend the Volume Group
  lvg:
    state: present
    vg: "{{ vgname }}"
    pvs: "{{ disklist }}"
    pv_options: "--dataalignment 256K"

- name: Setup SSD for caching | Change the attributes of the logical volume
  lvol:
    state: present
    vg: "{{ vgname }}"
    thinpool: "{{ lvname }}_pool"
    opts: " --zero n "
  command: "vgextend --dataalignment 256K {{ vgname }} {{ ssd }}"
  register: resp
  failed_when: resp.rc not in [0, 5]
  changed_when: resp.rc == 0
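This task shows the pattern used throughout the switch from modules to commands: the command's result is registered, and exit code 5, which per the comment above means the physical volume is already in the volume group, is accepted as an unchanged run rather than a failure. A rough manual equivalent, reusing the test_vg and /dev/sdc values from the README example, would be:

```sh
# The first run extends the VG and exits 0; re-running it exits 5 because the PV
# is already in the VG, which the task's failed_when list treats as success.
vgextend --dataalignment 256K test_vg /dev/sdc
echo $?
```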

# rc 5 = Logical Volume 'name' already exists in volume group.
- name: Setup SSD for caching | Create LV for cache
  lvol:
    state: present
    vg: "{{ vgname }}"
    lv: "{{ cache_lvname }}"
    size: "{{ cache_lvsize }}"
  command: "lvcreate -L {{ cache_lvsize }} -n {{ cache_lvname }} {{ vgname }}"
  register: resp
  failed_when: resp.rc not in [0, 5]
  changed_when: resp.rc == 0

- name: Setup SSD for caching | Create metadata LV for cache
  lvol:
    state: present
    vg: "{{ vgname }}"
    lv: "{{ cache_meta_lv }}"
    size: "{{ cache_meta_lvsize }}"
  command: "lvcreate -L {{ cache_meta_lvsize }} -n {{ cache_meta_lv }} {{ vgname }}"
  when: cache_meta_lv is defined and cache_meta_lv != ' '
  register: resp
  failed_when: resp.rc not in [0, 5]
  changed_when: resp.rc == 0

- name: Setup SSD for caching | Convert logical volume to a cache pool LV
  command: >
@@ -39,7 +27,11 @@
    --cachemode {{ cachemode | default('writethrough') }}
    "/dev/{{ vgname }}/{{ cache_lvname }}"
  when: cache_meta_lv is defined and cache_meta_lv != ' '
  register: resp
  failed_when: resp.rc not in [0, 5]
  changed_when: resp.rc == 0

# rc 5 = Command on LV name/cache does not accept LV type cachepool.
# It is valid not to have cachemetalvname! Writing a separate task not to
# complicate things.
- name: Setup SSD for caching | Convert logical volume to a cache pool LV without cachemetalvname
@@ -49,11 +41,17 @@
    --cachemode {{ cachemode | default('writethrough') }}
    "/dev/{{ vgname }}/{{ cache_lvname }}"
  when: cache_meta_lv is not defined
  register: resp
  failed_when: resp.rc not in [0, 5]
  changed_when: resp.rc == 0

# Run lvs -a -o +devices to see the cache settings
- name: Setup SSD for caching | Convert an existing logical volume to a cache LV
  command: >
    lvconvert -y --type cache --cachepool "/dev/{{ vgname }}/{{ cache_lvname }}"
    "/dev/{{ vgname }}/{{ lvname }}_pool"
  register: resp
  failed_when: resp.rc not in [0, 5]
  changed_when: resp.rc == 0
  tags:
    - skip_ansible_lint
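As the comment above suggests, the cache wiring can be inspected by hand once the role has run; a minimal check, assuming the test_vg name from the README example:

```sh
# Lists all LVs, including hidden cache-pool internals, with their backing devices.
lvs -a -o +devices test_vg
```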
@@ -1,46 +1,38 @@
---
# rc 1 = Device or resource busy
- name: Clean up filesystem signature
  command: wipefs -a {{ item }}
  with_items: "{{ disks | default([]) }}"
  when: wipefs == 'yes' and item is defined
  ignore_errors: yes
  when: wipefs == 'yes' and item is defined
  register: resp
  failed_when: resp.rc not in [0, 1]

# Set data alignment for JBODs, by default it is 256K. This set_fact is not
# needed if we can always assume 256K for JBOD, however we provide this extra
# variable to override it.
- name: Set PV data alignment for JBOD
  set_fact:
    pv_dataalign: "{{ gluster_infra_dalign | default('256K') }}"
  when: disktype == 'NONE' or disktype == 'RAID0'
  when: disktype|upper in ['NONE', 'RAID0']

# Set data alignment for RAID
# We need KiB: ensure to keep the trailing `K' in the pv_dataalign calculation.
- name: Set PV data alignment for RAID
  set_fact:
    pv_dataalign: >
      {{ diskcount|int *
      stripesize|int }}K
  when: >
    disktype == 'RAID6' or
    disktype == 'RAID10'
    pv_dataalign: "{{ diskcount|int * stripesize|int }}K"
  when: disktype|upper in ['RAID6', 'RAID10']

- name: Set VG physical extent size for RAID
  set_fact:
    vg_pesize: >
      {{ diskcount|int *
      stripesize|int }}K
  when: >
    disktype == 'RAID6' or
    disktype == 'RAID10'
    vg_pesize: "{{ diskcount|int * stripesize|int }}K"
  when: disktype|upper in ['RAID6', 'RAID10']

# rc 3 = already exists in filesystem
- name: Create volume groups
  lvg:
    state: present
    vg: "{{ vgname }}"
    pvs: "{{ disks }}"
    pv_options: "--dataalignment {{ pv_dataalign }}"
    # pesize is 4m by default for JBODs
    pesize: "{{ vg_pesize | default(4) }}"
  command: "vgcreate --dataalignment {{ pv_dataalign }} -s {{ vg_pesize | default(4) }} {{ vgname }} {{ disks | join(' ') }}"
  register: resp
  failed_when: resp.rc not in [0, 3]
  changed_when: resp.rc == 0
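To make the alignment arithmetic concrete: for a hypothetical RAID6 layout with diskcount=10 and stripesize=256, both pv_dataalign and vg_pesize evaluate to 10 * 256 = 2560K, so the command above would render roughly as follows (vgname and disk path borrowed from the README example):

```sh
# Illustrative rendering only; real values depend on the RAID controller layout.
vgcreate --dataalignment 2560K -s 2560K test_vg /dev/sdb
```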

# Chunksize is calculated as follows for GlusterFS' optimal performance.
# RAID6:
@@ -51,50 +43,38 @@
#
- name: Calculate chunksize for RAID6/RAID10
  set_fact:
    lv_chunksize: >
      {{ stripesize|int *
      diskcount|int }}K
  when: >
    disktype == 'RAID6' or
    disktype == 'RAID10'
    lv_chunksize: "{{ stripesize|int * diskcount|int }}K"
  when: disktype|upper in ['RAID6', 'RAID10']

# For JBOD the thin pool chunk size is set to 256 KiB.
- name: Set chunksize for JBOD
  set_fact:
    lv_chunksize: '256K'
  when: disktype == 'NONE' or disktype == 'RAID0'
    lv_chunksize: '256K'
  when: disktype|upper in ['NONE', 'RAID0']

# rc 5 = Logical Volume 'name' already exists in volume group.
- name: Create a LV thinpool
  lvol:
    state: present
    shrink: false
    vg: "{{ vgname }}"
    thinpool: "{{ lvname }}_pool"
    size: 100%FREE
    opts: " --chunksize {{ lv_chunksize }}
           --poolmetadatasize {{ pool_metadatasize }}
           --zero n"
  command: "lvcreate -l 100%FREE --chunksize {{ lv_chunksize }} --poolmetadatasize {{ pool_metadatasize }} --zero n --type thin-pool --thinpool {{ lvname }}_pool {{ vgname }}"
  register: resp
  failed_when: resp.rc not in [0, 5]
  changed_when: resp.rc == 0
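For the JBOD case in the README example playbook (lv_chunksize of 256K from the task above, pool_metadatasize of 24M, lvname lv_name, vgname test_vg), the thin-pool command renders roughly as:

```sh
# Illustrative rendering with the README example values.
lvcreate -l 100%FREE --chunksize 256K --poolmetadatasize 24M --zero n \
  --type thin-pool --thinpool lv_name_pool test_vg
```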

# rc 5 = Logical Volume 'name' already exists in volume group.
- name: Create thin logical volume
  lvol:
    state: present
    vg: "{{ vgname }}"
    shrink: no
    lv: "{{ lvname }}"
    size: "{{ size }}"
    thinpool: "{{ lvname }}_pool"
  command: "lvcreate -T {{ vgname }}/{{ lvname }}_pool -V {{ size }} -n {{ lvname }}"
  register: resp
  failed_when: resp.rc not in [0, 5]
  changed_when: resp.rc == 0

- include_tasks: lvmcache.yml
  when: ssd is defined and ssd

# rc 1 = Filesystem already exists
- name: Create an xfs filesystem
  filesystem:
    fstype: "{{ fstype }}"
    dev: "/dev/{{ vgname }}/{{ lvname }}"
    opts: "{{ fsopts }}{{ raidopts }}"
  vars:
    fsopts: "-f -K -i size=512 -n size=8192"
    raidopts: "{% if 'raid' in disktype %} -d sw={{ diskcount }},su={{ stripesize }}k {% endif %}"
  command: "mkfs.xfs -f -K -i size=512 -n size=8192 {% if 'raid' in disktype %} -d sw={{ diskcount }},su={{ stripesize }}k {% endif %} /dev/{{ vgname }}/{{ lvname }}"
  register: resp
  failed_when: resp.rc not in [0, 1]
  changed_when: resp.rc == 0
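Since the RAID geometry options are added only when the string 'raid' appears in disktype, the rendered command differs per layout. Two illustrative renderings, reusing the README example names plus a hypothetical raid6 layout with diskcount=10 and stripesize=256:

```sh
# JBOD (disktype: none): no RAID geometry options are added.
mkfs.xfs -f -K -i size=512 -n size=8192 /dev/test_vg/lv_name
# raid6 with 10 data disks and a 256 KB stripe; sw/su describe the RAID geometry to XFS.
mkfs.xfs -f -K -i size=512 -n size=8192 -d sw=10,su=256k /dev/test_vg/lv_name
```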

- name: Create the backend directory, skips if present
  file:
@@ -110,8 +90,6 @@
    state: mounted

- name: Set SELinux labels on the bricks
  sefcontext:
    target: "{{ mntpath }}"
    setype: glusterd_brick_t
    state: present
    reload: yes
  command: "chcon -t glusterd_brick_t {{ mntpath }}"
  register: resp
  changed_when: resp.rc == 0
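Worth noting: the removed sefcontext task recorded a persistent SELinux file-context mapping, whereas chcon only relabels the path in place. A rough shell equivalent of the old persistent behaviour, assuming the /root/mount_test path from the README example, would be:

```sh
# Record a persistent context mapping for the brick path, then apply it.
semanage fcontext -a -t glusterd_brick_t "/root/mount_test(/.*)?"
restorecon -Rv /root/mount_test
```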