add CentOS stream 9 support #7432

Merged
merged 20 commits on Feb 14, 2024
Changes from all commits
4 changes: 2 additions & 2 deletions .github/workflows/ansible-lint.yml
@@ -8,9 +8,9 @@ jobs:
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: '3.8'
python-version: '3.10'
architecture: x64
- run: pip install -r <(grep ansible tests/requirements.txt) ansible-lint==4.3.7 'rich>=9.5.1,<11.0.0' netaddr
- run: pip install -r <(grep ansible tests/requirements.txt) ansible-lint==6.16.0 netaddr
- run: ansible-galaxy install -r requirements.yml
- run: ansible-lint -x 106,204,205,208 -v --force-color ./roles/*/ ./infrastructure-playbooks/*.yml site-container.yml.sample dashboard.yml
- run: ansible-playbook -i ./tests/functional/all_daemons/hosts site.yml.sample --syntax-check --list-tasks -vv
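For context on the updated install step: the <(grep ansible tests/requirements.txt) process substitution feeds only the ansible pin from the shared test requirements to pip, so the lint job tracks whatever ansible version the test suite uses while pinning ansible-lint itself. A rough equivalent without process substitution, as a sketch only (the step name and temp path are illustrative, not from this PR):

- name: Install lint dependencies (illustrative equivalent)
  run: |
    # keep only the ansible requirement line(s) from the shared requirements file
    grep ansible tests/requirements.txt > /tmp/ansible-requirement.txt
    pip install -r /tmp/ansible-requirement.txt ansible-lint==6.16.0 netaddr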
2 changes: 1 addition & 1 deletion .github/workflows/flake8.yml
@@ -18,7 +18,7 @@ jobs:
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: 3.8
python-version: '3.10'
architecture: x64
- run: pip install flake8
- run: flake8 --max-line-length 160 ./library/ ./module_utils/ ./plugins/filter/ ./tests/library/ ./tests/module_utils/ ./tests/plugins/filter/ ./tests/conftest.py ./tests/functional/tests/
2 changes: 1 addition & 1 deletion .github/workflows/pytest.yml
@@ -13,7 +13,7 @@
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.8, 3.9]
python-version: '3.10'
name: Python ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v2
4 changes: 2 additions & 2 deletions ceph-ansible.spec.in
@@ -15,8 +15,8 @@ Obsoletes: ceph-iscsi-ansible <= 1.5

BuildArch: noarch

BuildRequires: ansible >= 2.9
Requires: ansible >= 2.9
BuildRequires: ansible-core >= 2.14
Requires: ansible-core >= 2.14

%if 0%{?rhel} == 7
BuildRequires: python2-devel
25 changes: 14 additions & 11 deletions dashboard.yml
@@ -1,5 +1,5 @@
---
[ansible-lint failure, line 2: name[play], all plays should be named]
- hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
- "{{ mds_group_name|default('mdss') }}"
@@ -12,7 +12,11 @@
gather_facts: false
become: true
pre_tasks:
[ansible-lint failures, line 15: fqcn[action-core], use FQCN for builtin module actions (import_role); name[missing], all tasks should be named]
- import_role:
name: ceph-defaults
tags: ['ceph_update_config']

[ansible-lint failures, line 19: fqcn[action-core], use FQCN for set_stats; name[casing], names should start with an uppercase letter; run-once[task], run_once may behave differently if strategy is set to free]
- name: set ceph node exporter install 'In Progress'
run_once: true
set_stats:
data:
@@ -21,13 +25,10 @@
start: "{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}"

tasks:
- import_role:
name: ceph-defaults
tags: ['ceph_update_config']
[ansible-lint failures, line 28: fqcn[action-core], use FQCN for import_role; name[missing], all tasks should be named]
- import_role:
name: ceph-facts
tags: ['ceph_update_config']
[ansible-lint failures, line 31: fqcn[action-core], use FQCN for import_role; name[missing], all tasks should be named]
- import_role:
name: ceph-container-engine
- import_role:
name: ceph-container-common
@@ -47,10 +48,14 @@
status: "Complete"
end: "{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}"

- hosts: "{{ monitoring_group_name }}"
- hosts: "{{ monitoring_group_name | default('monitoring') }}"
gather_facts: false
become: true
pre_tasks:
- import_role:
name: ceph-defaults
tags: ['ceph_update_config']

- name: set ceph grafana install 'In Progress'
run_once: true
set_stats:
Expand All @@ -60,9 +65,6 @@
start: "{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}"

tasks:
- import_role:
name: ceph-defaults
tags: ['ceph_update_config']
- import_role:
name: ceph-facts
tags: ['ceph_update_config']
@@ -86,10 +88,14 @@

# using groups[] here otherwise it can't fallback to the mon if there's no mgr group.
# adding an additional | default(omit) in case where no monitors are present (external ceph cluster)
- hosts: "{{ groups[mgr_group_name] | default(groups[mon_group_name]) | default(omit) }}"
- hosts: "{{ groups[mgr_group_name|default('mgrs')] | default(groups[mon_group_name|default('mons')]) | default(omit) }}"
gather_facts: false
become: true
pre_tasks:
- import_role:
name: ceph-defaults
tags: ['ceph_update_config']

- name: set ceph dashboard install 'In Progress'
run_once: true
set_stats:
Expand All @@ -99,9 +105,6 @@
start: "{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}"

tasks:
- import_role:
name: ceph-defaults
tags: ['ceph_update_config']
- import_role:
name: ceph-facts
tags: ['ceph_update_config']
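The fqcn[action-core] and name[missing] annotations above show what newer ansible-lint expects from these imports. A minimal sketch of the lint-clean form (the task name is illustrative, not taken from this PR):

pre_tasks:
  - name: Import ceph-defaults role
    ansible.builtin.import_role:
      name: ceph-defaults
    tags: ['ceph_update_config']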
9 changes: 0 additions & 9 deletions group_vars/rgws.yml.sample
@@ -23,15 +23,6 @@ dummy:
# TUNING #
##########

# To support buckets with a very large number of objects it's
# important to split them into shards. We suggest about 100K
# objects per shard as a conservative maximum.
#rgw_override_bucket_index_max_shards: 16

# Consider setting a quota on buckets so that exceeding this
# limit will require admin intervention.
#rgw_bucket_default_quota_max_objects: 1638400 # i.e., 100K * 16

# Declaring rgw_create_pools will create pools with the given number of pgs,
# size, and type. The following are some important notes on this automatic
# pool creation:
2 changes: 1 addition & 1 deletion infrastructure-playbooks/cephadm-adopt.yml
@@ -97,7 +97,7 @@
- (health_detail.stdout | default('{}', True) | from_json)['status'] == "HEALTH_WARN"
- "'POOL_APP_NOT_ENABLED' in (health_detail.stdout | default('{}', True) | from_json)['checks']"

[ansible-lint warning, line 100: jinja[spacing], prefer grafana_server_group_name | default('grafana-server') with spaces around the pipe]
- import_role:
name: ceph-facts
tasks_from: convert_grafana_server_group_name.yml
when: groups.get((grafana_server_group_name|default('grafana-server')), []) | length > 0
@@ -486,7 +486,7 @@

- name: set_fact mirror_peer_found
set_fact:
mirror_peer_uuid: "{{ ((mirror_pool_info.stdout | default('{}') | from_json)['peers'] | selectattr('site_name', 'match', '^'+ceph_rbd_mirror_remote_cluster+'$') | map(attribute='uuid') | list) }}"

[ansible-lint warning, line 489: jinja[spacing], prefer '^' + ceph_rbd_mirror_remote_cluster + '$' with spaces around the concatenation]

- name: remove current rbd mirror peer, add new peer into mon config store
when: mirror_peer_uuid | length > 0
@@ -511,7 +511,7 @@
loop: "{{ (quorum_status.stdout | default('{}') | from_json)['monmap']['mons'] }}"
run_once: true

[ansible-lint warning, line 514: jinja[spacing], same '^' + ceph_rbd_mirror_remote_cluster + '$' spacing suggestion]
- name: remove current mirror peer
command: "{{ admin_rbd_cmd }} mirror pool peer remove {{ ceph_rbd_mirror_pool }} {{ ((mirror_pool_info.stdout | default('{}') | from_json)['peers'] | selectattr('site_name', 'match', '^'+ceph_rbd_mirror_remote_cluster+'$') | map(attribute='uuid') | list)[0] }}"
delegate_to: "{{ groups.get(mon_group_name | default('mons'))[0] }}"
changed_when: false
@@ -594,7 +594,7 @@
CEPHADM_IMAGE: '{{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}'

- name: adopt ceph mgr daemons
hosts: "{{ groups[mgr_group_name] | default(groups[mon_group_name]) }}"
hosts: "{{ groups['mgrs'] | default(groups['mons']) | default(omit) }}"
serial: 1
become: true
gather_facts: false
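The jinja[spacing] warnings are stylistic: ansible-lint 6 asks for spaces around Jinja2 filter pipes and operators. A small before/after illustration using one of the flagged expressions:

# flagged form
when: groups.get((grafana_server_group_name|default('grafana-server')), []) | length > 0
# preferred form
when: groups.get((grafana_server_group_name | default('grafana-server')), []) | length > 0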
8 changes: 0 additions & 8 deletions infrastructure-playbooks/purge-cluster.yml
@@ -512,7 +512,7 @@

[ansible-lint warnings, line 515: jinja[spacing] on the item.data_vg, item.db, item.db_vg, item.journal, item.journal_vg, item.wal and item.wal_vg defaults; preferred form is item.data_vg | default(omit) and so on]
- name: zap and destroy osds created by ceph-volume with lvm_volumes
ceph_volume:
data: "{{ item.data }}"
data_vg: "{{ item.data_vg|default(omit) }}"
journal: "{{ item.journal|default(omit) }}"
journal_vg: "{{ item.journal_vg|default(omit) }}"
@@ -1000,13 +1000,9 @@

- name: remove package dependencies on redhat
command: yum -y autoremove
args:
warn: no

- name: remove package dependencies on redhat again
command: yum -y autoremove
args:
warn: no
when:
ansible_facts['pkg_mgr'] == "yum"

@@ -1019,13 +1015,9 @@

- name: remove package dependencies on redhat
command: dnf -y autoremove
args:
warn: no

- name: remove package dependencies on redhat again
command: dnf -y autoremove
args:
warn: no
when:
ansible_facts['pkg_mgr'] == "dnf"
when:
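The dropped args: warn: no blocks follow from the ansible-core bump in this PR: the command module's warn parameter was deprecated and then removed in recent ansible-core releases (removal in ansible-core 2.14 is my understanding, not stated in the diff), so the autoremove tasks reduce to plain command calls, for example:

- name: remove package dependencies on redhat
  command: yum -y autoremove
  when: ansible_facts['pkg_mgr'] == "yum"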
19 changes: 11 additions & 8 deletions infrastructure-playbooks/rolling_update.yml
@@ -133,8 +133,8 @@

- name: check ceph release being deployed
fail:
msg: "This version of ceph-ansible is intended for upgrading to Ceph Reef only."
when: "'reef' not in ceph_version.stdout.split()"
msg: "This version of ceph-ansible is intended for upgrading to Ceph Squid only."
when: "'squid' not in ceph_version.stdout.split()"


- name: upgrade ceph mon cluster
@@ -148,6 +148,8 @@
become: True
gather_facts: false
tasks:
- import_role:
name: ceph-defaults
- name: upgrade ceph mon cluster
block:
- name: remove ceph aliases
@@ -169,8 +171,6 @@
set_fact:
mon_host: "{{ groups[mon_group_name] | difference([inventory_hostname]) | last }}"

- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts

@@ -305,6 +305,9 @@
delay: "{{ health_mon_check_delay }}"
when: containerized_deployment | bool
rescue:
- import_role:
name: ceph-defaults

- name: unmask the mon service
systemd:
name: ceph-mon@{{ ansible_facts['hostname'] }}
@@ -1056,15 +1059,15 @@
tasks_from: container_binary.yml

- name: container | disallow pre-reef OSDs and enable all new reef-only functionality
command: "{{ container_binary }} exec ceph-mon-{{ hostvars[groups[mon_group_name][0]]['ansible_facts']['hostname'] }} ceph --cluster {{ cluster }} osd require-osd-release reef"
command: "{{ container_binary }} exec ceph-mon-{{ hostvars[groups[mon_group_name][0]]['ansible_facts']['hostname'] }} ceph --cluster {{ cluster }} osd require-osd-release squid"
delegate_to: "{{ groups[mon_group_name][0] }}"
run_once: True
when:
- containerized_deployment | bool
- groups.get(mon_group_name, []) | length > 0

- name: non container | disallow pre-reef OSDs and enable all new reef-only functionality
command: "ceph --cluster {{ cluster }} osd require-osd-release reef"
command: "ceph --cluster {{ cluster }} osd require-osd-release squid"
delegate_to: "{{ groups[mon_group_name][0] }}"
run_once: True
when:
@@ -1112,7 +1115,7 @@
name: ceph-node-exporter

- name: upgrade monitoring node
hosts: "{{ monitoring_group_name }}"
hosts: "{{ monitoring_group_name|default('monitoring') }}"
tags: monitoring
gather_facts: false
become: true
@@ -1144,7 +1147,7 @@
name: ceph-grafana

- name: upgrade ceph dashboard
hosts: "{{ groups[mgr_group_name] | default(groups[mon_group_name]) | default(omit) }}"
hosts: "{{ groups[mgr_group_name|default('mgrs')] | default(groups[mon_group_name|default('mons')]) | default(omit) }}"
tags: monitoring
gather_facts: false
become: true
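As a follow-up to the require-osd-release change, a quick way to confirm the flag actually flipped to squid after the upgrade. This is a sketch, not part of the playbook, and it assumes the usual ceph osd dump output that includes a require_osd_release line:

- name: verify require-osd-release is squid (illustrative check)
  command: "ceph --cluster {{ cluster }} osd dump"
  register: osd_dump
  changed_when: false
  failed_when: "'require_osd_release squid' not in osd_dump.stdout"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true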
4 changes: 2 additions & 2 deletions infrastructure-playbooks/shrink-mds.yml
@@ -24,7 +24,7 @@
tasks_from: container_binary

- name: perform checks, remove mds and print cluster health
hosts: "{{ groups[mon_group_name][0] }}"
hosts: mons[0]
become: true
vars_prompt:
- name: ireallymeanit
@@ -165,4 +165,4 @@
post_tasks:
- name: show ceph health
command: "{{ container_exec_cmd | default('') }} ceph --cluster {{ cluster }} -s"
changed_when: false
changed_when: false
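The shrink playbooks also swap the templated hosts value for Ansible's host-pattern subscript: mons[0] selects the first host of the mons inventory group without any Jinja2 templating (it does assume the group is literally named mons). A minimal illustration of the pattern:

- name: run confirmation tasks on the first monitor only
  hosts: mons[0]
  gather_facts: false
  become: true
  tasks:
    - name: show which host was selected
      ansible.builtin.debug:
        msg: "Running on {{ inventory_hostname }}"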
4 changes: 2 additions & 2 deletions infrastructure-playbooks/shrink-mgr.yml
@@ -21,7 +21,7 @@
msg: gather facts on all Ceph hosts for following reference

- name: confirm if user really meant to remove manager from the ceph cluster
hosts: "{{ groups[mon_group_name][0] }}"
hosts: mons[0]
become: true
vars_prompt:
- name: ireallymeanit
@@ -130,4 +130,4 @@
post_tasks:
- name: show ceph health
command: "{{ container_exec_cmd | default('') }} ceph --cluster {{ cluster }} -s"
changed_when: false
changed_when: false
4 changes: 2 additions & 2 deletions infrastructure-playbooks/shrink-mon.yml
@@ -22,7 +22,7 @@
- debug: msg="gather facts on all Ceph hosts for following reference"

- name: confirm whether user really meant to remove monitor from the ceph cluster
hosts: "{{ groups[mon_group_name][0] }}"
hosts: mons[0]
become: true
vars_prompt:
- name: ireallymeanit
@@ -144,4 +144,4 @@
- name: show ceph mon status
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} mon stat"
delegate_to: "{{ mon_host }}"
changed_when: false
changed_when: false
6 changes: 3 additions & 3 deletions infrastructure-playbooks/shrink-osd.yml
@@ -14,16 +14,16 @@
- name: gather facts and check the init system

hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
- mons
- osds

become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"

- name: confirm whether user really meant to remove osd(s) from the cluster

hosts: "{{ groups[mon_group_name][0] }}"
hosts: mons[0]

become: true

6 changes: 3 additions & 3 deletions infrastructure-playbooks/shrink-rbdmirror.yml
@@ -13,16 +13,16 @@

- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ mon_group_name|default('rbdmirrors') }}"
- mons
- rbdmirrors
become: true
tasks:
- debug:
msg: gather facts on MONs and RBD mirrors

- name: confirm whether user really meant to remove rbd mirror from the ceph
cluster
hosts: "{{ groups[mon_group_name][0] }}"
hosts: mons[0]
become: true
vars_prompt:
- name: ireallymeanit