Commit

add osd node is fixed
TonyChengTW committed Apr 11, 2019
1 parent e3aa3ab commit f739e10
Showing 12 changed files with 308 additions and 32 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -17,3 +17,4 @@ ceph-ansible.spec
!.mergify.yml
!raw_install_python.yml
id_rsa
roles/ceph-fetch-keys_104/files/id_rsa
63 changes: 58 additions & 5 deletions README.rst
@@ -1,8 +1,61 @@
ceph-ansible
# ceph-ansible
============
Ansible playbooks for Ceph, the distributed filesystem.
## Installing the Ceph Cluster
The Ceph playbook is independent of the OpenStack playbook; you can download it from:
https://github.com/TonyChengTW/ceph-ansible

Please refer to our hosted documentation here: http://docs.ceph.com/ceph-ansible/master/
I've fixed some bugs in the original Ceph playbook; the fixes are here:
- https://github.com/TonyChengTW/ceph-ansible/commit/e3aa3abdee8131ac825416796441a696b3f45bdd
- https://github.com/TonyChengTW/ceph-ansible/commit/9b1ec8754544bc86db00f0d8cda612c56d9d6d7c

You can view documentation for our ``stable-*`` branches by substituting ``master`` in the link
above for the name of the branch. For example: http://docs.ceph.com/ceph-ansible/stable-3.0/
You can refer to the ceph-ansible README installation guide:
https://github.com/TonyChengTW/ceph-ansible/blob/master/README.rst

Or, for a quick start, run the Ceph ansible-playbook from the same virtualenv:
```
# cd /deploy_u18
# git clone https://github.com/TonyChengTW/ceph-ansible.git
# cd ceph-ansible
# ansible-playbook -i inventory-hosts site.yml
```
The main Ceph configuration files are located in:
- ceph-ansible/group_vars/all.yml
- ceph-ansible/group_vars/osds.yml
- inventory-hosts

- `all.yml` defines all required options, such as the cluster name, repo source, cephx, and config key path.
- `osds.yml` defines the OSD device names when all nodes have the same disks for OSDs.
- `inventory-hosts` defines the mon, osd, mgr, and client nodes, and can override OSD devices per node (a sketch of typical `all.yml`/`osds.yml` settings follows the example below).

For example:
```
[osds]
ceph-ctrl1 devices="['/dev/sdb']"
ceph-comp1 devices="['/dev/sdb', '/dev/sdc']"
ceph-comp2 devices="['/dev/sdb', '/dev/sdc']"
```
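
A minimal sketch of the kind of settings `all.yml` and `osds.yml` carry. The variable names follow upstream ceph-ansible; the values are illustrative assumptions, not the ones shipped in this repository:
```
# group_vars/all.yml (illustrative values)
cluster: ceph                  # cluster name
ceph_origin: repository        # repo source
ceph_repository: community
cephx: true                    # enable cephx authentication
public_network: 10.0.0.0/24    # assumption: matches the 10.0.0.x inventory addresses
monitor_interface: eth0        # assumption: depends on your nodes' NIC names

# group_vars/osds.yml (used when every node has the same OSD disks)
devices:
  - /dev/sdb
  - /dev/sdc
```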

Once the ansible-playbook run completes successfully, return to the OpenStack ansible playbooks (cd /deploy_u18/openstack-ansible/lab_staging).
Then continue to run
`lab_staging/012_recover_hostname.yml`, and so on (a hedged command sketch follows).
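
A hedged sketch of that hand-off; the OpenStack playbooks' inventory file name is not given in this README, so it is left as a placeholder:
```
# cd /deploy_u18/openstack-ansible/lab_staging
# ansible-playbook -i <openstack-inventory> 012_recover_hostname.yml   # inventory name is an assumption
```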


## Add OSDs

1. Modify the `inventory-hosts` file:
   add the new node's connection info at the top of the file (the implicit `[all]` section), and add its host name and devices under `[osds]`.

2. Also in `inventory-hosts`:
   under `[keyring_copy]`, set an OSD node that is not in `[mons]` (a sketch of both edits follows step 3).

3. Execute ansible-playbook:
```
# ansible-playbook -i inventory-hosts --limit comp3-localdisk infrastructure-playbooks/add-osd.yml
```
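
For reference, a minimal sketch of the `inventory-hosts` edits from steps 1 and 2, using the `comp3-localdisk` node added by this commit (host address, device, and `[keyring_copy]` entry are taken from the inventory-hosts diff below):
```
comp3-localdisk ansible_ssh_host=10.0.0.123 ansible_ssh_user='root' ansible_ssh_private_key_file='./id_rsa' ansible_ssh_port=22

[osds]
comp3-localdisk devices="['/dev/sdb']"

[keyring_copy]
ceph-comp1
```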

## License

MIT / BSD

## Author Information
104 Job Bank Corp.
[tony.cheng@104.com.tw](mailto:tony.cheng@104.com.tw)
7 changes: 7 additions & 0 deletions a.yml
@@ -0,0 +1,7 @@
- hosts: comp3-localdisk
gather_facts: yes
become: True
tasks:
# Edit by Tony
- import_role:
name: ceph-fetch-keys_104
39 changes: 22 additions & 17 deletions group_vars/all.yml
@@ -295,7 +295,9 @@ rbd_cache: "true"
rbd_cache_writethrough_until_flush: "true"
rbd_concurrent_management_ops: 20

rbd_client_directories: true # this will create rbd_client_log_path and rbd_client_admin_socket_path directories with proper permissions
#rbd_client_directories: true

# this will create rbd_client_log_path and rbd_client_admin_socket_path directories with proper permissions

# Permissions for the rbd_client_log_path and
# rbd_client_admin_socket_path. Depending on your use case for Ceph
@@ -319,6 +321,7 @@ rbd_client_directories: true # this will create rbd_client_log_path and rbd_clie
# rbd_client_directory_group: "kvm"
# rbd_client_directory_mode: "0755"
#

# If you set rbd_client_directory_mode, you must use a string (e.g.,
# 'rbd_client_directory_mode: "0755"', *not*
# 'rbd_client_directory_mode: 0755', or Ansible will complain: mode
@@ -595,17 +598,17 @@ openstack_cinder_pool:
application: "rbd"
size: "{{ osd_pool_default_size }}"
min_size: "{{ osd_pool_default_min_size }}"
openstack_nova_pool:
name: "vms"
pg_num: "{{ osd_pool_default_pg_num }}"
pgp_num: "{{ osd_pool_default_pg_num }}"
rule_name: "replicated_rule"
type: 1
erasure_profile: ""
expected_num_objects: ""
application: "rbd"
size: "{{ osd_pool_default_size }}"
min_size: "{{ osd_pool_default_min_size }}"
#openstack_nova_pool:
# name: "vms"
# pg_num: "{{ osd_pool_default_pg_num }}"
# pgp_num: "{{ osd_pool_default_pg_num }}"
# rule_name: "replicated_rule"
# type: 1
# erasure_profile: ""
# expected_num_objects: ""
# application: "rbd"
# size: "{{ osd_pool_default_size }}"
# min_size: "{{ osd_pool_default_min_size }}"
openstack_emphemeral_pool:
name: "emphemeral"
pg_num: "{{ osd_pool_default_pg_num }}"
@@ -654,7 +657,7 @@ openstack_emphemeral_pool:
openstack_pools:
- "{{ openstack_glance_pool }}"
- "{{ openstack_cinder_pool }}"
- "{{ openstack_nova_pool }}"
# - "{{ openstack_nova_pool }}"
- "{{ openstack_emphemeral_pool }}"
# - "{{ openstack_gnocchi_pool }}"
# - "{{ openstack_cephfs_data_pool }}"
@@ -666,11 +669,13 @@ openstack_pools:
# By default, keys will be auto-generated.
#
openstack_keys:
- { name: client.glance, caps: { mon: "profile rbd", osd: "profile rbd pool=volumes, profile rbd pool={{ openstack_glance_pool.name }}"}, mode: "0600" }
- { name: client.cinder, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ openstack_nova_pool.name }}, profile rbd pool={{ openstack_glance_pool.name }}"}, mode: "0600" }
- { name: client.emphemeral, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_emphemeral_pool.name }}"}, mode: "0600" }
# - { name: client.glance, caps: { mon: "profile rbd", osd: "profile rbd pool=volumes, profile rbd pool={{ openstack_glance_pool.name }}"}, mode: "0600" }
- { name: client.glance, caps: { mon: "allow r", osd: "allow class-read object_prefix rbd_children, allow rwx pool={{ openstack_glance_pool.name }}"}, mode: "0600" }
# - { name: client.cinder, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ openstack_nova_pool.name }}, profile rbd pool={{ openstack_glance_pool.name }}"}, mode: "0600" }
- { name: client.cinder, caps: { mon: "allow r", osd: "allow class-read object_prefix rbd_children, allow rwx pool={{ openstack_cinder_pool.name }}, allow rwx pool={{ openstack_emphemeral_pool.name }}, allow rwx pool={{ openstack_glance_pool.name }}"}, mode: "0600" }
# - { name: client.emphemeral, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_emphemeral_pool.name }}"}, mode: "0600" }
# - { name: client.gnocchi, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_gnocchi_pool.name }}"}, mode: "0600", }
- { name: client.openstack, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_glance_pool.name }}, profile rbd pool={{ openstack_nova_pool.name }}, profile rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ openstack_emphemeral_pool.name }}"}, mode: "0600" }
# - { name: client.openstack, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_glance_pool.name }}, profile rbd pool={{ openstack_nova_pool.name }}, profile rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ openstack_emphemeral_pool.name }}"}, mode: "0600" }


###############
4 changes: 4 additions & 0 deletions infrastructure-playbooks/add-osd.yml
@@ -105,6 +105,10 @@
name: ceph-container-common
when: containerized_deployment | bool

# Edit by Tony
- import_role:
name: ceph-fetch-keys_104

- import_role:
name: ceph-common
when: not containerized_deployment | bool
8 changes: 8 additions & 0 deletions inventory-hosts
@@ -1,6 +1,7 @@
ceph-ctrl1 ansible_ssh_host=10.0.0.111 ansible_ssh_user='root' ansible_ssh_private_key_file='./id_rsa' ansible_ssh_port=22
ceph-comp1 ansible_ssh_host=10.0.0.121 ansible_ssh_user='root' ansible_ssh_private_key_file='./id_rsa' ansible_ssh_port=22
ceph-comp2 ansible_ssh_host=10.0.0.122 ansible_ssh_user='root' ansible_ssh_private_key_file='./id_rsa' ansible_ssh_port=22
comp3-localdisk ansible_ssh_host=10.0.0.123 ansible_ssh_user='root' ansible_ssh_private_key_file='./id_rsa' ansible_ssh_port=22

[mons]
ceph-ctrl1
@@ -11,6 +12,7 @@ ceph-comp2
ceph-ctrl1 devices="['/dev/sdb']"
ceph-comp1 devices="['/dev/sdb', '/dev/sdc']"
ceph-comp2 devices="['/dev/sdb', '/dev/sdc']"
comp3-localdisk devices="['/dev/sdb']"

[mgrs]
ceph-ctrl1
@@ -21,3 +23,9 @@ ceph-comp2
ceph-ctrl1
ceph-comp1
ceph-comp2
comp3-localdisk

# Edit by Tony: used by the ceph-fetch-keys_104 role
# the copy_osd_node_ceph_key.yml task will look up an OSD node here to copy the keyring from
[keyring_copy]
ceph-comp1
30 changes: 21 additions & 9 deletions roles/ceph-config/templates/ceph.conf.j2
@@ -8,21 +8,25 @@ auth service required = none
auth client required = none
auth supported = none
{% endif %}
{% if cephx %}
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
{% endif %}
#mon_osd_full_ratio = .80
#mon_osd_nearfull_ratio = .70
{% if ip_version == 'ipv6' %}
ms bind ipv6 = true
{% endif %}
{% if common_single_host_mode is defined and common_single_host_mode %}
osd crush chooseleaf type = 0
{% endif %}
{# NOTE (leseb): the blank lines in-between are needed otherwise we won't get any line break #}

{% set nb_mon = groups.get(mon_group_name, []) | length | int %}
{% set nb_client = groups.get(client_group_name, []) | length | int %}
{% set nb_osd = groups.get(osd_group_name, []) | length | int %}
{% if inventory_hostname in groups.get(client_group_name, []) and not inventory_hostname == groups.get(client_group_name, []) | first %}
{% set ceph_release = hostvars[groups[client_group_name][0]]['ceph_release'] %}
{% endif %}

{% if nb_mon > 0 and inventory_hostname in groups.get(mon_group_name, []) %}
mon initial members = {% for host in groups[mon_group_name] %}
{% if hostvars[host]['ansible_fqdn'] is defined and mon_use_fqdn -%}
@@ -41,10 +45,6 @@ fsid = {{ fsid }}
log file = /dev/null
mon cluster log file = /dev/null
{% endif %}
{% if ceph_release in ['jewel', 'kraken', 'luminous', 'mimic'] %}
{% set mon_host_v1_suffix = ":6789" %}
{% set mon_host_v2_suffix = ":3300" %}
{% endif %}
mon host = {% if nb_mon > 0 %}
{% for host in _monitor_addresses -%}
{{ host.addr }}
@@ -75,7 +75,6 @@ log file = {{ rbd_client_log_file }} # must be writable by QEMU and allowed by S

{% if inventory_hostname in groups.get(osd_group_name, []) %}
{% if osd_objectstore == 'filestore' %}

[osd]
osd mkfs type = {{ osd_mkfs_type }}
osd mkfs options xfs = {{ osd_mkfs_options_xfs }}
@@ -121,7 +120,6 @@ rgw frontends = {{ radosgw_frontend_type }} {{ 'port' if radosgw_frontend_type =
{% endif %}
{% endfor %}
{% endif %}

{% if inventory_hostname in groups.get(nfs_group_name, []) and inventory_hostname not in groups.get(rgw_group_name, []) %}
{% for host in groups[nfs_group_name] %}
{% set _rgw_hostname = hostvars[host]['rgw_hostname'] | default(hostvars[host]['ansible_hostname']) %}
@@ -133,5 +131,19 @@ log file = /var/log/ceph/{{ cluster }}-rgw-{{ hostvars[host]['ansible_hostname']
{% endif %}
{% endfor %}
{% endif %}
[client]
rbd_cache = true
rbd_cache_writethrough_until_flush = true
#admin_socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log_file = /var/log/qemu/qemu-guest-$pid.log
rbd_concurrent_management_ops = 20

[client.glance]
keyring = /etc/ceph/ceph.client.glance.keyring

[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring

[mon]
mgr initial modules = dashboard
mon_allow_pool_delete = true