pacific: cephadm: june batch 1 #41684

Merged
merged 76 commits on Jun 7, 2021
Commits
76 commits
d0e22f4
qa/suites/rados: include rook test in rados
liewegas May 20, 2021
9fe4924
qa/tasks/cephadm: Include bootstrap registry options for downstream
sunilkumarn417 May 19, 2021
6334558
doc/cephadm: fix prompts in service-management.rst
May 25, 2021
1400b38
mgr/cephadm: document CephadmService flags
liewegas Apr 22, 2021
c1953d5
mgr/cephadm/schedule: make placement shuffle deterministic
liewegas Apr 22, 2021
7b73ca1
mgr/cephadm: simplify
liewegas Apr 23, 2021
85fec96
mgr/cephadm/inventory: fix deleted check
liewegas Apr 23, 2021
4510f28
mgr/orchestrator: include service_name in DaemonDescription dump
liewegas Apr 23, 2021
466a158
mgr/cephadm: include service_name is generated DaemonDescription
liewegas Apr 23, 2021
eecbfa8
mgr/cephadm/inventory: store optional rank_map along with specs
liewegas Apr 23, 2021
a7ce589
mgr/cephadm: add rank[_generation] properties
liewegas Apr 23, 2021
b21977c
mgr/cephadm/schedule: assign/map ranks
liewegas Apr 23, 2021
27ae022
mgr/cephadm: make _plan show removed daemon names
liewegas Apr 23, 2021
516edc9
mgr/cephadm: support creation of daemons with ranks
liewegas Apr 23, 2021
8f2c20b
mgr/cephadm: enable ranked daemons for nfs
liewegas Apr 23, 2021
4b3ff8b
mgr/cephadm: nfs: bind ganesha to appropriate ip:port
liewegas Apr 26, 2021
9cb23c7
mgr/cephadm: nfs: add rank to grace file from mgr module
liewegas Apr 26, 2021
543def1
mgr/cephadm: nfs: shell out to rados tool for conf creation
liewegas Apr 26, 2021
2a7e1ce
mgr/cephadm: do not reconfigure daemons on deleted services
liewegas Apr 26, 2021
b1aa0d5
mgr/cephadm: ingress: support nfs
liewegas Apr 26, 2021
10a81bb
mgr/cephadm: nfs: add purge
liewegas Apr 29, 2021
131c6f5
mgr/orchestrator: add --port arg to 'orch apply nfs'
liewegas Apr 30, 2021
090c9b8
qa/tasks/vip: add 'vip.exec' task
liewegas Apr 30, 2021
8caab3d
cephadm: add -v arg to shell
liewegas May 2, 2021
49ee9d2
qa/tasks/cephadm: allow mounting volumes in shell
liewegas May 2, 2021
79d9b6b
mgr/cephadm: ingress: remove eth0 default
liewegas May 3, 2021
39e8136
python-common: fix IngressSpec yaml dump
liewegas May 5, 2021
bbc6bb9
mgr/nfs: add some type annotations
liewegas May 5, 2021
1b63f9c
mgr/nfs: delete -> rm for CLI
liewegas May 4, 2021
142e8f3
mgr/nfs: factor out ganesha pool creation
liewegas May 5, 2021
9b39818
mgr/nfs: remove 'nfs cluster update'
liewegas May 5, 2021
d242b0d
mgr/nfs: take optional virtual_ip for deploying ingress
liewegas May 4, 2021
28ca065
mgr/nfs: change 'nfs cluster info'
liewegas May 4, 2021
bc23b58
doc/cephadm/nfs: update
liewegas May 5, 2021
acd0a76
mgr/cephadm: nfs: create pool if it doesn't yet exist
liewegas May 3, 2021
8bb3721
mgr/orchestrator: default nfs pool, namespaces
liewegas May 3, 2021
2a9fe6c
cephadm: --stop-signal=SIGTERM
liewegas May 6, 2021
f218539
common/options: enable nfs module for new clusters
liewegas Jun 3, 2021
bbc0dd9
mgr/cephadm: fix logging of config/placement errors
liewegas May 6, 2021
a9e9997
mgr/cephadm: ingress: fix log msg
liewegas May 6, 2021
83aff9f
mgr/cephadm: adjust debug output for device refresh
liewegas May 6, 2021
9daa98e
mgr/nfs: take --ingress argument to 'nfs cluster create'
liewegas May 6, 2021
f2dc223
qa/suites/rados/cephadm/smoke-roleless: test nfs, nfs + ingress
liewegas Apr 30, 2021
0597fc4
doc/cephadm/nfs: document nfs+ingress
liewegas May 6, 2021
0faaa55
PendingReleaseNotes: note breaking CLI changes
liewegas May 7, 2021
5c1337c
PendingReleaseNotes: clarify deprecated
liewegas May 7, 2021
5562095
mgr/nfs: move ingress vs virtual_ip check to cluster interface
liewegas May 18, 2021
fd4f74d
doc/cephfs/fs-nfs-exports: document --ingress --virtual-ip
liewegas May 24, 2021
e1e4ac8
qa/tasks/cephfs/test_nfs: fix info test
liewegas May 7, 2021
cae5939
common/config: track the path to the conf file we loaded
liewegas May 21, 2021
f9aa8c5
mgr: expose ceph.conf path to modules
liewegas May 21, 2021
dfcb8f8
mgr/cephadm: progress item for service apply
liewegas May 21, 2021
57c4713
cephadm: manage cephadm log with logrotated
May 10, 2021
867ffc6
cephadm: raise an error when `--config` file is not found
mgfritch May 13, 2021
9f310ec
cephadm: clean-up error message
mgfritch May 13, 2021
632abcd
doc/cephadm: recommend redeploying monitoring stack daemon after chan…
adk3798 May 7, 2021
cf0801f
doc/cephadm: enrich "service status"
May 27, 2021
42b63fa
mgr/cephadm: resolve IP at 'orch host add' time
liewegas May 21, 2021
1c2a759
mgr/cephadm: use known host addr
liewegas May 21, 2021
9536839
doc/cephadm: remove any reference to the use of DNS or /etc/hosts
liewegas May 25, 2021
ad964c4
mgr/dashboard,prometheus: new method of getting mgr IP
liewegas May 25, 2021
d8687c9
mgr/cephadm: convert host addr if non-IP to IP
liewegas May 25, 2021
e1e389c
mgr/nfs: use host.addr for backend IP where possible
liewegas May 26, 2021
5cccca4
cephadm: stop passing --no-hosts to podman
liewegas May 25, 2021
7bc9fe3
mgr/cephadm: Don't call _check_host without hosts
sebastian-philipp May 11, 2021
19d0b5c
doc/cephadm: enriching "Service Specification"
May 31, 2021
0ac94d2
doc/cephadm: enriching "daemon status"
May 31, 2021
e21b3fa
doc/cephadm: s/the the/the
Jun 2, 2021
df03e65
mgr/restful: use get_mgr_ip() instead of hostname
liewegas Jun 2, 2021
9fe6382
pybind/mgr/mgr_module: make get_mgr_ip() return mgr's IP from mgrmap
liewegas Jun 2, 2021
4330cad
mgr/cephadm/inventory: do not try to resolve current mgr host
liewegas Jun 3, 2021
4910bda
mgr: Fix orch osd rm stop help message
VasishtaShastry May 10, 2021
466415d
doc: add ceph-nfs link
liewegas Jun 4, 2021
8f8174e
mgr/cephadm: Warn about OSDs to be deleted manually when deleting an …
jmolmo Mar 25, 2021
baceda2
mgr/cephadm:fix alerts sent to wrong URL
pcuzner Jun 2, 2021
bfc2cf0
cephadm: improve is_container_running()
liewegas Jun 5, 2021
14 changes: 14 additions & 0 deletions PendingReleaseNotes
@@ -19,6 +19,20 @@
support an NFS export of both ``rgw`` and ``cephfs`` from a single
NFS cluster instance.

* The ``nfs cluster update`` command has been removed. You can modify
the placement of an existing NFS service (and/or its associated
ingress service) using ``orch ls --export`` and ``orch apply -i
...``.

* The ``orch apply nfs`` command no longer requires a pool or
namespace argument. We strongly encourage users to use the defaults
so that the ``nfs cluster ls`` and related commands will work
properly.

* The ``nfs cluster delete`` and ``nfs export delete`` commands are
deprecated and will be removed in a future release. Please use
``nfs cluster rm`` and ``nfs export rm`` instead.

* mgr-pg_autoscaler: Autoscaler will now start out by scaling each
pool to have a full complement of pgs from the start and will only
decrease it when other pools need more pgs due to increased usage.
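As a rough sketch of the replacement workflow described in the notes above (the exported file names are illustrative):

.. prompt:: bash #

   # export the current NFS (and, if present, ingress) service specs
   ceph orch ls nfs --export > nfs.yaml
   ceph orch ls ingress --export > ingress.yaml
   # edit the placement sections as needed, then re-apply
   ceph orch apply -i nfs.yaml
   ceph orch apply -i ingress.yaml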
4 changes: 2 additions & 2 deletions doc/cephadm/adoption.rst
@@ -126,8 +126,8 @@ Adoption process

This will perform a ``cephadm check-host`` on each host before adding it;
this check ensures that the host is functioning properly. The IP address
argument is required only if DNS does not allow you to connect to each host
by its short name.
argument is recommended; if not provided, then the host name will be resolved
via DNS.

#. Verify that the adopted monitor and manager daemons are visible:

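For instance, a minimal sketch of such a check is to list the daemons the orchestrator now manages and confirm that the ``mon`` and ``mgr`` entries are present:

.. prompt:: bash #

   ceph orch ps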
31 changes: 12 additions & 19 deletions doc/cephadm/host-management.rst
@@ -37,14 +37,18 @@ To add each new host to the cluster, perform two steps:

.. prompt:: bash #

ceph orch host add *newhost* [*<label1> ...*]
ceph orch host add *<newhost>* [*<ip>*] [*<label1> ...*]

For example:

.. prompt:: bash #

ceph orch host add host2
ceph orch host add host3
ceph orch host add host2 10.10.0.102
ceph orch host add host3 10.10.0.103

It is best to explicitly provide the host IP address. If an IP is
not provided, then the host name will be immediately resolved via
DNS and that IP will be used.

One or more labels can also be included to immediately label the
new host. For example, by default the ``_admin`` label will make
@@ -53,7 +57,7 @@ To add each new host to the cluster, perform two steps:

.. prompt:: bash #

ceph orch host add host4 _admin
ceph orch host add host4 10.10.0.104 --labels _admin

.. _cephadm-removing-hosts:

@@ -174,21 +178,21 @@ Many hosts can be added at once using

---
service_type: host
addr: node-00
hostname: node-00
addr: 192.168.0.10
labels:
- example1
- example2
---
service_type: host
addr: node-01
hostname: node-01
addr: 192.168.0.11
labels:
- grafana
---
service_type: host
addr: node-02
hostname: node-02
addr: 192.168.0.12

This can be combined with service specifications (below) to create a cluster spec
file to deploy a whole cluster in one command; see ``cephadm bootstrap --apply-spec``.
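As a sketch of that one-command flow (the monitor IP and spec file name are illustrative):

.. prompt:: bash #

   cephadm bootstrap --mon-ip 192.168.0.10 --apply-spec cluster.yaml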
@@ -286,23 +290,12 @@ There are two ways to customize this configuration for your environment:
Fully qualified domain names vs bare host names
===============================================

cephadm has very minimal requirements when it comes to resolving host
names etc. When cephadm initiates an ssh connection to a remote host,
the host name can be resolved in four different ways:

- a custom ssh config resolving the name to an IP
- via explicitly providing an IP address to cephadm: ``ceph orch host add <hostname> <IP>``
- automatic name resolution via DNS.

Ceph itself uses the command ``hostname`` to determine the name of the
current host.

.. note::

cephadm demands that the name of the host given via ``ceph orch host add``
equals the output of ``hostname`` on remote hosts.

Otherwise cephadm can't be sure, the host names returned by
Otherwise cephadm can't be sure that names returned by
``ceph * metadata`` match the hosts known to cephadm. This might result
in a :ref:`cephadm-stray-host` warning.

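A quick, illustrative way to spot a mismatch:

.. prompt:: bash #

   # on the remote host: the bare host name cephadm expects
   hostname

   # from an admin node: the host names cephadm actually has registered
   ceph orch host ls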
11 changes: 11 additions & 0 deletions doc/cephadm/monitoring.rst
@@ -153,6 +153,17 @@ For example

ceph config set mgr mgr/cephadm/container_image_prometheus prom/prometheus:v1.4.1

If any monitoring stack daemons of the type whose image you've changed are already
running, you must redeploy them in order to have them actually use the new image.

For example, if you had changed the Prometheus image:

.. prompt:: bash #

ceph orch redeploy prometheus


.. note::

By setting a custom image, the default value will be overridden (but not
87 changes: 69 additions & 18 deletions doc/cephadm/nfs.rst
@@ -1,32 +1,35 @@
.. _deploy-cephadm-nfs-ganesha:

===========
NFS Service
===========

.. note:: Only the NFSv4 protocol is supported.

.. _deploy-cephadm-nfs-ganesha:
The simplest way to manage NFS is via the ``ceph nfs cluster ...``
commands; see :ref:`cephfs-nfs`. This document covers how to manage the
cephadm services directly, which should only be necessary for unusual NFS
configurations.

Deploying NFS ganesha
=====================

Cephadm deploys NFS Ganesha using a pre-defined RADOS *pool*
and optional *namespace*.
Cephadm deploys an NFS Ganesha daemon (or a set of daemons). The configuration for
NFS is stored in the ``nfs-ganesha`` pool, and exports are managed via the
``ceph nfs export ...`` commands and via the dashboard.

To deploy an NFS Ganesha gateway, run the following command:

.. prompt:: bash #

ceph orch apply nfs *<svc_id>* *<pool>* *<namespace>* --placement="*<num-daemons>* [*<host1>* ...]"
ceph orch apply nfs *<svc_id>* [--port *<port>*] [--placement ...]

For example, to deploy NFS with a service id of *foo* that will use the RADOS
pool *nfs-ganesha* and the namespace *nfs-ns*, run this command:
For example, to deploy NFS with a service id of *foo* on the default
port 2049 with the default placement of a single daemon:

.. prompt:: bash #

ceph orch apply nfs foo nfs-ganesha nfs-ns

.. note::
If the *nfs-ganesha* pool doesn't exist, create it.
ceph orch apply nfs foo

See :ref:`orchestrator-cli-placement-spec` for the details of the placement
specification.
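For example, a sketch that combines the ``--port`` and ``--placement`` options (the host names are illustrative):

.. prompt:: bash #

   ceph orch apply nfs foo --port 12345 --placement="2 host1 host2"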
@@ -36,9 +39,6 @@ Service Specification

Alternatively, an NFS service can be applied using a YAML specification.

A service of type ``nfs`` requires a pool name and can contain
an optional namespace:

.. code-block:: yaml

service_type: nfs
@@ -48,15 +48,66 @@ an optional namespace:
- host1
- host2
spec:
pool: mypool
namespace: mynamespace
port: 12345

In this example, ``pool`` is a RADOS pool where NFS client recovery data is
stored and ``namespace`` is a RADOS namespace where NFS client recovery data
is stored.
In this example, we run the server on the non-default ``port`` of
12345 (instead of the default 2049) on ``host1`` and ``host2``.

The specification can then be applied by running the following command:

.. prompt:: bash #

ceph orch apply -i nfs.yaml

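For reference, a complete version of the specification fragment above might look like the following sketch (assembled from the pieces shown, with ``foo``, ``host1``, and ``host2`` as in the earlier examples):

.. code-block:: yaml

    service_type: nfs
    service_id: foo
    placement:
      hosts:
        - host1
        - host2
    spec:
      # non-default port; the default is 2049
      port: 12345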

High-availability NFS
=====================

Deploying an *ingress* service for an existing *nfs* service will provide:

* a stable, virtual IP that can be used to access the NFS server
* fail-over between hosts if there is a host failure
* load distribution across multiple NFS gateways (although this is rarely necessary)

Ingress for NFS can be deployed for an existing NFS service
(``nfs.mynfs`` in this example) with the following specification:

.. code-block:: yaml

service_type: ingress
service_id: nfs.mynfs
placement:
count: 2
spec:
backend_service: nfs.mynfs
frontend_port: 2049
monitor_port: 9000
virtual_ip: 10.0.0.123/24

A few notes:

* The *virtual_ip* must include a CIDR prefix length, as in the
example above. The virtual IP will normally be configured on the
first identified network interface that has an existing IP in the
same subnet. You can also specify a *virtual_interface_networks*
property to match against IPs in other networks; see
:ref:`ingress-virtual-ip` for more information.
* The *monitor_port* is used to access the haproxy load status
page. The user is ``admin`` by default, but can be modified via
an *admin* property in the spec. If a password is not
specified via a *password* property in the spec, the auto-generated password
can be found with:

.. prompt:: bash #

ceph config-key get mgr/cephadm/ingress.*{svc_id}*/monitor_password

For example:

.. prompt:: bash #

ceph config-key get mgr/cephadm/ingress.nfs.mynfs/monitor_password

* The backend service (``nfs.mynfs`` in this example) should include
a *port* property that is not 2049 to avoid conflicting with the
ingress service, which could be placed on the same host(s).
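Putting that last note into practice, here is a sketch of a single spec file that pairs the backend NFS service (on a non-default port) with its ingress service; the hosts, backend port, and virtual IP are illustrative:

.. code-block:: yaml

    service_type: nfs
    service_id: mynfs
    placement:
      hosts:
        - host1
        - host2
    spec:
      # leave 2049 free for the ingress frontend
      port: 12049
    ---
    service_type: ingress
    service_id: nfs.mynfs
    placement:
      count: 2
    spec:
      backend_service: nfs.mynfs
      # clients mount via the virtual IP on the standard NFS port
      frontend_port: 2049
      monitor_port: 9000
      virtual_ip: 10.0.0.123/24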
21 changes: 14 additions & 7 deletions doc/cephadm/rgw.rst
@@ -112,18 +112,21 @@ elected as master, and the virtual IP will be moved to that node.
The active haproxy acts like a load balancer, distributing all RGW requests
among all the available RGW daemons.

**Prerequisites:**
Prerequisites
-------------

* An existing RGW service, without SSL. (If you want SSL service, the certificate
should be configured on the ingress service, not the RGW service.)

**Deploy of the high availability service for RGW**
Deploying
---------

Use the command::

ceph orch apply -i <ingress_spec_file>

**Service specification file:**
Service specification
---------------------

It is a YAML-format file with the following properties:

@@ -171,7 +174,10 @@ where the properties of this service specification are:
SSL certificate, if SSL is to be enabled. This must contain both the certificate and
private key blocks in .pem format.

**Selecting ethernet interfaces for the virtual IP:**
.. _ingress-virtual-ip:

Selecting ethernet interfaces for the virtual IP
------------------------------------------------

You cannot simply provide the name of the network interface on which
to configure the virtual IP because interface names tend to vary
@@ -204,7 +210,8 @@ configuring a "dummy" IP address is an unroutable network on the correct interfa
and reference that dummy network in the networks list (see above).

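As a sketch of how these pieces fit together for RGW (the service id, ports, and networks are illustrative; see the property descriptions above):

.. code-block:: yaml

    service_type: ingress
    service_id: rgw.myrgw
    placement:
      count: 2
    spec:
      backend_service: rgw.myrgw
      virtual_ip: 10.10.0.200/24
      frontend_port: 8080
      monitor_port: 1967
      # optionally restrict which interfaces may carry the virtual IP
      virtual_interface_networks:
        - 10.10.0.0/24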

**Useful hints for ingress:**
Useful hints for ingress
------------------------

* Good to have at least 3 RGW daemons
* Use at least 3 hosts for the ingress
* It is good to have at least 3 RGW daemons.
* We recommend at least 3 hosts for the ingress service.