Merge pull request #16224 from liewegas/wip-docs-prune
doc: update intro, quick start docs

Reviewed-by: Alfredo Deza <adeza@redhat.com>
alfredodeza committed Jul 11, 2017
2 parents e531270 + 917a6f9 commit 7b45532
Showing 7 changed files with 244 additions and 303 deletions.
6 changes: 3 additions & 3 deletions doc/index.rst
@@ -19,7 +19,7 @@ system**.
- Striped objects
- Cloud solution integration
- Multi-site deployment
- Disaster recovery
- Multi-site replication

.. raw:: html

@@ -36,7 +36,7 @@ system**.
- KVM/libvirt support
- Back-end for cloud solutions
- Incremental backup
- Disaster recovery
- Disaster recovery (multisite asynchronous replication)

.. raw:: html

@@ -46,7 +46,7 @@ system**.
- Separates metadata from data
- Dynamic rebalancing
- Subdirectory snapshots
- Configurable striping
- Configurable striping
- Kernel driver support
- FUSE support
- NFS/CIFS deployable
6 changes: 3 additions & 3 deletions doc/start/index.rst
@@ -1,6 +1,6 @@
======================
Installation (Quick)
======================
============================
Installation (ceph-deploy)
============================

.. raw:: html

87 changes: 51 additions & 36 deletions doc/start/intro.rst
@@ -2,42 +2,57 @@
Intro to Ceph
===============

Whether you want to provide :term:`Ceph Object Storage` and/or :term:`Ceph Block
Device` services to :term:`Cloud Platforms`, deploy a :term:`Ceph Filesystem` or
use Ceph for another purpose, all :term:`Ceph Storage Cluster` deployments begin
with setting up each :term:`Ceph Node`, your network and the Ceph Storage
Cluster. A Ceph Storage Cluster requires at least one Ceph Monitor and at least
two Ceph OSD Daemons. The Ceph Metadata Server is essential when running Ceph
Filesystem clients.

.. ditaa::  +---------------+ +---------------+ +---------------+
            |      OSDs     | |    Monitor    | |      MDS      |
            +---------------+ +---------------+ +---------------+

- **Ceph OSDs**: A :term:`Ceph OSD Daemon` (Ceph OSD) stores data, handles data
replication, recovery, backfilling, rebalancing, and provides some monitoring
information to Ceph Monitors by checking other Ceph OSD Daemons for a
heartbeat. A Ceph Storage Cluster requires at least two Ceph OSD Daemons to
achieve an ``active + clean`` state when the cluster makes two copies of your
data (Ceph makes 3 copies by default, but you can adjust it).

- **Monitors**: A :term:`Ceph Monitor` maintains maps of the cluster state,
including the monitor map, the OSD map, the Placement Group (PG) map, and the
CRUSH map. Ceph maintains a history (called an "epoch") of each state change
in the Ceph Monitors, Ceph OSD Daemons, and PGs.

- **MDSs**: A :term:`Ceph Metadata Server` (MDS) stores metadata on behalf of
the :term:`Ceph Filesystem` (i.e., Ceph Block Devices and Ceph Object Storage
do not use MDS). Ceph Metadata Servers make it feasible for POSIX file system
users to execute basic commands like ``ls``, ``find``, etc. without placing
an enormous burden on the Ceph Storage Cluster.

Ceph stores a client's data as objects within storage pools. Using the CRUSH
algorithm, Ceph calculates which placement group should contain the object,
and further calculates which Ceph OSD Daemon should store the placement group.
The CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and
recover dynamically.

Whether you want to provide :term:`Ceph Object Storage` and/or
:term:`Ceph Block Device` services to :term:`Cloud Platforms`, deploy
a :term:`Ceph Filesystem` or use Ceph for another purpose, all
:term:`Ceph Storage Cluster` deployments begin with setting up each
:term:`Ceph Node`, your network, and the Ceph Storage Cluster. A Ceph
Storage Cluster requires at least one Ceph Monitor, Ceph Manager, and
Ceph OSD (Object Storage Daemon). The Ceph Metadata Server is also
required when running Ceph Filesystem clients.

.. ditaa::  +---------------+ +------------+ +------------+ +---------------+
            |      OSDs     | |  Monitors  | |  Managers  | |      MDSs     |
            +---------------+ +------------+ +------------+ +---------------+

- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps
of the cluster state, including the monitor map, manager map, the
OSD map, and the CRUSH map. These maps are critical cluster state
required for Ceph daemons to coordinate with each other. Monitors
are also responsible for managing authentication between daemons and
clients. At least three monitors are normally required for
redundancy and high availability.

- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
responsible for keeping track of runtime metrics and the current
state of the Ceph cluster, including storage utilization, current
performance metrics, and system load. The Ceph Manager daemons also
host python-based plugins to manage and expose Ceph cluster
information, including a web-based `dashboard`_ and `REST API`_. At
least two managers are normally required for high availability.

- **Ceph OSDs**: A :term:`Ceph OSD` (object storage daemon,
``ceph-osd``) stores data, handles data replication, recovery,
rebalancing, and provides some monitoring information to Ceph
Monitors and Managers by checking other Ceph OSD Daemons for a
heartbeat. At least 3 Ceph OSDs are normally required for redundancy
and high availability.

- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores
metadata on behalf of the :term:`Ceph Filesystem` (i.e., Ceph Block
Devices and Ceph Object Storage do not use MDS). Ceph Metadata
Servers allow POSIX file system users to execute basic commands (like
``ls``, ``find``, etc.) without placing an enormous burden on the
Ceph Storage Cluster.
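
A quick way to observe these daemons from a client is to query the monitors
over librados. The following is a minimal sketch using the ``python-rados``
bindings; it assumes a readable ``/etc/ceph/ceph.conf`` and client keyring on
the host, and the exact JSON layout of the status output varies by release.

.. code-block:: python

    import json

    import rados  # python-rados bindings shipped with Ceph

    # Connect using the default config and keyring locations (assumed to exist).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Ask the monitors for the cluster status (roughly what `ceph -s` reports);
    # the monitors, managers, OSDs, and MDSs described above all show up here.
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "status", "format": "json-pretty"}), b'')
    if ret == 0:
        print(outbuf.decode())
    else:
        print("status query failed:", errs)

    cluster.shutdown()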

Ceph stores data as objects within logical storage pools. Using the
:term:`CRUSH` algorithm, Ceph calculates which placement group should
contain the object, and further calculates which Ceph OSD Daemon
should store the placement group. The CRUSH algorithm enables the
Ceph Storage Cluster to scale, rebalance, and recover dynamically.

.. _dashboard: ../../mgr/dashboard
.. _REST API: ../../mgr/restful
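
To make the two-step placement concrete, here is a deliberately simplified toy
model of the idea: hash an object name to a placement group, then
deterministically choose OSDs for that group. It is only an illustration of
the shape of the calculation; real Ceph uses the CRUSH map, weights, and
failure domains rather than this naive selection.

.. code-block:: python

    import hashlib

    def toy_placement(object_name, pg_num, osd_ids, replicas=3):
        """Toy stand-in for Ceph placement: object -> PG -> ordered OSD set."""
        # Step 1: hash the object name into one of the pool's placement groups.
        pg_id = int(hashlib.md5(object_name.encode()).hexdigest(), 16) % pg_num

        # Step 2: deterministically pick `replicas` distinct OSDs for that PG.
        # (Real CRUSH consults the cluster map and failure domains instead.)
        candidates = list(osd_ids)
        seed, chosen = pg_id, []
        for _ in range(min(replicas, len(candidates))):
            seed = (seed * 2654435761 + 12345) % 2**32
            chosen.append(candidates.pop(seed % len(candidates)))
        return pg_id, chosen

    # Example: a pool with 128 PGs spread across 8 OSDs, 3x replication.
    print(toy_placement("my-object", 128, range(8)))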

.. raw:: html

32 changes: 13 additions & 19 deletions doc/start/os-recommendations.rst
@@ -13,29 +13,23 @@ Linux Kernel

- **Ceph Kernel Client**

If you are using the kernel client, the general advice is to *track* "stable"
or "longterm maintenance" kernel series provided by either http://kernel.org
or your distribution on the kernel client machines.
If you are using the kernel client to map RBD block devices or mount
CephFS, the general advice is to use a "stable" or "longterm
maintenance" kernel series provided by either http://kernel.org or
your Linux distribution on any client hosts.

For RBD, if you choose to *track* long-term kernels, we currently recommend
4.x-based "longterm maintenance" kernel series:

- 4.9.z
- 4.4.z

These are considered pretty old, but if you must:

- 3.16.z
- 3.10.z

For CephFS, see `CephFS best practices`_ for kernel version guidance.

Older kernel client versions may not support your `CRUSH tunables`_ profile.

- **B-tree File System (Btrfs)**
Older kernel client versions may not support your `CRUSH tunables`_ profile
or other newer features of the Ceph cluster, requiring the storage cluster
to be configured with those features disabled.

We recommend *against* using ``btrfs`` with Ceph. However, if you
insist on using ``btrfs``, we recommend using a recent Linux kernel.
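
As a rough illustration of that advice (a hypothetical check, not an official
Ceph tool), a client host could compare its running kernel against the oldest
longterm series listed above; the 4.4 floor below is an assumption taken from
that list.

.. code-block:: python

    import platform

    # Assumed baseline: the 4.4.z longterm series mentioned above; adjust to taste.
    RECOMMENDED_MINIMUM = (4, 4)

    def running_kernel():
        # platform.release() looks like "4.9.0-3-amd64"; keep the numeric prefix.
        numeric = platform.release().split("-")[0]
        major, minor = (int(p) for p in numeric.split(".")[:2])
        return major, minor

    if running_kernel() < RECOMMENDED_MINIMUM:
        print("Kernel %s predates the %d.%d longterm series; consider upgrading"
              % ((platform.release(),) + RECOMMENDED_MINIMUM))
    else:
        print("Kernel %s meets the assumed baseline" % platform.release())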

Platforms
=========
@@ -67,8 +61,8 @@ Luminous (12.2.z)
+----------+----------+--------------------+--------------+---------+------------+


Infernalis (9.2.z) and Jewel (10.2.z)
-------------------------------------
Jewel (10.2.z)
--------------

+----------+----------+--------------------+--------------+---------+------------+
| Distro   | Release  | Code Name          | Kernel       | Notes   | Testing    |
@@ -84,8 +78,8 @@ Infernalis (9.2.z) and Jewel (10.2.z)
| Ubuntu   | 14.04    | Trusty Tahr        | linux-3.13.0 |         | B, I, C    |
+----------+----------+--------------------+--------------+---------+------------+

Hammer (0.94)
-------------
Hammer (0.94.z)
---------------

+----------+----------+--------------------+--------------+---------+------------+
| Distro   | Release  | Code Name          | Kernel       | Notes   | Testing    |
@@ -101,8 +95,8 @@ Hammer (0.94)
| Ubuntu   | 14.04    | Trusty Tahr        | linux-3.13.0 |         | B, I, C    |
+----------+----------+--------------------+--------------+---------+------------+

Firefly (0.80)
--------------
Firefly (0.80.z)
----------------

+----------+----------+--------------------+--------------+---------+------------+
| Distro   | Release  | Code Name          | Kernel       | Notes   | Testing    |
