
doc: silence sphinx warnings #10621

Merged
merged 3 commits into from Aug 12, 2016
36 changes: 21 additions & 15 deletions doc/cephfs/troubleshooting.rst
@@ -1,22 +1,26 @@
=================
Troubleshooting
=================

Slow/stuck operations
~~~~~~~~~~~~~~~~
=====================
Contributor Author
this addresses SEVERE: Title level inconsistent:


If you are experiencing apparent hung operations, the first task is to identify
where the problem is occurring: in the client, the MDS, or the network connecting
them. Start by looking to see if either side has stuck operations
(:ref:`slow_requests`, below), and narrow it down from there.

RADOS Health
~~~~~~~~~~~~
============

If part of the CephFS metadata or data pools is unavailable and CephFS isn't
responding, it is probably because RADOS itself is unhealthy. Resolve those
problems first (:doc:`/rados/troubleshooting`).
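
A quick way to confirm this is to check overall cluster health before digging
into CephFS itself; a minimal sketch using the standard CLI::

    ceph status
    ceph health detail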

The MDS
~~~~~~~
If an operation is hung inside the MDS, it will eventually show up in "ceph health",
=======

If an operation is hung inside the MDS, it will eventually show up in ``ceph health``,
identifying "slow requests are blocked". It may also identify clients as
"failing to respond" or misbehaving in other ways. If the MDS identifies
specific clients as misbehaving, you should investigate why they are doing so.
@@ -31,11 +35,12 @@ Otherwise, you have probably discovered a new bug and should report it to
the developers!

.. _slow_requests:

Slow requests (MDS)
-------------------
You can list current operations via the admin socket by running
::
ceph daemon mds.<name> dump_ops_in_flight
You can list current operations via the admin socket by running::

ceph daemon mds.<name> dump_ops_in_flight

from the MDS host. Identify the stuck commands and examine why they are stuck.
Usually the last "event" will have been an attempt to gather locks, or sending
Expand All @@ -53,13 +58,13 @@ that clients are misbehaving, either the client has a problem or its
requests aren't reaching the MDS.
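
If ``ceph daemon mds.<name>`` cannot reach the daemon by name, the same command
can be issued against the admin socket path directly; a sketch, assuming the
default socket location (adjust the path for your cluster)::

    ceph --admin-daemon /var/run/ceph/ceph-mds.<name>.asok dump_ops_in_flight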

ceph-fuse debugging
~~~~~~~~~~~~~~~~~~~
===================

ceph-fuse also supports dump_ops_in_flight. See if it has any and where they are
stuck.
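
A sketch of querying it, assuming the ceph-fuse admin socket was created under
``/var/run/ceph`` (the exact file name depends on the client name and may
include a PID)::

    ls /var/run/ceph/                     # locate the client socket
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok dump_ops_in_flight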

Debug output
===================
------------

To get more debugging information from ceph-fuse, try running in the foreground
with logging to the console (``-d``) and enabling client debug
@@ -71,9 +76,10 @@ If you suspect a potential monitor issue, enable monitor debugging as well
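
Putting the above together, a sketch of a foreground debug run (the mount point
and debug levels are placeholders, not recommended values); add
``--debug-monc=20`` as well if you suspect the monitors::

    ceph-fuse -d --debug-client=20 --debug-ms=1 /mnt/cephfs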


Kernel mount debugging
~~~~~~~~~~~~~
======================

Slow requests
==============
-------------

Unfortunately the kernel client does not support the admin socket, but it has
similar (if limited) interfaces if your kernel has debugfs enabled. There
@@ -108,8 +114,8 @@ At the moment, the kernel client will remount the FS, but outstanding filesystem
IO may or may not be satisfied. In these cases, you may need to reboot your
client system.

You can identify you are in this situation if dmesg/kern.log report something like
::
You can identify you are in this situation if dmesg/kern.log report something like::

Jul 20 08:14:38 teuthology kernel: [3677601.123718] ceph: mds0 closed our session
Jul 20 08:14:38 teuthology kernel: [3677601.128019] ceph: mds0 reconnect start
Jul 20 08:14:39 teuthology kernel: [3677602.093378] ceph: mds0 reconnect denied
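
Separately, the debugfs interfaces mentioned under "Slow requests" can be
inspected directly; a sketch, assuming debugfs is mounted at
``/sys/kernel/debug`` (the per-mount directory name varies)::

    ls /sys/kernel/debug/ceph/
    cat /sys/kernel/debug/ceph/*/mdsc    # in-flight MDS requests
    cat /sys/kernel/debug/ceph/*/osdc    # in-flight OSD requests
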
@@ -140,11 +146,11 @@ Mount 12 Error

A mount 12 error with ``cannot allocate memory`` usually occurs if you have a
version mismatch between the :term:`Ceph Client` version and the :term:`Ceph
Storage Cluster` version. Check the versions using::
Storage Cluster` version. Check the versions using::

ceph -v

If the Ceph Client is behind the Ceph cluster, try to upgrade it::
If the Ceph Client is behind the Ceph cluster, try to upgrade it::

sudo apt-get update && sudo apt-get install ceph-common

20 changes: 10 additions & 10 deletions doc/dev/cephfs-snapshots.rst
@@ -1,5 +1,5 @@
CephFS Snapshots
==============
================

CephFS supports snapshots, generally created by invoking mkdir against the
(hidden, special) .snap directory.
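
A minimal sketch of the user-facing operation (the mount point and snapshot
name are placeholders)::

    mkdir /mnt/cephfs/mydir/.snap/my_snapshot    # snapshot the contents of mydir
    ls /mnt/cephfs/mydir/.snap                   # list existing snapshots
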
@@ -18,7 +18,7 @@ features that make CephFS snapshots different from what you might expect:
very fast.

Important Data Structures
-----------
-------------------------
* SnapRealm: A `SnapRealm` is created whenever you create a snapshot at a new
point in the hierarchy (or, when a snapshotted inode is moved outside of its
parent snapshot). SnapRealms contain an `sr_t srnode`, links to `past_parents`
@@ -32,7 +32,7 @@ Important Data Structures
the inode number and first `snapid` of the inode/snapshot referenced.

Creating a snapshot
----------
-------------------
To make a snapshot on directory "/1/2/3/foo", the client invokes "mkdir" on
"/1/2/3/foo/.snaps" directory. This is transmitted to the MDS Server as a
CEPH_MDS_OP_MKSNAP-tagged `MClientRequest`, and initially handled in
@@ -50,32 +50,32 @@ update the `SnapContext` they are using with that data. Note that this
*is not* a synchronous part of the snapshot creation!

Updating a snapshot
----------
-------------------
If you delete a snapshot, or move data out of the parent snapshot's hierarchy,
a similar process is followed. Extra code paths check to see if we can break
the `past_parent` links between SnapRealms, or eliminate them entirely.

Generating a SnapContext
---------
------------------------
A RADOS `SnapContext` consists of a snapshot sequence ID (`snapid`) and all
the snapshot IDs that an object is already part of. To generate that list, we
generate a list of all `snapids` associated with the SnapRealm and all its
`past_parents`.

Storing snapshot data
----------
---------------------
File data is stored in RADOS "self-managed" snapshots. Clients are careful to
use the correct `SnapContext` when writing file data to the OSDs.
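
To see which self-managed snaps a given data object participates in, the
``rados`` tool can be queried; a sketch, assuming a data pool named
``cephfs_data`` and an object name derived from the file's inode (both are
placeholders)::

    rados -p cephfs_data listsnaps 10000000000.00000000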

Storing snapshot metadata
----------
-------------------------
Snapshotted dentries (and their inodes) are stored in-line as part of the
directory they were in at the time of the snapshot. *All dentries* include a
`first` and `last` snapid for which they are valid. (Non-snapshotted dentries
will have their `last` set to CEPH_NOSNAP).

Snapshot writeback
---------
------------------
There is a great deal of code to handle writeback efficiently. When a Client
receives an `MClientSnap` message, it updates the local `SnapRealm`
representation and its links to specific `Inodes`, and generates a `CapSnap`
@@ -88,7 +88,7 @@ process for flushing them. Dentries with outstanding `CapSnap` data are kept
pinned and in the journal.

Deleting snapshots
--------
------------------
Snapshots are deleted by invoking "rmdir" on the ".snaps" directory they are
rooted in. (Attempts to delete a directory which roots snapshots *will fail*;
you must delete the snapshots first.) Once deleted, they are entered into the
Expand All @@ -97,7 +97,7 @@ Metadata is cleaned up as the directory objects are read in and written back
out again.
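
From the client's point of view this is again just a directory operation; a
sketch mirroring the creation example above (paths are placeholders)::

    rmdir /mnt/cephfs/mydir/.snap/my_snapshot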

Hard links
---------
----------
Hard links do not interact well with snapshots. A file is snapshotted when its
primary link is part of a SnapRealm; other links *will not* preserve data.
Generally the location where a file was first created will be its primary link,
2 changes: 1 addition & 1 deletion doc/install/install-ceph-gateway.rst
@@ -167,7 +167,7 @@ directory, you will want to maintain those paths in your Ceph configuration
file if you used something other than default paths.

A typical Ceph Object Gateway configuration file for an Apache-based deployment
looks something similar to the following::
looks something similar to the following:

On Red Hat Enterprise Linux::

1 change: 0 additions & 1 deletion doc/radosgw/index.rst
@@ -42,7 +42,6 @@ you may write data with one API and retrieve it with the other.
Multisite Configuration <multisite>
Config Reference <config-ref>
Admin Guide <admin>
Purging Temp Data <purge-temp>
S3 API <s3>
Swift API <swift>
Admin Ops API <adminops>
2 changes: 1 addition & 1 deletion doc/radosgw/swift.rst
@@ -67,7 +67,7 @@ The following table describes the support status for current Swift functional features
+---------------------------------+-----------------+----------------------------------------+
| **Expiring Objects** | Supported | |
+---------------------------------+-----------------+----------------------------------------+
| **Object Versioning** | Supported | |
| **Object Versioning** | Supported | |
+---------------------------------+-----------------+----------------------------------------+
| **CORS** | Not Supported | |
+---------------------------------+-----------------+----------------------------------------+