Commit

Merge branch 'master' of https://github.com/ceph/ceph into wip-faster-log-fixed
Adam Kupczyk committed Nov 2, 2015
2 parents d65f3af + 2d56728 commit 55f5a3c
Showing 35 changed files with 1,210 additions and 258 deletions.
3 changes: 2 additions & 1 deletion .mailmap
@@ -264,7 +264,8 @@ Xiaowei Chen <chen.xiaowei@h3c.com> <cxwshawn@gmail.com>
Xiaowei Chen <chen.xiaowei@h3c.com> <chen.xiaowei@h3c.com>
Xie Rui <875016668@qq.com> Jerry7X <875016668@qq.com>
Xie Rui <875016668@qq.com> Xie Rui <jerry.xr86@gmail.com>
Xie Xiexingguo <xie.xiexingguo@zte.com.cn> <258156334@qq.com>
Xie Xingguo <xie.xingguo@zte.com.cn> <258156334@qq.com>
Xie Xingguo <xie.xingguo@zte.com.cn> <xie.xiexingguo@zte.com.cn>
Xingyi Wu <wuxingyi2015@outlook.com>
Xinze Chi <xinze@xsky.com> <xmdxcxz@gmail.com>
Xinze Chi <xinze@xsky.com> <xinze@xksy.com>
2 changes: 1 addition & 1 deletion .organizationmap
@@ -476,7 +476,7 @@ Yahoo! <contact@yahoo-inc.com> Xihui He <xihuihe@gmail.com>
Yahoo! <contact@yahoo-inc.com> Zhi (David) Zhang <zhangz@yahoo-inc.com>
YouScribe <contact@youscribe.fr> Guilhem Lettron <guilhem@lettron.fr>
ZTE <contact@zte.com.cn> Ren Huanwen <ren.huanwen@zte.com.cn>
ZTE <contact@zte.com.cn> <xie.xiexingguo@zte.com.cn>
ZTE <contact@zte.com.cn> <xie.xingguo@zte.com.cn>
#
# Local Variables:
# compile-command: "git log --pretty='%aN <%aE>' | \
2 changes: 1 addition & 1 deletion doc/install/get-packages.rst
@@ -483,5 +483,5 @@ line to get the short codename. ::


.. _Install Ceph Object Storage: ../install-storage-cluster
.. _the testing Debian repository: http://ceph.com/debian-testing/dists
.. _the testing Debian repository: http://download.ceph.com/debian-testing/dists
.. _the gitbuilder page: http://gitbuilder.ceph.com
6 changes: 3 additions & 3 deletions doc/install/install-storage-cluster.rst
@@ -42,7 +42,7 @@ To install Ceph with RPMs, execute the following steps:

[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/$basearch
baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/$basearch
enabled=1
priority=2
gpgcheck=1
@@ -51,7 +51,7 @@ To install Ceph with RPMs, execute the following steps:

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/noarch
baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
priority=2
gpgcheck=1
@@ -60,7 +60,7 @@ To install Ceph with RPMs, execute the following steps:

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/SRPMS
baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/SRPMS
enabled=0
priority=2
gpgcheck=1
30 changes: 15 additions & 15 deletions doc/install/upgrading-ceph.rst
@@ -80,7 +80,7 @@ When upgrading from Argonaut to Bobtail, you need to be aware of several things:
Ensure that you update package repository paths. For example::

sudo rm /etc/apt/sources.list.d/ceph.list
echo deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
echo deb http://download.ceph.com/debian-bobtail/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

See the following sections for additional details.

@@ -162,7 +162,7 @@ Argonaut to Cuttlefish without the intermediate upgrade to Bobtail.
For example::

sudo rm /etc/apt/sources.list.d/ceph.list
echo deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
echo deb http://download.ceph.com/debian-bobtail/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

We recommend upgrading all monitors to Bobtail before proceeding with the
upgrade of the monitors to Cuttlefish. A mixture of Bobtail and Argonaut
@@ -186,7 +186,7 @@ replace the reference to the Bobtail repository with a reference to
the Cuttlefish repository. For example::

sudo rm /etc/apt/sources.list.d/ceph.list
echo deb http://ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
echo deb http://download.ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

See `Upgrading Monitors`_ for details.

@@ -210,7 +210,7 @@ Replace any ``apt`` reference to older repositories with a reference to the
Cuttlefish repository. For example::

sudo rm /etc/apt/sources.list.d/ceph.list
echo deb http://ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
echo deb http://download.ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list


Monitor
@@ -259,7 +259,7 @@ Replace any reference to older repositories with a reference to the
Dumpling repository. For example, with ``apt`` perform the following::

sudo rm /etc/apt/sources.list.d/ceph.list
echo deb http://ceph.com/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
echo deb http://download.ceph.com/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

With CentOS/Red Hat distributions, remove the old repository. ::

@@ -271,15 +271,15 @@ Then add a new ``ceph.repo`` repository entry with the following contents.
[ceph]
name=Ceph Packages and Backports $basearch
baseurl=http://ceph.com/rpm/el6/$basearch
baseurl=http://download.ceph.com/rpm/el6/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
.. note:: Ensure you use the correct URL for your distribution. Check the
http://ceph.com/rpm directory for your distribution.
http://download.ceph.com/rpm directory for your distribution.

.. note:: Since you can upgrade using ``ceph-deploy`` you will only need to add
the repository on Ceph Client nodes where you use the ``ceph`` command line
@@ -296,7 +296,7 @@ Replace any reference to older repositories with a reference to the
Emperor repository. For example, with ``apt`` perform the following::

sudo rm /etc/apt/sources.list.d/ceph.list
echo deb http://ceph.com/debian-emperor/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
echo deb http://download.ceph.com/debian-emperor/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

With CentOS/Red Hat distributions, remove the old repository. ::

@@ -309,15 +309,15 @@ replace ``{distro}`` with your distribution (e.g., ``el6``, ``rhel6``, etc).
[ceph]
name=Ceph Packages and Backports $basearch
baseurl=http://ceph.com/rpm-emperor/{distro}/$basearch
baseurl=http://download.ceph.com/rpm-emperor/{distro}/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
.. note:: Ensure you use the correct URL for your distribution. Check the
http://ceph.com/rpm directory for your distribution.
http://download.ceph.com/rpm directory for your distribution.

.. note:: Since you can upgrade using ``ceph-deploy`` you will only need to add
the repository on Ceph Client nodes where you use the ``ceph`` command line
@@ -421,7 +421,7 @@ Replace any reference to older repositories with a reference to the
Firely repository. For example, with ``apt`` perform the following::

sudo rm /etc/apt/sources.list.d/ceph.list
echo deb http://ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
echo deb http://download.ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

With CentOS/Red Hat distributions, remove the old repository. ::

@@ -435,7 +435,7 @@ replace ``{distro}`` with your distribution (e.g., ``el6``, ``rhel6``,
[ceph]
name=Ceph Packages and Backports $basearch
baseurl=http://ceph.com/rpm-firefly/{distro}/$basearch
baseurl=http://download.ceph.com/rpm-firefly/{distro}/$basearch
enabled=1
gpgcheck=1
type=rpm-md
@@ -494,7 +494,7 @@ Replace any reference to older repositories with a reference to the
Firefly repository. For example, with ``apt`` perform the following::

sudo rm /etc/apt/sources.list.d/ceph.list
echo deb http://ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
echo deb http://download.ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

With CentOS/Red Hat distributions, remove the old repository. ::

@@ -508,15 +508,15 @@ replace ``{distro}`` with your distribution (e.g., ``el6``, ``rhel6``,
[ceph]
name=Ceph Packages and Backports $basearch
baseurl=http://ceph.com/rpm/{distro}/$basearch
baseurl=http://download.ceph.com/rpm/{distro}/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
.. note:: Ensure you use the correct URL for your distribution. Check the
http://ceph.com/rpm directory for your distribution.
http://download.ceph.com/rpm directory for your distribution.

.. note:: Since you can upgrade using ``ceph-deploy`` you will only need to add
the repository on Ceph Client nodes where you use the ``ceph`` command line
40 changes: 20 additions & 20 deletions doc/start/quick-start-preflight.rst
@@ -6,12 +6,12 @@

Thank you for trying Ceph! We recommend setting up a ``ceph-deploy`` admin
:term:`node` and a 3-node :term:`Ceph Storage Cluster` to explore the basics of
Ceph. This **Preflight Checklist** will help you prepare a ``ceph-deploy``
admin node and three Ceph Nodes (or virtual machines) that will host your Ceph
Storage Cluster. Before proceeding any further, see `OS Recommendations`_ to
verify that you have a supported distribution and version of Linux. When
you use a single Linux distribution and version across the cluster, it will
make it easier for you to troubleshoot issues that arise in production.
Ceph. This **Preflight Checklist** will help you prepare a ``ceph-deploy``
admin node and three Ceph Nodes (or virtual machines) that will host your Ceph
Storage Cluster. Before proceeding any further, see `OS Recommendations`_ to
verify that you have a supported distribution and version of Linux. When
you use a single Linux distribution and version across the cluster, it will
make it easier for you to troubleshoot issues that arise in production.

In the descriptions below, :term:`Node` refers to a single machine.

@@ -38,7 +38,7 @@ For Debian and Ubuntu distributions, perform the following steps:
``emperor``, ``firefly``, etc.).
For example::

echo deb http://ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
echo deb http://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

#. Update your repository and install ``ceph-deploy``::

@@ -62,15 +62,15 @@ following steps:

Paste the following example code. Replace ``{ceph-release}`` with
the recent major release of Ceph (e.g., ``firefly``). Replace ``{distro}``
with your Linux distribution (e.g., ``el6`` for CentOS 6,
with your Linux distribution (e.g., ``el6`` for CentOS 6,
``el7`` for CentOS 7, ``rhel6`` for
Red Hat 6.5, ``rhel7`` for Red Hat 7, and ``fc19`` or ``fc20`` for Fedora 19
or Fedora 20. Finally, save the contents to the
or Fedora 20. Finally, save the contents to the
``/etc/yum.repos.d/ceph.repo`` file. ::

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/noarch
baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
@@ -89,7 +89,7 @@ following steps:
Ceph Node Setup
===============

The admin node must be have password-less SSH access to Ceph nodes.
The admin node must be have password-less SSH access to Ceph nodes.
When ceph-deploy logs in to a Ceph node as a user, that particular
user must have passwordless ``sudo`` privileges.

@@ -100,15 +100,15 @@ Install NTP
We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to
prevent issues arising from clock drift. See `Clock`_ for details.

On CentOS / RHEL, execute::
On CentOS / RHEL, execute::

sudo yum install ntp ntpdate ntp-doc

On Debian / Ubuntu, execute::

sudo apt-get install ntp

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the
Ensure that you enable the NTP service. Ensure that each Ceph Node uses the
same NTP time server. See `NTP`_ for details.


@@ -134,7 +134,7 @@ Create a Ceph Deploy User

The ``ceph-deploy`` utility must login to a Ceph node as a user
that has passwordless ``sudo`` privileges, because it needs to install
software and configuration files without prompting for passwords.
software and configuration files without prompting for passwords.

Recent versions of ``ceph-deploy`` support a ``--username`` option so you can
specify any user that has password-less ``sudo`` (including ``root``, although
@@ -179,7 +179,7 @@ monitors.
``root`` user. Leave the passphrase empty::

ssh-keygen

Generating public/private key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
@@ -194,9 +194,9 @@ monitors.
ssh-copy-id {username}@node2
ssh-copy-id {username}@node3

#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user you
created without requiring you to specify ``--username {username}`` each
#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user you
created without requiring you to specify ``--username {username}`` each
time you execute ``ceph-deploy``. This has the added benefit of streamlining
``ssh`` and ``scp`` usage. Replace ``{username}`` with the user name you
created::
@@ -232,10 +232,10 @@ Ensure Connectivity

Ensure connectivity using ``ping`` with short hostnames (``hostname -s``).
Address hostname resolution issues as necessary.

.. note:: Hostnames should resolve to a network IP address, not to the
loopback IP address (e.g., hostnames should resolve to an IP address other
than ``127.0.0.1``). If you use your admin node as a Ceph node, you
than ``127.0.0.1``). If you use your admin node as a Ceph node, you
should also ensure that it resolves to its hostname and IP address
(i.e., not its loopback IP address).

2 changes: 1 addition & 1 deletion src/CMakeLists.txt
@@ -25,7 +25,7 @@ if(no_yasm)
else(no_yasm)
message(STATUS " we have a modern and working yasm")
execute_process(
COMMAND "arch"
COMMAND uname -m
OUTPUT_VARIABLE arch
OUTPUT_STRIP_TRAILING_WHITESPACE)
if(arch STREQUAL "x86_64")
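The only change in this hunk is swapping the non-portable `arch` command for `uname -m` when CMake probes the build machine. For reference, the same machine string can be read programmatically through `uname(2)`; this small C++ sketch only illustrates what the probe returns, it is not part of the build change:

    #include <iostream>
    #include <sys/utsname.h>

    int main() {
        struct utsname u;
        if (uname(&u) != 0)
            return 1;
        // u.machine holds the same string `uname -m` prints (e.g. "x86_64"),
        // which is the value the CMake check compares against to enable yasm.
        std::cout << u.machine << "\n";
        return 0;
    }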
1 change: 0 additions & 1 deletion src/Makefile-client.am
Expand Up @@ -23,7 +23,6 @@ ceph: ceph.in ./ceph_ver.h Makefile
sed -ie "s|@PYTHON_EXECUTABLE@|/usr/bin/env python|" $@.tmp
grep CEPH_GIT_NICE_VER ./ceph_ver.h | cut -f 3 -d " " | sed s/\"//g | xargs -I "{}" sed -ie "s/@CEPH_GIT_NICE_VER@/{}/g" $@.tmp
grep CEPH_GIT_VER ./ceph_ver.h | cut -f 3 -d " " | sed s/\"//g | xargs -I "{}" sed -ie "s/@CEPH_GIT_VER@/{}/g" $@.tmp
cat $(srcdir)/$@.in >>$@.tmp
chmod a+x $@.tmp
chmod a-w $@.tmp
mv $@.tmp $@
11 changes: 6 additions & 5 deletions src/ceph.in
@@ -394,14 +394,11 @@ def new_style_command(parsed_args, cmdargs, target, sigdict, inbuf, verbose):
sig = cmd['sig']
print '{0}: {1}'.format(cmdtag, concise_sig(sig))

got_command = False

if not got_command:
if True:
if cmdargs:
# Validate input args against list of sigs
valid_dict = validate_command(sigdict, cmdargs, verbose)
if valid_dict:
got_command = True
if parsed_args.output_format:
valid_dict['format'] = parsed_args.output_format
else:
@@ -422,7 +419,11 @@ def new_style_command(parsed_args, cmdargs, target, sigdict, inbuf, verbose):
except Exception as e:
print >> sys.stderr, \
'error handling command target: {0}'.format(e)
return 1, '', ''
continue
if len(cmdargs) and cmdargs[0] == 'tell':
print >> sys.stderr, \
'Can not use \'tell\' in interactive mode.'
continue
valid_dict = validate_command(sigdict, cmdargs, verbose)
if valid_dict:
if parsed_args.output_format:
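The net effect of this hunk is a control-flow change in the interactive loop: the redundant `got_command` flag is dropped, a failure to reach the command target no longer aborts the session (`continue` instead of `return 1`), and `tell` is rejected with a message rather than an error exit. A minimal C++ sketch of that loop shape, for illustration only (the real code is the Python in `ceph.in`):

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    int main() {
        std::string line;
        while (std::cout << "ceph> ", std::getline(std::cin, line)) {
            std::istringstream iss(line);
            std::vector<std::string> args;
            for (std::string w; iss >> w; )
                args.push_back(w);
            if (args.empty())
                continue;
            if (args[0] == "tell") {
                // Mirrors the new check: refuse 'tell' but keep the session alive.
                std::cerr << "Can not use 'tell' in interactive mode." << std::endl;
                continue;
            }
            if (args[0] == "quit")
                break;
            // Dispatch would happen here; on a dispatch error the patched code
            // likewise continues to the next prompt instead of returning.
            std::cout << "would dispatch: " << line << "\n";
        }
        return 0;
    }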
5 changes: 5 additions & 0 deletions src/common/config_opts.h
@@ -617,6 +617,11 @@ OPTION(osd_recover_clone_overlap, OPT_BOOL, true) // preserve clone_overlap du
OPTION(osd_op_num_threads_per_shard, OPT_INT, 2)
OPTION(osd_op_num_shards, OPT_INT, 5)

// Set to true for testing. Users should NOT set this.
// If set to true even after reading enough shards to
// decode the object, any error will be reported.
OPTION(osd_read_ec_check_for_errors, OPT_BOOL, false) // return error if any ec shard has an error

// Only use clone_overlap for recovery if there are fewer than
// osd_recover_clone_overlap_limit entries in the overlap set
OPTION(osd_recover_clone_overlap_limit, OPT_INT, 10)
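The new `osd_read_ec_check_for_errors` option is a testing-only switch for erasure-coded reads: normally a read completes as soon as enough shards decode, but with the flag set any shard error is surfaced even if decoding was still possible. A simplified sketch of that decision, using hypothetical names (`ReadResult`, `finish_ec_read`) rather than the actual ECBackend code:

    // Hypothetical, reduced model of completing an erasure-coded read.
    struct ReadResult {
        int shards_decoded = 0;  // shards read back successfully
        int shards_needed  = 0;  // k in a k+m erasure code
        int first_error    = 0;  // 0, or the (negative) error of the first failed shard
    };

    // check_for_errors plays the role of osd_read_ec_check_for_errors:
    // in testing mode a shard error is reported even though the object
    // could still be decoded from the remaining shards.
    int finish_ec_read(const ReadResult& r, bool check_for_errors) {
        if (check_for_errors && r.first_error != 0)
            return r.first_error;            // surface the otherwise-hidden error
        if (r.shards_decoded >= r.shards_needed)
            return 0;                        // enough shards: the read succeeds
        return r.first_error != 0 ? r.first_error : -5; // -EIO: genuinely short of shards
    }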
2 changes: 1 addition & 1 deletion src/messages/MOSDOp.h
@@ -99,7 +99,7 @@ class MOSDOp : public Message {
return reqid;
} else {
if (!final_decode_needed)
assert(reqid.inc == client_inc); // decode() should have done this
assert(reqid.inc == (int32_t)client_inc); // decode() should have done this
return osd_reqid_t(get_orig_source(),
reqid.inc,
header.tid);
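The only functional content of this hunk is the explicit `(int32_t)` cast in the assert. The cast back to `int32_t` suggests that `reqid.inc` is signed while `client_inc` is stored with a different (unsigned) type, and comparing mixed-sign integers compiles but draws a `-Wsign-compare` warning. A stand-alone illustration, with field types chosen for the example rather than copied from the Ceph headers:

    #include <cassert>
    #include <cstdint>
    #include <iostream>

    int main() {
        int32_t  inc        = -1;           // signed sequence field, like osd_reqid_t::inc
        uint32_t client_inc = 0xffffffffu;  // unsigned value taken from the message

        // The usual arithmetic conversions turn the signed -1 into 0xffffffff,
        // so this prints "true" -- correct modulo 2^32, but the compiler warns
        // because the mixed-sign comparison hides that conversion.
        std::cout << std::boolalpha << (inc == client_inc) << "\n";

        // The commit's form: cast the unsigned side so both operands are signed
        // and the intended equality of the incarnation values is stated explicitly.
        assert(inc == (int32_t)client_inc);
        return 0;
    }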
5 changes: 4 additions & 1 deletion src/os/FileStore.cc
@@ -3906,6 +3906,7 @@ int FileStore::_fgetattrs(int fd, map<string,bufferptr>& aset)
dout(10) << " -ERANGE, got " << len << dendl;
if (len < 0) {
assert(!m_filestore_fail_eio || len != -EIO);
delete[] names2;
return len;
}
name = names2;
@@ -3924,8 +3925,10 @@ int FileStore::_fgetattrs(int fd, map<string,bufferptr>& aset)
if (*name) {
dout(20) << "fgetattrs " << fd << " getting '" << name << "'" << dendl;
int r = _fgetattr(fd, attrname, aset[name]);
if (r < 0)
if (r < 0) {
delete[] names2;
return r;
}
}
}
name += strlen(name) + 1;
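Both FileStore.cc hunks fix the same issue: `names2` is a `new char[...]` buffer in `_fgetattrs()`, and the early error returns leaked it. A reduced, self-contained sketch of the pattern and the fix; the listing helpers below are stubs standing in for the real xattr calls:

    #include <cstring>
    #include <map>
    #include <string>

    // Stubs standing in for the xattr listing/reading calls used by FileStore.
    static long list_attr_names(int /*fd*/, char* buf, size_t len) {
        static const char sample[] = "user.a\0user.b\0";  // NUL-separated names
        const size_t n = sizeof(sample) - 1;
        if (n > len)
            return -34;                                    // -ERANGE
        std::memcpy(buf, sample, n);
        return static_cast<long>(n);
    }
    static int get_attr(int /*fd*/, const char* /*name*/, std::string& out) {
        out = "value";
        return 0;
    }

    int fgetattrs_sketch(int fd, std::map<std::string, std::string>& aset) {
        const size_t len = 16384;
        char* names2 = new char[len];   // heap buffer: must be freed on every exit path
        long got = list_attr_names(fd, names2, len);
        if (got < 0) {
            delete[] names2;            // release added by the commit on the error path
            return static_cast<int>(got);
        }
        for (char* name = names2; name < names2 + got; name += std::strlen(name) + 1) {
            if (!*name)
                continue;
            int r = get_attr(fd, name, aset[name]);
            if (r < 0) {
                delete[] names2;        // ...and before this early return as well
                return r;
            }
        }
        delete[] names2;
        return 0;
    }

    int main() {
        std::map<std::string, std::string> aset;
        return fgetattrs_sketch(0, aset) < 0 ? 1 : 0;
    }

Wrapping the buffer in a `std::unique_ptr<char[]>` would make every return path leak-free by construction; the commit keeps the file's existing manual-free style and simply adds the two missing releases.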
