doc: fix typos #30583

Merged (1 commit, Oct 4, 2019)
doc/ceph-volume/systemd.rst (1 addition, 1 deletion)
@@ -42,7 +42,7 @@ behavior:
* ``CEPH_VOLUME_SYSTEMD_TRIES``: Defaults to 30
* ``CEPH_VOLUME_SYSTEMD_INTERVAL``: Defaults to 5

The *"tries"* is a number that sets the maximum amount of times the unit will
The *"tries"* is a number that sets the maximum number of times the unit will
attempt to activate an OSD before giving up.

The *"interval"* is a value in seconds that determines the waiting time before
doc/cephfs/fs-volumes.rst (3 additions, 3 deletions)
@@ -5,7 +5,7 @@ FS volumes and subvolumes

A single source of truth for CephFS exports is implemented in the volumes
module of the :term:`Ceph Manager` daemon (ceph-mgr). The OpenStack shared
-file system service (manila_), Ceph Containter Storage Interface (CSI_),
+file system service (manila_), Ceph Container Storage Interface (CSI_),
storage administrators among others can use the common CLI provided by the
ceph-mgr volumes module to manage the CephFS exports.

@@ -45,8 +45,8 @@ Create a volume using::

$ ceph fs volume create <vol_name>

-This creates a CephFS file sytem and its data and metadata pools. It also tries
-to create MDSes for the filesytem using the enabled ceph-mgr orchestrator
+This creates a CephFS file system and its data and metadata pools. It also tries
+to create MDSes for the filesystem using the enabled ceph-mgr orchestrator
module (see :doc:`/mgr/orchestrator_cli`) , e.g., rook.
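
For example, a hypothetical session (the volume name is illustrative, and the reported pool names vary by release):

    $ ceph fs volume create myvol
    $ ceph fs ls
    name: myvol, metadata pool: cephfs.myvol.meta, data pools: [cephfs.myvol.data]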

Remove a volume using::
doc/cephfs/mdcache.rst (1 addition, 1 deletion)
@@ -73,5 +73,5 @@ inode info to the clients.

The auth MDS for an inode can change over time as well. The MDS' will
actively balance responsibility for the inode cache amongst
-themselves, but this can be overriden by **pinning** certain subtrees
+themselves, but this can be overridden by **pinning** certain subtrees
to a single MDS.
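
Pinning is driven by the ``ceph.dir.pin`` extended attribute; a sketch, with an illustrative mount path:

    $ setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects   # pin this subtree to MDS rank 1
    $ setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/projects  # -1 returns the subtree to normal balancing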
doc/cephfs/posix.rst (1 addition, 1 deletion)
@@ -92,7 +92,7 @@ conventions as other file systems.

In modern Linux kernels (v4.17 or later), writeback errors are reported
once to every file description that is open at the time of the error. In
-addition, unreported errors that occured before the file description was
+addition, unreported errors that occurred before the file description was
opened will also be returned on fsync.

See `PostgreSQL's summary of fsync() error reporting across operating systems
doc/dev/msgr2.rst (2 additions, 2 deletions)
@@ -411,7 +411,7 @@ Example of failure scenarios:
| |


-* Connection failure after session is established because server reseted,
+* Connection failure after session is established because server reset,
and then client reconnects.

.. ditaa:: +---------+ +--------+
@@ -437,7 +437,7 @@ RC* means that the reset session full flag depends on the policy.resetcheck
of the connection.


-* Connection failure after session is established because client reseted,
+* Connection failure after session is established because client reset,
and then client reconnects.

.. ditaa:: +---------+ +--------+
doc/dev/osd_internals/erasure_coding/developer_notes.rst (3 additions, 3 deletions)
@@ -207,9 +207,9 @@ set in the erasure code profile, before the pool was created.
ceph osd erasure-code-profile set myprofile \
directory=<dir> \ # mandatory
plugin=jerasure \ # mandatory
-m=10 \ # optional and plugin dependant
-k=3 \ # optional and plugin dependant
-technique=reed_sol_van \ # optional and plugin dependant
+m=10 \ # optional and plugin dependent
+k=3 \ # optional and plugin dependent
+technique=reed_sol_van \ # optional and plugin dependent
Contributor commented:

I think "dependant" is not a typo; it's just less commonly used than "dependent". See https://www.merriam-webster.com/dictionary/dependant and https://www.lexico.com/en/definition/dependant

@johnwilkins what do you think?

Contributor Author replied:

@tchaikov fair enough. I'm not a native speaker of English; just let me know which version you'd prefer.

Contributor replied:

Neither am I =( . Let's wait for more inputs before moving forward.

Member commented:

I'd not seen "dependant" before; https://www.merriam-webster.com/dictionary/dependant#note-1 suggests it used to be used in particular cases of British English.


Notes
-----
doc/dev/osd_internals/erasure_coding/proposals.rst (1 addition, 1 deletion)
@@ -19,7 +19,7 @@ to their respective shards.

The choice of whether to use a read-modify-write or a
parity-delta-write is complex policy issue that is TBD in the details
-and is likely to be heavily dependant on the computational costs
+and is likely to be heavily dependent on the computational costs
associated with a parity-delta vs. a regular parity-generation
operation. However, it is believed that the parity-delta scheme is
likely to be the preferred choice, when available.
doc/install/install-ceph-gateway.rst (1 addition, 1 deletion)
@@ -194,7 +194,7 @@ Referring back to the description for installing a Ceph Object Gateway with
``ceph-deploy``, notice that the configuration file only has one setting
``rgw_frontends`` (and that's assuming you elected to change the default port).
The ``ceph-deploy`` utility generates the data directory and the keyring for
-you--placing the keyring in ``/var/lib/ceph/radosgw/{rgw-intance}``. The daemon
+you--placing the keyring in ``/var/lib/ceph/radosgw/{rgw-instance}``. The daemon
looks in default locations, whereas you may have specified different settings
in your Ceph configuration file. Since you already have keys and a data
directory, you will want to maintain those paths in your Ceph configuration
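
A minimal sketch of the resulting configuration section (instance name, port, and frontend choice are illustrative assumptions, not prescribed by this page):

    [client.rgw.gateway-node1]
    rgw_frontends = "civetweb port=8080"
    keyring = /var/lib/ceph/radosgw/ceph-rgw.gateway-node1/keyring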
doc/man/8/ceph.rst (1 addition, 1 deletion)
@@ -1048,7 +1048,7 @@ Usage::

ceph osd pool application enable <pool-name> <app> {--yes-i-really-mean-it}

-Subcommand ``get`` displays the value for the given key that is assosciated
+Subcommand ``get`` displays the value for the given key that is associated
with the given application of the given pool. Not passing the optional
arguments would display all key-value pairs for all applications for all
pools.
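
For instance (pool, application, and key names are hypothetical):

    ceph osd pool application get mypool rgw          # all key-value pairs for this app
    ceph osd pool application get mypool rgw somekey  # value for one key
    ceph osd pool application get                     # everything, for all pools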
doc/rados/api/libradospp.rst (1 addition, 1 deletion)
@@ -2,7 +2,7 @@
LibradosPP (C++)
==================

-.. note:: The librados C++ API is not guarenteed to be API+ABI stable
+.. note:: The librados C++ API is not guaranteed to be API+ABI stable
between major releases. All applications using the librados C++ API must
be recompiled and relinked against a specific Ceph release.

doc/rados/configuration/bluestore-config-ref.rst (3 additions, 3 deletions)
@@ -146,7 +146,7 @@ will attempt to keep OSD heap memory usage under a designated target size via
the ``osd_memory_target`` configuration option. This is a best effort
algorithm and caches will not shrink smaller than the amount specified by
``osd_memory_cache_min``. Cache ratios will be chosen based on a hierarchy
-of priorities. If priority information is not availabe, the
+of priorities. If priority information is not available, the
``bluestore_cache_meta_ratio`` and ``bluestore_cache_kv_ratio`` options are
used as fallbacks.

@@ -159,7 +159,7 @@ used as fallbacks.

``osd_memory_target``

-:Description: When tcmalloc is available and cache autotuning is enabled, try to keep this many bytes mapped in memory. Note: This may not exactly match the RSS memory usage of the process. While the total amount of heap memory mapped by the process should generally stay close to this target, there is no guarantee that the kernel will actually reclaim memory that has been unmapped. During initial developement, it was found that some kernels result in the OSD's RSS Memory exceeding the mapped memory by up to 20%. It is hypothesised however, that the kernel generally may be more aggressive about reclaiming unmapped memory when there is a high amount of memory pressure. Your mileage may vary.
+:Description: When tcmalloc is available and cache autotuning is enabled, try to keep this many bytes mapped in memory. Note: This may not exactly match the RSS memory usage of the process. While the total amount of heap memory mapped by the process should generally stay close to this target, there is no guarantee that the kernel will actually reclaim memory that has been unmapped. During initial development, it was found that some kernels result in the OSD's RSS Memory exceeding the mapped memory by up to 20%. It is hypothesised however, that the kernel generally may be more aggressive about reclaiming unmapped memory when there is a high amount of memory pressure. Your mileage may vary.
:Type: Unsigned Integer
:Required: Yes
:Default: ``4294967296``
@@ -181,7 +181,7 @@ used as fallbacks.
``osd_memory_base``

:Description: When tcmalloc and cache autotuning is enabled, estimate the minimum amount of memory in bytes the OSD will need. This is used to help the autotuner estimate the expected aggregate memory consumption of the caches.
-:Type: Unsigned Interger
+:Type: Unsigned Integer
:Required: No
:Default: ``805306368``
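
As a sketch, the memory target could be raised to 8 GiB at runtime (assuming the ``ceph config set`` mechanism is available in your release):

    $ ceph config set osd osd_memory_target 8589934592  # 8 GiB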

doc/rados/operations/crush-map-edits.rst (2 additions, 2 deletions)
@@ -568,11 +568,11 @@ There are three types of transformations possible:
since the previous rule distributed across devices of multiple
classes but the adjusted rules will only map to devices of the
specified *device-class*, but that often is an accepted level of
-data movement when the nubmer of outlier devices is small.
+data movement when the number of outlier devices is small.

#. ``--reclassify-bucket <match-pattern> <device-class> <default-parent>``

-This will allow you to merge a parallel type-specific hiearchy with the normal hierarchy. For example, many users have maps like::
+This will allow you to merge a parallel type-specific hierarchy with the normal hierarchy. For example, many users have maps like::

host node1 {
id -2 # do not change unnecessarily
doc/rados/operations/erasure-code-clay.rst (1 addition, 1 deletion)
@@ -120,7 +120,7 @@ Where:
``crush-root={root}``

:Description: The name of the crush bucket used for the first step of
-the CRUSH rule. For intance **step take default**.
+the CRUSH rule. For instance **step take default**.

:Type: String
:Required: No.
doc/rados/operations/erasure-code-isa.rst (1 addition, 1 deletion)
@@ -59,7 +59,7 @@ Where:
``crush-root={root}``

:Description: The name of the crush bucket used for the first step of
-the CRUSH rule. For intance **step take default**.
+the CRUSH rule. For instance **step take default**.

:Type: String
:Required: No.
doc/rados/operations/monitoring.rst (1 addition, 1 deletion)
@@ -165,7 +165,7 @@ Network Performance Checks
Ceph OSDs send heartbeat ping messages amongst themselves to monitor daemon availability. We
also use the response times to monitor network performance.
While it is possible that a busy OSD could delay a ping response, we can assume
-that if a network switch fails mutiple delays will be detected between distinct pairs of OSDs.
+that if a network switch fails multiple delays will be detected between distinct pairs of OSDs.

By default we will warn about ping times which exceed 1 second (1000 milliseconds).

doc/radosgw/STSLite.rst (1 addition, 1 deletion)
@@ -212,7 +212,7 @@ Lines 13-16 have been added as a workaround in the code block below:
def __init__(self, credentials, service_name, region_name):
self.credentials = credentials
# We initialize these value here so the unit tests can have
-# valid values. But these will get overriden in ``add_auth``
+# valid values. But these will get overridden in ``add_auth``
# later for real requests.
self._region_name = region_name
if service_name == 'sts':
doc/radosgw/s3/bucketops.rst (1 addition, 1 deletion)
@@ -578,7 +578,7 @@ Delete a specific, or all, notifications from a bucket.

- Notification deletion is an extension to the S3 notification API
- When the bucket is deleted, any notification defined on it is also deleted
-- Deleting an unkown notification (e.g. double delete) is not considered an error
+- Deleting an unknown notification (e.g. double delete) is not considered an error
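
As an illustration (bucket and notification names are hypothetical; the authoritative request format is in the Syntax section below):

    DELETE /mybucket?notification=myNotification HTTP/1.1   # delete one notification
    DELETE /mybucket?notification HTTP/1.1                  # delete all notifications on the bucket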

Syntax
~~~~~~
doc/rbd/rbd-live-migration.rst (1 addition, 1 deletion)
@@ -130,7 +130,7 @@ If the `migration_source` image is a parent of one or more clones, the `--force`
option will need to be specified after ensuring all descendent clone images are
not in use.

-Commiting the live-migration will remove the cross-links between the source
+Committing the live-migration will remove the cross-links between the source
and target images, and will remove the source image::

$ rbd trash list --all
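
The commit step that precedes this check is a single command; a sketch, assuming a target image named `migration_target` (the name is illustrative):

    $ rbd migration commit migration_target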
doc/releases/releases.yml (1 addition, 1 deletion)
@@ -2,7 +2,7 @@
# there are two sections
#
# releases: ... for named releases
-# developement: ... for dev releases
+# development: ... for dev releases
#
# by default a `version` is interpreted as a sphinx reference when rendered (see
# schedule.rst for the existing tags such as `_13.2.2`). If a version should not
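
A hypothetical illustration of the two sections (keys and values are assumptions for illustration, not taken from the actual file):

    releases:
      nautilus:
        releases:
          - version: 14.2.4
    development:
      releases:
        - version: 15.0.0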