doc: kill some broken links #15203

Merged
merged 1 commit on Jun 8, 2017
2 changes: 1 addition & 1 deletion doc/architecture.rst
@@ -1581,7 +1581,7 @@ instance for high availability.
.. _Monitoring OSDs and PGs: ../rados/operations/monitoring-osd-pg
.. _Heartbeats: ../rados/configuration/mon-osd-interaction
.. _Monitoring OSDs: ../rados/operations/monitoring-osd-pg/#monitoring-osds
-.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: http://ceph.com/papers/weil-crush-sc06.pdf
+.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf
.. _Data Scrubbing: ../rados/configuration/osd-config-ref#scrubbing
.. _Report Peering Failure: ../rados/configuration/mon-osd-interaction#osds-report-peering-failure
.. _Troubleshooting Peering Failure: ../rados/troubleshooting/troubleshooting-pg#placement-group-down-peering-failure
2 changes: 1 addition & 1 deletion doc/dev/blkin.rst
@@ -16,7 +16,7 @@ in realtime. The LTTng traces can then be visualized with Twitter's
Zipkin_.

.. _Dapper: http://static.googleusercontent.com/media/research.google.com/el//pubs/archive/36356.pdf
-.. _Zipkin: http://twitter.github.io/zipkin/
+.. _Zipkin: http://zipkin.io/


Installing Blkin
4 changes: 2 additions & 2 deletions doc/dev/development-workflow.rst
@@ -55,7 +55,7 @@ Release Cycle


Four times a year, the development roadmap is discussed online during
-the `Ceph Developer Summit <http://wiki.ceph.com/Planning/CDS/>`_. A
+the `Ceph Developer Summit <http://tracker.ceph.com/projects/ceph/wiki/Planning#Ceph-Developer-Summit>`_. A
new stable release (hammer, infernalis, jewel ...) is published at the same
frequency. Every other release (firefly, hammer, jewel...) is a `Long Term
Stable (LTS) <../../releases>`_. See `Understanding the release cycle
@@ -126,7 +126,7 @@ Running and interpreting teuthology integration tests
The :doc:`/dev/sepia` runs `teuthology
<https://github.com/ceph/teuthology/>`_ integration tests `on a regular basis <http://tracker.ceph.com/projects/ceph-releases/wiki/HOWTO_monitor_the_automated_tests_AKA_nightlies#Automated-tests-AKA-nightlies>`_ and the
results are posted on `pulpito <http://pulpito.ceph.com/>`_ and the
-`ceph-qa mailing list <http://ceph.com/resources/mailing-list-irc/>`_.
+`ceph-qa mailing list <https://ceph.com/irc/>`_.

* The job failures are `analyzed by quality engineers and developers
<http://tracker.ceph.com/projects/ceph-releases/wiki/HOWTO_monitor_the_automated_tests_AKA_nightlies#List-of-suites-and-watchers>`_
2 changes: 1 addition & 1 deletion doc/dev/documenting.rst
@@ -30,7 +30,7 @@ API`_ provides a complete example. It is pulled into Sphinx by
`librados.rst`_, which is rendered at :doc:`/rados/api/librados`.

.. _`librados C API`: https://github.com/ceph/ceph/blob/master/src/include/rados/librados.h
-.. _`librados.rst`: https://raw.github.com/ceph/ceph/master/doc/api/librados.rst
+.. _`librados.rst`: https://github.com/ceph/ceph/raw/master/doc/rados/api/librados.rst

Contributor:

Can we reference the local copy of librados.rst using a relative path?

Contributor Author:

I will give it a try.

Contributor Author:

We have to keep this URI; the raw librados.rst file is not served on the rendered site:

➜  WorkSpace curl -q -I  http://localhost:8080/rados/api/librados/
HTTP/1.0 200 OK
Server: SimpleHTTP/0.6 Python/2.7.12
Date: Mon, 05 Jun 2017 08:34:54 GMT
Content-type: text/html
Content-Length: 377413
Last-Modified: Mon, 05 Jun 2017 08:30:42 GMT

➜  WorkSpace curl -q -I  http://localhost:8080/rados/api/librados.rst
HTTP/1.0 404 File not found
Server: SimpleHTTP/0.6 Python/2.7.12
Date: Mon, 05 Jun 2017 08:34:59 GMT
Connection: close
Content-Type: text/html
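
As an aside (not part of this PR), the same probing can be batch-scripted. A
minimal sketch in Python 3, standard library only; the URL list below is just
a hypothetical sample of links touched here and can be swapped for whatever
needs checking::

    #!/usr/bin/env python3
    """Sketch: probe doc links the way the curl commands above do."""
    import urllib.request
    import urllib.error

    # Hypothetical sample of URLs; extend or replace as needed.
    URLS = [
        "https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf",
        "http://zipkin.io/",
        "https://ceph.com/irc/",
    ]

    def status(url):
        """Return the HTTP status code, or the error reason on failure."""
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.status
        except urllib.error.HTTPError as err:   # server answered with 4xx/5xx
            return err.code
        except urllib.error.URLError as err:    # DNS or connection failure
            return err.reason

    if __name__ == "__main__":
        for url in URLS:
            print(status(url), url)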


Drawing diagrams
================
12 changes: 6 additions & 6 deletions doc/dev/index.rst
@@ -135,7 +135,7 @@ in the body of the message.

There are also `other Ceph-related mailing lists`_.

-.. _`other Ceph-related mailing lists`: https://ceph.com/resources/mailing-list-irc/
+.. _`other Ceph-related mailing lists`: https://ceph.com/irc/

IRC
---
@@ -145,7 +145,7 @@ time using `Internet Relay Chat`_.

.. _`Internet Relay Chat`: http://www.irchelp.org/

-See https://ceph.com/resources/mailing-list-irc/ for how to set up your IRC
+See https://ceph.com/irc/ for how to set up your IRC
client and a list of channels.

Submitting patches
@@ -750,7 +750,7 @@ The results of the nightlies are published at http://pulpito.ceph.com/ and
http://pulpito.ovh.sepia.ceph.com:8081/. The developer nick shows in the
test results URL and in the first column of the Pulpito dashboard. The
results are also reported on the `ceph-qa mailing list
-<http://ceph.com/resources/mailing-list-irc/>`_ for analysis.
+<https://ceph.com/irc/>`_ for analysis.

Suites inventory
----------------
@@ -1202,9 +1202,9 @@ Getting ceph-workbench
Since testing in the cloud is done using the `ceph-workbench
ceph-qa-suite`_ tool, you will need to install that first. It is designed
to be installed via Docker, so if you don't have Docker running on your
-development machine, take care of that first. The Docker project has a good
-tutorial called `Get Started with Docker Engine for Linux
-<https://docs.docker.com/linux/>`_ if you unsure how to proceed.
+development machine, take care of that first. If you have not installed
+Docker yet, you can follow `the official tutorial
+<https://docs.docker.com/engine/installation/>`_ to install it.

Once Docker is up and running, install ``ceph-workbench`` by following the
`Installation instructions in the ceph-workbench documentation
2 changes: 1 addition & 1 deletion doc/dev/osd_internals/erasure_coding/developer_notes.rst
@@ -189,7 +189,7 @@ in the registry. The `ErasureCodePluginExample <https://github.com/ceph/ceph/blo

The *ErasureCodePlugin* derived object must provide a factory method
from which the concrete implementation of the *ErasureCodeInterface*
-object can be generated. The `ErasureCodePluginExample plugin <https://github.com/ceph/ceph/blob/v0.78/src/test/osd/ErasureCodePluginExample.cc>`_ reads:
+object can be generated. The `ErasureCodePluginExample plugin <https://github.com/ceph/ceph/blob/v0.78/src/test/erasure-code/ErasureCodePluginExample.cc>`_ reads:

::

11 changes: 5 additions & 6 deletions doc/dev/placement-group.rst
@@ -45,12 +45,11 @@ is the primary and the rest are replicas.
Many PGs can map to one OSD.

A PG represents nothing but a grouping of objects; you configure the
-number of PGs you want (see
-http://ceph.com/wiki/Changing_the_number_of_PGs ), number of
-OSDs * 100 is a good starting point, and all of your stored objects
-are pseudo-randomly evenly distributed to the PGs. So a PG explicitly
-does NOT represent a fixed amount of storage; it represents 1/pg_num
-'th of the storage you happen to have on your OSDs.
+number of PGs you want, number of OSDs * 100 is a good starting point

Contributor:

Is it intentional to drop the link?

Contributor Author:

I searched for it, but I can't find it in our wiki. I think it's OK to remove this link.

+, and all of your stored objects are pseudo-randomly evenly distributed
+to the PGs. So a PG explicitly does NOT represent a fixed amount of
+storage; it represents 1/pg_num'th of the storage you happen to have
+on your OSDs.
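
As an illustration (not part of the diff), the arithmetic in the paragraph
above amounts to the following; the OSD count is a made-up example::

    # Rule of thumb from the text above: total PGs ~= number of OSDs * 100.
    def suggested_pg_count(num_osds, per_osd_target=100):
        return num_osds * per_osd_target

    num_osds = 10                          # hypothetical cluster size
    pg_num = suggested_pg_count(num_osds)
    print(pg_num)                          # 1000 PGs as a starting point
    print(1.0 / pg_num)                    # each PG holds ~1/pg_num'th of the data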

Ignoring the finer points of CRUSH and custom placement, it goes
something like this in pseudocode::
2 changes: 1 addition & 1 deletion doc/rados/operations/crush-map.rst
@@ -1257,4 +1257,4 @@ Further, as noted above, be careful running old versions of the
``ceph-osd`` daemon after reverting to legacy values as the feature
bit is not perfectly enforced.

-.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: http://ceph.com/papers/weil-crush-sc06.pdf
+.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf
2 changes: 1 addition & 1 deletion doc/rados/operations/erasure-code-jerasure.rst
@@ -6,7 +6,7 @@ The *jerasure* plugin is the most generic and flexible plugin, it is
also the default for Ceph erasure coded pools.

The *jerasure* plugin encapsulates the `Jerasure
-<https://bitbucket.org/jimplank/jerasure/>`_ library. It is
+<http://jerasure.org>`_ library. It is
recommended to read the *jerasure* documentation to get a better
understanding of the parameters.

2 changes: 1 addition & 1 deletion doc/rados/troubleshooting/memory-profiling.rst
@@ -139,4 +139,4 @@ For example::
ceph tell osd.0 heap stop_profiler

.. _Logging and Debugging: ../log-and-debug
-.. _Google Heap Profiler: http://google-perftools.googlecode.com/svn/trunk/doc/heapprofile.html
+.. _Google Heap Profiler: http://goog-perftools.sourceforge.net/doc/heap_profiler.html
4 changes: 2 additions & 2 deletions doc/start/get-involved.rst
@@ -12,11 +12,11 @@ These are exciting times in the Ceph community! Get involved!
| **Blog** | Check the Ceph Blog_ periodically to keep track | http://ceph.com/community/blog/ |
| | of Ceph progress and important announcements. | |
+----------------------+-------------------------------------------------+-----------------------------------------------+
-| **Planet Ceph** | Check the blog aggregation on Planet Ceph for | http://ceph.com/community/planet-ceph/ |
+| **Planet Ceph** | Check the blog aggregation on Planet Ceph for | https://ceph.com/category/planet/ |
| | interesting stories, information and | |
| | experiences from the community. | |
+----------------------+-------------------------------------------------+-----------------------------------------------+
-| **Wiki** | Check the Ceph Wiki is a source for more | https://wiki.ceph.com/ |
+| **Wiki** | Check the Ceph Wiki is a source for more | http://wiki.ceph.com/ |
| | community and development related topics. You | |
| | can find there information about blueprints, | |
| | meetups, the Ceph Developer Summits and more. | |