diff --git a/doc/architecture.rst b/doc/architecture.rst
index 20b335070319b..f9bdfa2823aba 100644
--- a/doc/architecture.rst
+++ b/doc/architecture.rst
@@ -1581,7 +1581,7 @@ instance for high availability.
.. _Monitoring OSDs and PGs: ../rados/operations/monitoring-osd-pg
.. _Heartbeats: ../rados/configuration/mon-osd-interaction
.. _Monitoring OSDs: ../rados/operations/monitoring-osd-pg/#monitoring-osds
-.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: http://ceph.com/papers/weil-crush-sc06.pdf
+.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf
.. _Data Scrubbing: ../rados/configuration/osd-config-ref#scrubbing
.. _Report Peering Failure: ../rados/configuration/mon-osd-interaction#osds-report-peering-failure
.. _Troubleshooting Peering Failure: ../rados/troubleshooting/troubleshooting-pg#placement-group-down-peering-failure
diff --git a/doc/dev/blkin.rst b/doc/dev/blkin.rst
index 9427202496cf1..8e0320fe15f34 100644
--- a/doc/dev/blkin.rst
+++ b/doc/dev/blkin.rst
@@ -16,7 +16,7 @@ in realtime. The LTTng traces can then be visualized with Twitter's
Zipkin_.
.. _Dapper: http://static.googleusercontent.com/media/research.google.com/el//pubs/archive/36356.pdf
-.. _Zipkin: http://twitter.github.io/zipkin/
+.. _Zipkin: http://zipkin.io/
Installing Blkin
diff --git a/doc/dev/development-workflow.rst b/doc/dev/development-workflow.rst
index 9374d8b69329b..9561899bf94f4 100644
--- a/doc/dev/development-workflow.rst
+++ b/doc/dev/development-workflow.rst
@@ -55,7 +55,7 @@ Release Cycle
Four times a year, the development roadmap is discussed online during
-the `Ceph Developer Summit `_. A
+the `Ceph Developer Summit `_. A
new stable release (hammer, infernalis, jewel ...) is published at the same
frequency. Every other release (firefly, hammer, jewel...) is a `Long Term
Stable (LTS) <../../releases>`_. See `Understanding the release cycle
@@ -126,7 +126,7 @@ Running and interpreting teuthology integration tests
The :doc:`/dev/sepia` runs `teuthology
<https://github.com/ceph/teuthology>`_ integration tests `on a regular basis `_ and the
results are posted on `pulpito <http://pulpito.ceph.com/>`_ and the
-`ceph-qa mailing list `_.
+`ceph-qa mailing list `_.
* The job failures are `analyzed by quality engineers and developers
`_
diff --git a/doc/dev/documenting.rst b/doc/dev/documenting.rst
index afd6efa952858..602f3c769c454 100644
--- a/doc/dev/documenting.rst
+++ b/doc/dev/documenting.rst
@@ -30,7 +30,7 @@ API`_ provides a complete example. It is pulled into Sphinx by
`librados.rst`_, which is rendered at :doc:`/rados/api/librados`.
.. _`librados C API`: https://github.com/ceph/ceph/blob/master/src/include/rados/librados.h
-.. _`librados.rst`: https://raw.github.com/ceph/ceph/master/doc/api/librados.rst
+.. _`librados.rst`: https://github.com/ceph/ceph/raw/master/doc/rados/api/librados.rst
Drawing diagrams
================
diff --git a/doc/dev/index.rst b/doc/dev/index.rst
index 2cb002dc8df23..2c207d51fe08b 100644
--- a/doc/dev/index.rst
+++ b/doc/dev/index.rst
@@ -135,7 +135,7 @@ in the body of the message.
There are also `other Ceph-related mailing lists`_.
-.. _`other Ceph-related mailing lists`: https://ceph.com/resources/mailing-list-irc/
+.. _`other Ceph-related mailing lists`: https://ceph.com/irc/
IRC
---
@@ -145,7 +145,7 @@ time using `Internet Relay Chat`_.
.. _`Internet Relay Chat`: http://www.irchelp.org/
-See https://ceph.com/resources/mailing-list-irc/ for how to set up your IRC
+See https://ceph.com/irc/ for how to set up your IRC
client and a list of channels.
Submitting patches
@@ -750,7 +750,7 @@ The results of the nightlies are published at http://pulpito.ceph.com/ and
http://pulpito.ovh.sepia.ceph.com:8081/. The developer nick shows in the
test results URL and in the first column of the Pulpito dashboard. The
results are also reported on the `ceph-qa mailing list
-`_ for analysis.
+`_ for analysis.
Suites inventory
----------------
@@ -1202,9 +1202,9 @@ Getting ceph-workbench
Since testing in the cloud is done using the `ceph-workbench
ceph-qa-suite`_ tool, you will need to install that first. It is designed
to be installed via Docker, so if you don't have Docker running on your
-development machine, take care of that first. The Docker project has a good
-tutorial called `Get Started with Docker Engine for Linux
-`_ if you unsure how to proceed.
+development machine, take care of that first. You can follow `the official
+tutorial`_ to install it if
+you have not already done so.
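A quick way to confirm the Docker installation works is to run the
standard test image; a minimal sanity check::
    # pulls a tiny test image and runs it; prints a greeting on success
    docker run hello-world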
Once Docker is up and running, install ``ceph-workbench`` by following the
`Installation instructions in the ceph-workbench documentation
diff --git a/doc/dev/osd_internals/erasure_coding/developer_notes.rst b/doc/dev/osd_internals/erasure_coding/developer_notes.rst
index cdab34ce41eeb..a9ef9b55c61f0 100644
--- a/doc/dev/osd_internals/erasure_coding/developer_notes.rst
+++ b/doc/dev/osd_internals/erasure_coding/developer_notes.rst
@@ -189,7 +189,7 @@ in the registry.
-object can be generated. The `ErasureCodePluginExample `_ reads:
+object can be generated. The `ErasureCodePluginExample plugin `_ reads:
::
diff --git a/doc/dev/placement-group.rst b/doc/dev/placement-group.rst
index a544e99d9dfd2..3c067ea3fe6f3 100644
--- a/doc/dev/placement-group.rst
+++ b/doc/dev/placement-group.rst
@@ -45,12 +45,11 @@ is the primary and the rest are replicas.
Many PGs can map to one OSD.
A PG represents nothing but a grouping of objects; you configure the
-number of PGs you want (see
-http://ceph.com/wiki/Changing_the_number_of_PGs ), number of
-OSDs * 100 is a good starting point, and all of your stored objects
-are pseudo-randomly evenly distributed to the PGs. So a PG explicitly
-does NOT represent a fixed amount of storage; it represents 1/pg_num
-'th of the storage you happen to have on your OSDs.
+number of PGs you want (the number of OSDs * 100 is a good starting
+point), and all of your stored objects are pseudo-randomly evenly
+distributed to the PGs. So a PG explicitly does NOT represent a fixed
+amount of storage; it represents 1/pg_num'th of the storage you happen
+to have on your OSDs.
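To make the sizing rule concrete: with 10 OSDs, 10 * 100 = 1000, which
rounds up to a pg_num of 1024. A minimal sketch of creating a pool with
that value (the pool name here is purely illustrative)::
    # 10 OSDs * 100 = 1000, rounded up to the next power of two
    ceph osd pool create mypool 1024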
Ignoring the finer points of CRUSH and custom placement, it goes
something like this in pseudocode::
diff --git a/doc/rados/operations/crush-map.rst b/doc/rados/operations/crush-map.rst
index e28627b113e8e..c75887ca8ed08 100644
--- a/doc/rados/operations/crush-map.rst
+++ b/doc/rados/operations/crush-map.rst
@@ -1257,4 +1257,4 @@ Further, as noted above, be careful running old versions of the
``ceph-osd`` daemon after reverting to legacy values as the feature
bit is not perfectly enforced.
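For reference, you can inspect the active tunables before and after such
a change; a minimal sketch, subject to the caveats above::
    # show the currently active CRUSH tunables
    ceph osd crush show-tunables
    # revert to legacy tunables (see the warnings above)
    ceph osd crush tunables legacy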
-.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: http://ceph.com/papers/weil-crush-sc06.pdf
+.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf
diff --git a/doc/rados/operations/erasure-code-jerasure.rst b/doc/rados/operations/erasure-code-jerasure.rst
index bd2917ed137a5..b0e6020cdf27a 100644
--- a/doc/rados/operations/erasure-code-jerasure.rst
+++ b/doc/rados/operations/erasure-code-jerasure.rst
@@ -6,7 +6,7 @@ The *jerasure* plugin is the most generic and flexible plugin, it is
also the default for Ceph erasure coded pools.
The *jerasure* plugin encapsulates the `Jerasure
-`_ library. It is
+`_ library. It is
recommended to read the *jerasure* documentation to get a better
understanding of the parameters.
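As a starting point, a profile using the plugin's default Reed-Solomon
technique can be created and attached to a pool; a minimal sketch, with
illustrative profile and pool names::
    ceph osd erasure-code-profile set myprofile \
        plugin=jerasure k=2 m=1 technique=reed_sol_van
    ceph osd pool create ecpool 32 32 erasure myprofile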
diff --git a/doc/rados/troubleshooting/memory-profiling.rst b/doc/rados/troubleshooting/memory-profiling.rst
index 5322e7b22873c..e2396e2fd3f23 100644
--- a/doc/rados/troubleshooting/memory-profiling.rst
+++ b/doc/rados/troubleshooting/memory-profiling.rst
@@ -139,4 +139,4 @@ For example::
ceph tell osd.0 heap stop_profiler
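For context, a complete profiling session brackets the workload of
interest between starting and stopping the profiler; a minimal sketch,
assuming osd.0::
    ceph tell osd.0 heap start_profiler
    ceph tell osd.0 heap dump
    ceph tell osd.0 heap stop_profiler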
.. _Logging and Debugging: ../log-and-debug
-.. _Google Heap Profiler: http://google-perftools.googlecode.com/svn/trunk/doc/heapprofile.html
+.. _Google Heap Profiler: http://goog-perftools.sourceforge.net/doc/heap_profiler.html
diff --git a/doc/start/get-involved.rst b/doc/start/get-involved.rst
index cfe3f4d64ddeb..5cbb1d6dea747 100644
--- a/doc/start/get-involved.rst
+++ b/doc/start/get-involved.rst
@@ -12,11 +12,11 @@ These are exciting times in the Ceph community! Get involved!
| **Blog** | Check the Ceph Blog_ periodically to keep track | http://ceph.com/community/blog/ |
| | of Ceph progress and important announcements. | |
+----------------------+-------------------------------------------------+-----------------------------------------------+
-| **Planet Ceph** | Check the blog aggregation on Planet Ceph for | http://ceph.com/community/planet-ceph/ |
+| **Planet Ceph** | Check the blog aggregation on Planet Ceph for | https://ceph.com/category/planet/ |
| | interesting stories, information and | |
| | experiences from the community. | |
+----------------------+-------------------------------------------------+-----------------------------------------------+
-| **Wiki** | Check the Ceph Wiki is a source for more | https://wiki.ceph.com/ |
+| **Wiki** | The Ceph Wiki is a source for more | http://wiki.ceph.com/ |
| | community and development related topics. You | |
| | can find there information about blueprints, | |
| | meetups, the Ceph Developer Summits and more. | |