
Commit

docs: fix broken external links
rickstaa committed Feb 7, 2022
1 parent 44f8679 commit 51317c0
Showing 14 changed files with 33 additions and 63 deletions.
Original file line number Diff line number Diff line change
@@ -67,8 +67,8 @@ def get_lr_scheduler(optimizer, decaying_lr_type, lr_start, lr_final, steps):
         :obj:`torch.optim.lr_scheduler`: A learning rate scheduler object.

     .. seealso::
-        See the `pytorch <https://pytorch.org/docs/stable/optim.html>`_ documentation on
-        how to implement other decay options.
+        See the :torch:`pytorch <docs/stable/optim.html>` documentation on how to
+        implement other decay options.
     """  # noqa: E501
     if decaying_lr_type.lower() != "constant" and lr_start != lr_final:
         if decaying_lr_type.lower() == "exponential":
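The exponential branch of the hunk above can be sketched with plain PyTorch; the scheduler choice and the derivation of the per-step factor are illustrative assumptions, not the package's exact implementation.

```python
import torch

# Sketch: decay the learning rate from lr_start to lr_final over `steps`
# optimizer steps, assuming an exponential schedule.
model = torch.nn.Linear(4, 2)
lr_start, lr_final, steps = 1e-3, 1e-5, 100
optimizer = torch.optim.Adam(model.parameters(), lr=lr_start)

# Per-step multiplicative factor so that lr_start * gamma**steps == lr_final.
gamma = (lr_final / lr_start) ** (1.0 / steps)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

for _ in range(steps):
    optimizer.step()   # normally preceded by a backward pass
    scheduler.step()   # multiply the current learning rate by gamma
```

After the loop, `scheduler.get_last_lr()[0]` equals `lr_final` up to floating-point error.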
@@ -27,8 +27,8 @@ def get_lr_scheduler(decaying_lr_type, lr_start, lr_final, steps):
         scheduler object.

     .. seealso::
-        See the `tensorflow <https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/schedules>`_
-        documentation on how to implement other decay options.
+        See the :tf:`tensorflow <keras/optimizers/schedules>` documentation on how to
+        implement other decay options.
     """  # noqa: E501
     if decaying_lr_type.lower() != "constant" and lr_start != lr_final:
         if decaying_lr_type.lower() == "exponential":
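For the TensorFlow variant, the same decay can be sketched with a `tf.keras` schedule; the specific schedule class and parameterization below are assumptions for illustration.

```python
import tensorflow as tf

# Sketch: an exponential schedule that decays from lr_start to lr_final
# over `steps` training steps.
lr_start, lr_final, steps = 1e-3, 1e-5, 100
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=lr_start,
    decay_steps=steps,
    # lr(step) = lr_start * decay_rate ** (step / decay_steps)
    decay_rate=lr_final / lr_start,
)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
```

Evaluating `schedule(steps)` returns `lr_final` up to floating-point error.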
9 changes: 4 additions & 5 deletions bayesian_learning_control/control/algos/tf2/lac/lac.py
@@ -477,11 +477,10 @@ def save(self, path, checkpoint_name="checkpoint"):
         .. note::
             This function saved the model weights using the
-            :meth:`tf.keras.Model.save_weights` method
-            (see https://www.tensorflow.org/api_docs/python/tf/keras/Model#save_weights)
-            . The model should therefore be restored using the
-            :meth:`tf.keras.Model.load_weights` method (see
-            https://www.tensorflow.org/api_docs/python/tf/keras/Model#load_weights). If
+            :meth:`tf.keras.Model.save_weights` method (see
+            :tf:`keras/Model#save_weights`). The model should therefore be restored
+            using the :meth:`tf.keras.Model.load_weights` method (see
+            :tf:`keras/Model#load_weights`). If
             you want to deploy the full model use the :meth:`.export` method instead.
         """
         path = Path(path)
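The save/restore cycle the note describes can be sketched with a stand-in Keras model (the real package restores its own LAC agent class; the checkpoint path here is hypothetical):

```python
import os
import tempfile

import tensorflow as tf


def make_model():
    # Stand-in for the agent's network; build() creates the weight tensors.
    model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
    model.build((None, 4))
    return model


ckpt = os.path.join(tempfile.mkdtemp(), "checkpoint")  # hypothetical path
model = make_model()
model.save_weights(ckpt)       # saves weights only, not the full SavedModel

restored = make_model()        # the architecture must be rebuilt first,
restored.load_weights(ckpt)    # then the weights can be loaded into it
```

This is why the note insists on `load_weights` for restoring: only the weights are on disk, so the model class itself must be re-instantiated before loading.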
9 changes: 4 additions & 5 deletions bayesian_learning_control/control/algos/tf2/sac/sac.py
@@ -448,11 +448,10 @@ def save(self, path, checkpoint_name="checkpoint"):
         .. note::
             This function saved the model weights using the
-            :meth:`tf.keras.Model.save_weights` method
-            (see https://www.tensorflow.org/api_docs/python/tf/keras/Model#save_weights)
-            . The model should therefore be restored using the
-            :meth:`tf.keras.Model.load_weights` method (see
-            https://www.tensorflow.org/api_docs/python/tf/keras/Model#load_weights). If
+            :meth:`tf.keras.Model.save_weights` method (see
+            :tf:`keras/Model#save_weights`). The model should therefore be restored
+            using the :meth:`tf.keras.Model.load_weights` method (see
+            :tf:`keras/Model#load_weights`). If
             you want to deploy the full model use the :meth:`.export` method instead.
         """
         path = Path(path)
3 changes: 1 addition & 2 deletions bayesian_learning_control/utils/mpi_utils/mpi_tf2.py
@@ -50,10 +50,9 @@ class MpiAdamOptimizer(object):
     The compute_gradients method is taken from Baselines `MpiAdamOptimizer`_.
     For documentation on method arguments, see the Tensorflow docs page for
-    the base `AdamOptimizer`_.
+    the base :tf:`AdamOptimizer <train/AdamOptimizer>`.

     .. _`MpiAdamOptimizer`: https://github.com/openai/baselines/blob/master/baselines/common/mpi_adam_optimizer.py
-    .. _`AdamOptimizer`: https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer
     """  # noqa: E501

     def __init__(self, **kwargs):
1 change: 0 additions & 1 deletion docs/source/control/algorithms.rst
@@ -20,7 +20,6 @@ the following RL agents:

    algorithms/sac
    algorithms/lac
-   algorithms/gpl

 Imitation Learning Agents
 =========================
5 changes: 1 addition & 4 deletions docs/source/control/algorithms/lac.rst
@@ -235,10 +235,7 @@ The PyTorch version of the LAC algorithm is implemented by subclassing the :clas
 result the model weights are saved using the 'model_state' dictionary (
 :attr:`~bayesian_learning_control.control.algos.pytorch.lac.LAC.state_dict`). This saved weights can be found in
 the "torch_save/model_state.pt "file. For an example of how to load a model using this file, see
-:ref:`saving_and_loading` or the `PyTorch documentation`_.
-
-.. _`PyTorch documentation`: https://pytorch.org/tutorials/beginner/saving_loading_models.html
-
+:ref:`saving_and_loading` or the :torch:`PyTorch documentation <tutorials/beginner/saving_loading_models.html>`.

 Documentation: Tensorflow Version
 ---------------------------------
4 changes: 1 addition & 3 deletions docs/source/control/algorithms/sac.rst
@@ -69,9 +69,7 @@ The PyTorch version of the SAC algorithm is implemented by subclassing the :clas
 result, the model weights are saved using the 'model_state' dictionary (
 :attr:`~bayesian_learning_control.control.algos.pytorch.sac.SAC.state_dict`). This saved weights can be found in
 the ``torch_save/model_state.pt`` file. For an example of how to load a model using this file, see
-:ref:`saving_and_loading` or the `PyTorch documentation`_.
-
-.. _`PyTorch documentation`: https://pytorch.org/tutorials/beginner/saving_loading_models.html
+:ref:`saving_and_loading` or the :torch:`PyTorch documentation <tutorials/beginner/saving_loading_models.html>`.

 Documentation: Tensorflow Version
 ---------------------------------
9 changes: 3 additions & 6 deletions docs/source/control/saving_and_loading.rst
@@ -169,7 +169,7 @@ Load Pytorch Policy
 ~~~~~~~~~~~~~~~~~~~

 Pytorch Policies can be loaded using the :obj:`torch.load` method. For more information on how to load PyTorch models see
-the `PyTorch documentation`_.
+the :torch:`PyTorch documentation <tutorials/beginner/saving_loading_models.html>`.

 .. code-block:: python
    :linenos:
@@ -216,9 +216,7 @@ In this example, observe that
 Additionally, each algorithm also contains a :obj:`~bayesian_learning_control.control.algos.pytorch.lac.LAC.restore` method which serves as a
 wrapper around the :obj:`torch.load` and :obj:`torch.nn.Module.load_state_dict` methods.

-.. _`Pytorch Documentation`: https://pytorch.org/tutorials/beginner/saving_loading_models.html
-
 Load Tensorflow Policy
 ~~~~~~~~~~~~~~~~~~~~~~

 .. code-block:: python
@@ -273,7 +271,7 @@ As stated above, the Tensorflow version of the algorithm also saves the full mod
 with `TFLite`_, `TensorFlow.js`_, `TensorFlow Serving`_, or `TensorFlow Hub`_. For more information, see :ref:`the hardware deployment documentation <hardware>`.

 .. important::
-    TensorFlow also PyTorch multiple ways to deploy trained models to hardware (see the `PyTorch serving documentation`_). However, at the time of writing,
+    TensorFlow also PyTorch multiple ways to deploy trained models to hardware (see the :torch:`PyTorch serving documentation <blog/model-serving-in-pyorch/>`). However, at the time of writing,
     these methods currently do not support the agents used in the BLC package. For more information, see
     `this issue <https://github.com/pytorch/pytorch/issues/29843>`_.
@@ -283,4 +281,3 @@ with `TFLite`_, `TensorFlow.js`_, `TensorFlow Serving`_, or `TensorFlow Hub`_. F
 .. _`TensorFlow Serving`: https://www.tensorflow.org/tfx/tutorials/serving/rest_simple
 .. _`TensorFlow Hub`: https://www.tensorflow.org/hub
 .. _`SavedModel format`: https://www.tensorflow.org/guide/saved_model
-.. _`PyTorch serving documentation`: https://pytorch.org/blog/model-serving-in-pyorch/
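The "Load Pytorch Policy" pattern touched in this file can be sketched as follows; the network is a stand-in, not the package's actual policy class, and the file name merely mirrors the ``torch_save/model_state.pt`` convention mentioned earlier.

```python
import os
import tempfile

import torch


def make_policy():
    # Stand-in policy network; the real package rebuilds its own agent class.
    return torch.nn.Sequential(
        torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
    )


path = os.path.join(tempfile.mkdtemp(), "model_state.pt")

policy = make_policy()
torch.save(policy.state_dict(), path)       # save only the weights

restored = make_policy()                    # rebuild the architecture first,
restored.load_state_dict(torch.load(path))  # then load the saved state dict
restored.eval()                             # switch to eval mode for inference
```

This mirrors the `restore` wrapper described above: `torch.load` reads the dictionary from disk and `load_state_dict` copies it into a freshly constructed module.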
25 changes: 9 additions & 16 deletions docs/source/dev/doc_dev.rst
@@ -4,11 +4,11 @@ Release documentation

 .. contents:: Table of Contents

-The BLC framework contains two `Github actions`_ that automatically check and
+The BLC framework contains two :blc:`Github actions <actions>` that automatically check and
 deploy new documentation:

-* The `docs_check_ci`_ action checks your changes to see if the documentation still builds.
-* The `docs_publish_ci`_ action deploys your documentation if a new version of the BLC framework is released.
+* The :blc:`docs_check_ci <blob/main/.github/workflows/docs_check_ci.yml>` action checks your changes to see if the documentation still builds.
+* The :blc:`docs_publish_ci <blob/main/.github/workflows/docs_publish_ci.yml>` action deploys your documentation if a new version of the BLC framework is released.

 Automatic build instructions
 ============================
@@ -17,14 +17,10 @@ To successfully deploy your new documentation, you have to follow the following

 #. Create a new branch for the changes you want to make to the documentation (e.g. ``docs_change`` branch).
 #. Make your changes to this branch.
-#. Commit your changes. This will trigger the `docs_check_ci`_ action to run.
+#. Commit your changes. This will trigger the :blc:`docs_check_ci <blob/main/.github/workflows/docs_check_ci.yml>` action to run.
 #. Create a pull request into the main branch if this action ran without errors.
 #. Add a version bump label (``bump:patch``, ``bump:minor`` or ``bump:major``) to the pull request.
-#. Merge the pull request into the main branch. The documentation will now be deployed using the `docs_publish_ci`_ action.
-
-.. _`Github actions`: https://github.com/features/actions
-.. _`docs_check_ci`: https://github.com/rickstaa/bayesian-learning-control/blob/main/.github/workflows/docs_check_ci.yml
-.. _`docs_publish_ci`: https://github.com/rickstaa/bayesian-learning-control/blob/main/.github/workflows/docs_publish_ci.yml
+#. Merge the pull request into the main branch. The documentation will now be deployed using the :blc:`docs_publish_ci <blob/main/.github/workflows/docs_publish_ci.yml>` action.

 .. tip::
@@ -59,7 +55,7 @@ Build the documentation
 Build HTML documentation
 ~~~~~~~~~~~~~~~~~~~~~~~~

-To build the `HTML`_ documentation, go into the `docs/`_ directory and run the
+To build the `HTML`_ documentation, go into the :blc:`docs/ <tree/main/docs>` directory and run the
 ``make html`` command. This command will generate the html documentation
 inside the ``docs/build/html`` directory.
@@ -72,7 +68,7 @@ inside the ``docs/build/html`` directory.
 Build LATEX documentation
 ~~~~~~~~~~~~~~~~~~~~~~~~~

-To build the `LATEX`_ documentation, go into the `docs/`_ directory and run the
+To build the `LATEX`_ documentation, go into the :blc:`docs/ <tree/main/docs>` directory and run the
 ``make latex`` command. This command will generate the html documentation
 inside the ``docs/build/latex`` directory.

Expand All @@ -82,13 +78,10 @@ Deploying
---------

To deploy documentation to the Github Pages site for the repository,
push the documentation to the `main`_ branch and run the
``make gh-pages`` command inside the `docs/`_ directory.
push the documentation to the :blc:`main <tree/main>` branch and run the
``make gh-pages`` command inside the :blc:`docs/ <tree/main/docs>` directory.

.. warning::

Please make sure you are on the `main`_ branch while building the documentation. Otherwise,
errors will greet you.

.. _`docs/`: https://github.com/rickstaa/bayesian-learning-control/tree/main/docs
.. _`main`: https://github.com/rickstaa/bayesian-learning-control/tree/main
9 changes: 2 additions & 7 deletions docs/source/dev/release_dev.rst
@@ -22,17 +22,14 @@ Markdown guidelines:
 .. _`remark-lint`: https://github.com/remarkjs/remark-lint

 .. note::
-    The BLC framework contains several `GitHub actions`_, which check code changes
+    The BLC framework contains several :blc:`GitHub actions <actions>`, which check code changes
     against these coding guidelines. As a result, when the above guidelines are not met, you will
     receive an error/warning when you create a pull request. Some of these actions will create pull requests
     which you can use to fix some of these violations. For other errors/warning, you are expected to handle
     them yourself before merging them into the master branch. If you think a code guideline is not correct
     or your code structure doesn't allow you to respect the guideline, please state so in the
     pull request.

-.. _`Github Actions`: https://github.com/rickstaa/bayesian-learning-control/actions
-
-
 Pre-commit hooks
 ----------------

@@ -55,13 +52,11 @@ Before releasing the package, make sure the following steps are performed:

 #. Create a new branch on which you implement your changes.
 #. Commit your changes.
 #. Create a pull request to pull the changes of your development branch onto the master branch.
-#. Make sure that all the `pull request checks`_ were successful.
+#. Make sure that all the :blc:`pull request checks <actions>` were successful.
 #. Add a version label to (``bump:patch``, ``bump:minor`` or ``bump:major``) to the pull request.
 #. Squash and merge your branch with the main branch.
 #. Create a release using the GitHub draft release tool.

-.. _`pull request checks`: https://github.com/rickstaa/bayesian-learning-control/actions
-
 Commit guidelines
 -----------------
2 changes: 1 addition & 1 deletion docs/source/hardware/hardware.rst
@@ -13,7 +13,7 @@ Deploy PyTorch Algorithms
 =========================

 .. attention::
-    PyTorch also provides multiple ways to deploy trained models to hardware (see the `PyTorch serving documentation`_).
+    PyTorch also provides multiple ways to deploy trained models to hardware (see the :torch:`PyTorch serving documentation <blog/model-serving-in-pyorch>`).
     However, at the time of writing, these methods currently do not support the agents used in the BLC package.
     For more information, see `this issue <https://github.com/pytorch/pytorch/issues/29843>`_.
4 changes: 1 addition & 3 deletions docs/source/user/installation.rst
@@ -93,12 +93,10 @@ this environment. The BLC has two versions you can install:

 We choose PyTorch as the default backend as it, in our opinion, is easier to work with than Tensorflow. However, at the time of writing, it
 is slightly slower than the Tensorflow backend. This is caused because the agents used in the BLC package use components that are
-not yet supported by `TorchScript`_ (responsible for creating a fast compiled version of PyTorch script). As PyTorch has shown to be faster
+not yet supported by :torch:`TorchScript <docs/stable/jit.html>` (responsible for creating a fast compiled version of PyTorch script). As PyTorch has shown to be faster
 in most implementations, this will likely change in the future. You can track the status of this speed problem
 `here <https://github.com/pytorch/pytorch/issues/29843>`_.

-.. _`TorchScript`: https://pytorch.org/docs/stable/jit.html
-
 Install the Pytorch version
 ---------------------------
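The TorchScript limitation mentioned in this hunk concerns ahead-of-time compilation of modules; the following is a minimal sketch of what scripting looks like for a toy module (the BLC agents themselves are what TorchScript cannot yet compile):

```python
import torch


class Scale(torch.nn.Module):
    """Toy module standing in for a scriptable network."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return 2.0 * x


# Compile the module to TorchScript; modules using unsupported components
# would raise an error at this point instead.
scripted = torch.jit.script(Scale())
out = scripted(torch.ones(3))
```

The scripted module behaves like the original but runs through the compiled TorchScript interpreter.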
8 changes: 2 additions & 6 deletions docs/source/utils/loggers.rst
@@ -53,11 +53,9 @@ The internal state is wiped clean after the call to :meth:`~bayesian_learning_co
 (to prevent leakage into the statistics at the next epoch). Finally, :meth:`~bayesian_learning_control.utils.log_utils.logx.EpochLogger.dump_tabular`
 is called to write the diagnostics to file, Tensorboard and/or stdout.

-Next, let's use the `Pytorch Classifier`_ tutorial to look at a full training procedure with the logger embedded, to highlight configuration and model
+Next, let's use the :torch:`Pytorch Classifier <tutorials/beginner/blitz/cifar10_tutorial.html>` tutorial to look at a full training procedure with the logger embedded, to highlight configuration and model
 saving as well as diagnostic logging:

-.. _`Pytorch Classifier`: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html
-
 .. code-block:: python
    :linenos:
    :emphasize-lines: 13, 52-53, 81-88, 96-98, 108, 138-141, 142, 148, 151-158, 160
@@ -252,14 +250,12 @@ Logging and Tensorflow
 The preceding example was given in Pytorch. For Tensorflow, everything is the same except for L42-43:
 instead of :meth:`~bayesian_learning_control.utils.log_utils.logx.EpochLogger.setup_pytorch_saver`, you would
 use :meth:`~bayesian_learning_control.utils.log_utils.logx.EpochLogger.setup_tf_saver` and you would pass it
-`a Tensorflow Module`_ (the algorithm you are training) as an argument.
+:tf:`a Tensorflow Module <nn>` (the algorithm you are training) as an argument.

 The behavior of :meth:`~bayesian_learning_control.utils.log_utils.logx.EpochLogger.save_state` is the same as in the
 PyTorch case: each time it is called,
 it'll save the latest version of the Tensorflow module.

-.. _`a Tensorflow module`: https://www.tensorflow.org/api_docs/python/tf/nn
-
 Logging and MPI
 ---------------
