Commit cf3553c

docs: enable Sphinx linter & fixing (#19515)

* docs: enable Sphinx linter
* fixes

Borda committed Feb 26, 2024
1 parent e43820a commit cf3553c
Showing 15 changed files with 55 additions and 50 deletions.
9 changes: 7 additions & 2 deletions .pre-commit-config.yaml

@@ -71,6 +71,11 @@ repos:
      additional_dependencies: [tomli]
      args: ["--in-place"]

+ - repo: https://github.com/sphinx-contrib/sphinx-lint
+   rev: v0.9.1
+   hooks:
+     - id: sphinx-lint
+
  - repo: https://github.com/asottile/yesqa
    rev: v1.5.0
    hooks:
@@ -86,10 +91,10 @@ repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: "v0.2.0"
    hooks:
-     - id: ruff
-       args: ["--fix", "--preview"]
      - id: ruff-format
        args: ["--preview"]
+     - id: ruff
+       args: ["--fix", "--preview"]

  - repo: https://github.com/executablebooks/mdformat
    rev: 0.7.17
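The sphinx-lint hook added above flags, among other things, bare single-backtick spans that RST silently renders via the "default role" instead of as inline code. A toy stdlib-only approximation of that one check (this is not sphinx-lint's actual implementation; the regex and function name are illustrative):

```python
import re

# Illustrative approximation of one sphinx-lint rule: flag single-backtick
# spans that are neither part of a ``literal`` nor preceded by a role prefix
# such as :class: -- in RST those hit the default role, not inline code.
DEFAULT_ROLE = re.compile(r"(?<!`)(?<!:)`(?!`)([^`]+)`(?!`)")


def find_default_roles(line: str) -> list:
    """Return the text of suspicious single-backtick spans in one line."""
    return [m.group(1) for m in DEFAULT_ROLE.finditer(line)]


print(find_default_roles("when the `react_ui.counter` increases"))    # ['react_ui.counter']
print(find_default_roles("when the ``react_ui.counter`` increases"))  # []
```

A heuristic like this can misfire on lines mixing roles and literals; the real linter parses the markup properly, which is why delegating to the hook beats hand-rolled greps.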
14 changes: 7 additions & 7 deletions docs/source-app/core_api/lightning_app/communication_content.rst

@@ -87,13 +87,13 @@ And here's the output you get when running the App using the **Lightning CLI**:

.. code-block:: console

- INFO: Your app has started. View it in your browser: http://127.0.0.1:7501/view
- State: {'works': {'w': {'vars': {'counter': 1}}}}
- State: {'works': {'w': {'vars': {'counter': 2}}}}
- State: {'works': {'w': {'vars': {'counter': 3}}}}
- State: {'works': {'w': {'vars': {'counter': 3}}}}
- State: {'works': {'w': {'vars': {'counter': 4}}}}
- ...
+ INFO: Your app has started. View it in your browser: http://127.0.0.1:7501/view
+ State: {'works': {'w': {'vars': {'counter': 1}}}}
+ State: {'works': {'w': {'vars': {'counter': 2}}}}
+ State: {'works': {'w': {'vars': {'counter': 3}}}}
+ State: {'works': {'w': {'vars': {'counter': 3}}}}
+ State: {'works': {'w': {'vars': {'counter': 4}}}}
+ ...
----

@@ -41,9 +41,9 @@ There are a couple of ways you can add a dynamic Work:
    def run(self):
        if not hasattr(self, "work"):
-           # The `Work` component is created and attached here.
+           # The `Work` component is created and attached here.
            setattr(self, "work", Work())
-       # Run the `Work` component.
+       # Run the `Work` component.
        getattr(self, "work").run()
**OPTION 2:** Use the built-in Lightning classes :class:`~lightning.app.structures.Dict` or :class:`~lightning.app.structures.List`
@@ -60,7 +60,7 @@ There are a couple of ways you can add a dynamic Work:
    def run(self):
        if "work" not in self.dict:
-           # The `Work` component is attached here.
+           # The `Work` component is attached here.
            self.dict["work"] = Work()
        self.dict["work"].run()
2 changes: 1 addition & 1 deletion docs/source-app/glossary/environment_variables.rst

@@ -24,4 +24,4 @@ Environment variables are available in all Flows and Works, and can be accessed
    print(os.environ["BAZ"])  # FAZ

.. note::
-    Environment variables are not encrypted. For sensitive values, we recommend using :ref:`Encrypted Secrets <secrets>`.
+    Environment variables are not encrypted. For sensitive values, we recommend using :ref:`Encrypted Secrets <secrets>`.
2 changes: 1 addition & 1 deletion docs/source-app/glossary/secrets.rst

@@ -8,7 +8,7 @@ Encrypted Secrets allow you to pass private data to your apps, like API keys, ac
Secrets provide you with a secure way to store this data in a way that is accessible to Apps so that they can authenticate third-party services/solutions.

.. tip::
-    For non-sensitive configuration values, we recommend using :ref:`plain-text Environment Variables <environment_variables>`.
+    For non-sensitive configuration values, we recommend using :ref:`plain-text Environment Variables <environment_variables>`.

************
Add a secret
2 changes: 1 addition & 1 deletion docs/source-app/glossary/sharing_components.rst

@@ -34,7 +34,7 @@ Now, imagine you have implemented a **KerasScriptRunner** component for training

Here are the best practices steps before sharing the component:

- * **Testing**: Ensure your component is well tested by following the ref:`../testing` guide.
+ * **Testing**: Ensure your component is well tested by following the :doc:`../testing` guide.
* **Documented**: Ensure your component has a docstring and comes with some usage explications.

.. Note:: As a Lightning user, it helps to implement your components thinking someone else is going to use them.
14 changes: 7 additions & 7 deletions docs/source-app/workflows/access_app_state.rst

@@ -50,10 +50,10 @@ And here's the output you get when running the App using **Lightning CLI**:

.. code-block:: console

- INFO: Your app has started. View it in your browser: http://127.0.0.1:7501/view
- State: {'works': {'w': {'vars': {'counter': 1}}}}
- State: {'works': {'w': {'vars': {'counter': 2}}}}
- State: {'works': {'w': {'vars': {'counter': 3}}}}
- State: {'works': {'w': {'vars': {'counter': 3}}}}
- State: {'works': {'w': {'vars': {'counter': 4}}}}
- ...
+ INFO: Your app has started. View it in your browser: http://127.0.0.1:7501/view
+ State: {'works': {'w': {'vars': {'counter': 1}}}}
+ State: {'works': {'w': {'vars': {'counter': 2}}}}
+ State: {'works': {'w': {'vars': {'counter': 3}}}}
+ State: {'works': {'w': {'vars': {'counter': 3}}}}
+ State: {'works': {'w': {'vars': {'counter': 4}}}}
+ ...
@@ -47,7 +47,7 @@ Update React <-- Lightning app
******************************
To change the React app from the Lightning app, use the values from the `lightningState`.

- In this example, when the `react_ui.counter`` increaes in the Lightning app:
+ In this example, when the ``react_ui.counter`` increaes in the Lightning app:

.. literalinclude:: ../../../../../src/lightning/app/cli/react-ui-template/example_app.py
:emphasize-lines: 18, 24
4 changes: 2 additions & 2 deletions docs/source-pytorch/advanced/post_training_quantization.rst

@@ -55,8 +55,8 @@ Usage

Minor code changes are required for the user to get started with Intel® Neural Compressor quantization API. To construct the quantization process, users can specify the below settings via the Python code:

- 1. Calibration Dataloader (Needed for post-training static quantization)
- 2. Evaluation Dataloader and Metric
+ 1. Calibration Dataloader (Needed for post-training static quantization)
+ 2. Evaluation Dataloader and Metric

The code changes that are required for Intel® Neural Compressor are highlighted with comments in the line above.

4 changes: 2 additions & 2 deletions docs/source-pytorch/data/alternatives.rst

@@ -32,8 +32,8 @@ As datasets grow in size and the number of nodes scales, loading training data c
The `StreamingDataset <https://github.com/mosaicml/streaming>`__ can make training on large datasets from cloud storage
as fast, cheap, and scalable as possible.

- This library uses a custom built class:`~torch.utils.data.IterableDataset`. The library recommends iterating through it
- via a regular class:`~torch.utils.data.DataLoader`. This means that support in the ``Trainer`` is seamless:
+ This library uses a custom built :class:`~torch.utils.data.IterableDataset`. The library recommends iterating through it
+ via a regular :class:`~torch.utils.data.DataLoader`. This means that support in the ``Trainer`` is seamless:

.. code-block:: python
2 changes: 1 addition & 1 deletion docs/source-pytorch/ecosystem/asr_nlp_tts.rst

@@ -660,7 +660,7 @@ Hydra makes every aspect of the NeMo model, including the PyTorch Lightning Trai
Using State-Of-The-Art Pre-trained TTS Model
--------------------------------------------

- Generate speech using models trained on `LJSpeech <https://keithito.com/LJ-Speech-Dataset/>`,
+ Generate speech using models trained on `LJSpeech <https://keithito.com/LJ-Speech-Dataset/>`_,
around 24 hours of single speaker data.

See this `TTS notebook <https://github.com/NVIDIA/NeMo/blob/v1.0.0b1/tutorials/tts/1_TTS_inference.ipynb>`_
32 changes: 16 additions & 16 deletions docs/source-pytorch/tuning/profiler_basic.rst

@@ -31,22 +31,22 @@ Once the **.fit()** function has completed, you'll see an output like this:
FIT Profiler Report
- -----------------------------------------------------------------------------------------------
- | Action | Mean duration (s) | Total time (s) |
- -----------------------------------------------------------------------------------------------
- | [LightningModule]BoringModel.prepare_data | 10.0001 | 20.00 |
- | run_training_epoch | 6.1558 | 6.1558 |
- | run_training_batch | 0.0022506 | 0.015754 |
- | [LightningModule]BoringModel.optimizer_step | 0.0017477 | 0.012234 |
- | [LightningModule]BoringModel.val_dataloader | 0.00024388 | 0.00024388 |
- | on_train_batch_start | 0.00014637 | 0.0010246 |
- | [LightningModule]BoringModel.teardown | 2.15e-06 | 2.15e-06 |
- | [LightningModule]BoringModel.on_train_start | 1.644e-06 | 1.644e-06 |
- | [LightningModule]BoringModel.on_train_end | 1.516e-06 | 1.516e-06 |
- | [LightningModule]BoringModel.on_fit_end | 1.426e-06 | 1.426e-06 |
- | [LightningModule]BoringModel.setup | 1.403e-06 | 1.403e-06 |
- | [LightningModule]BoringModel.on_fit_start | 1.226e-06 | 1.226e-06 |
- -----------------------------------------------------------------------------------------------
+ -------------------------------------------------------------------------------------------
+ | Action | Mean duration (s) | Total time (s) |
+ -------------------------------------------------------------------------------------------
+ | [LightningModule]BoringModel.prepare_data | 10.0001 | 20.00 |
+ | run_training_epoch | 6.1558 | 6.1558 |
+ | run_training_batch | 0.0022506 | 0.015754 |
+ | [LightningModule]BoringModel.optimizer_step | 0.0017477 | 0.012234 |
+ | [LightningModule]BoringModel.val_dataloader | 0.00024388 | 0.00024388 |
+ | on_train_batch_start | 0.00014637 | 0.0010246 |
+ | [LightningModule]BoringModel.teardown | 2.15e-06 | 2.15e-06 |
+ | [LightningModule]BoringModel.on_train_start | 1.644e-06 | 1.644e-06 |
+ | [LightningModule]BoringModel.on_train_end | 1.516e-06 | 1.516e-06 |
+ | [LightningModule]BoringModel.on_fit_end | 1.426e-06 | 1.426e-06 |
+ | [LightningModule]BoringModel.setup | 1.403e-06 | 1.403e-06 |
+ | [LightningModule]BoringModel.on_fit_start | 1.226e-06 | 1.226e-06 |
+ -------------------------------------------------------------------------------------------
In this report we can see that the slowest function is **prepare_data**. Now you can figure out why data preparation is slowing down your training.
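The report's two numeric columns are related in a simple way: each action's total time is the sum of its recorded durations, and the mean is that sum divided by the number of calls (so prepare_data's 20.00 s total over two calls yields the ~10.0 s mean). A stdlib sketch of that aggregation, with made-up durations for illustration (names only mirror the report; this is not Lightning's profiler code):

```python
from statistics import mean

# Hypothetical per-call durations in seconds, keyed by action name.
recorded = {
    "prepare_data": [10.0, 10.0],
    "run_training_batch": [0.002, 0.003, 0.002],
}

# One row per action: (name, mean duration, total time), slowest total first.
rows = sorted(
    ((name, mean(ts), sum(ts)) for name, ts in recorded.items()),
    key=lambda row: row[2],
    reverse=True,
)
for name, avg, total in rows:
    print(f"| {name:<20} | {avg:>10.4f} | {total:>10.4f} |")
```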

6 changes: 3 additions & 3 deletions docs/source-pytorch/upgrade/sections/1_7_advanced.rst

@@ -103,15 +103,15 @@
     - `PR11871`_

   * - used ``Trainer.validated_ckpt_path`` attribute
-    - rely on generic read-only property ``Trainer.ckpt_path`` which is set when checkpoints are loaded via ``Trainer.validate(````ckpt_path=...)``
+    - rely on generic read-only property ``Trainer.ckpt_path`` which is set when checkpoints are loaded via ``Trainer.validate(ckpt_path=...)``
     - `PR11696`_

   * - used ``Trainer.tested_ckpt_path`` attribute
-    - rely on generic read-only property ``Trainer.ckpt_path`` which is set when checkpoints are loaded via ``Trainer.test(````ckpt_path=...)``
+    - rely on generic read-only property ``Trainer.ckpt_path`` which is set when checkpoints are loaded via ``Trainer.test(ckpt_path=...)``
     - `PR11696`_

   * - used ``Trainer.predicted_ckpt_path`` attribute
-    - rely on generic read-only property ``Trainer.ckpt_path``, which is set when checkpoints are loaded via ``Trainer.predict(````ckpt_path=...)``
+    - rely on generic read-only property ``Trainer.ckpt_path``, which is set when checkpoints are loaded via ``Trainer.predict(ckpt_path=...)``
     - `PR11696`_

   * - rely on the returned dictionary from ``Callback.on_save_checkpoint``
2 changes: 1 addition & 1 deletion docs/source-pytorch/upgrade/sections/1_9_devel.rst

@@ -26,7 +26,7 @@
     - use DDP instead
     - `PR16386`_ :doc:`DDP <../../accelerators/gpu_expert>`

-  * - used the pl.plugins.ApexMixedPrecisionPlugin`` plugin
+  * - used the ``pl.plugins.ApexMixedPrecisionPlugin`` plugin
     - use PyTorch native mixed precision
     - `PR16039`_
4 changes: 2 additions & 2 deletions docs/source-pytorch/upgrade/sections/1_9_regular.rst

@@ -39,11 +39,11 @@
     - `PR16184`_

   * - called the ``pl.tuner.auto_gpu_select.pick_single_gpu`` function
-    - use Trainer’s flag``devices="auto"``
+    - use Trainer’s flag ``devices="auto"``
     - `PR16184`_

   * - called the ``pl.tuner.auto_gpu_select.pick_multiple_gpus`` functions
-    - use Trainer’s flag``devices="auto"``
+    - use Trainer’s flag ``devices="auto"``
     - `PR16184`_

   * - used Trainer’s flag ``accumulate_grad_batches`` with a scheduling dictionary value