
Commit

[Docs] Remove unnecessary testoutputs (ray-project#39141) (ray-project#39320)

ray-project/pytest-sphinx#5 changed our tooling to only check outputs if you provide a testoutput (previously, if you didn't provide a testoutput, our tooling expected your testcode to produce no output). As a follow-up, this PR removes unnecessary testoutput blocks.

Signed-off-by: Balaji Veeramani <balaji@anyscale.com>
Co-authored-by: Balaji Veeramani <balaji@anyscale.com>
GeneDer and bveeramani committed Sep 6, 2023
1 parent e32c12a commit 54f9bf1
Showing 5 changed files with 2 additions and 44 deletions.
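
For context, here is a minimal sketch of the convention this commit enables (the ``print`` snippets below are illustrative, not taken from the commit): a ``testcode`` block whose output you don't want verified now stands alone, and a ``testoutput`` block is written only when the output should be checked. ::

    .. testcode::

        print("Output checked below")

    .. testoutput::

        Output checked below

    .. testcode::

        print("Output not checked; no testoutput needed")

Before this change, the second snippet would have required a hidden ``testoutput`` block containing ``...`` to pass, which is exactly the boilerplate deleted in the diffs below.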
4 changes: 0 additions & 4 deletions doc/source/data/batch_inference.rst

@@ -454,10 +454,6 @@ Models that have been trained with :ref:`Ray Train <train-docs>` can then be use
     )
     result = trainer.fit()
 
-.. testoutput::
-    :hide:
-
-    ...
 
 **Step 2:** Extract the :class:`Checkpoint <ray.train.Checkpoint>` from the training :class:`Result <ray.train.Result>`.
4 changes: 0 additions & 4 deletions doc/source/data/working-with-pytorch.rst

@@ -77,10 +77,6 @@ Ray Data integrates with :ref:`Ray Train <train-docs>` for easy data ingest for
     )
     trainer.fit()
 
-.. testoutput::
-    :hide:
-
-    ...
 
 For more details, see the :ref:`Ray Train user guide <data-ingest-torch>`.
8 changes: 2 additions & 6 deletions doc/source/ray-contribute/writing-code-snippets.rst

@@ -269,17 +269,13 @@ If your output is nondeterministic and you want to display a sample output, add
 
     0.969461416250246
 
-If your output is hard to test and you don't want to display a sample output, use
-ellipses and `:hide:`. ::
+If your output is hard to test and you don't want to display a sample output, exclude
+the ``testoutput``. ::
 
     .. testcode::
 
        print("This output is hidden and untested")
 
-    .. testoutput::
-        :hide:
-
-        ...
 
 ------------------------------
 How to test examples with GPUs
@@ -68,10 +68,6 @@ And I have this code:
 
     print(ray.get(futures))
 
-.. testoutput::
-    :hide:
-
-    ...
 
 then you will get a mix of True and False. If
 ``check_file()`` runs on the Head Node, or we're running
26 changes: 0 additions & 26 deletions doc/source/train/user-guides/data-loading-preprocessing.rst

@@ -342,10 +342,6 @@ For example, to split only the training dataset, do the following:
     )
     my_trainer.fit()
 
-.. testoutput::
-    :hide:
-
-    ...
 
 Full customization (advanced)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -408,10 +404,6 @@ For use cases not covered by the default config class, you can also fully custom
     )
     my_trainer.fit()
 
-.. testoutput::
-    :hide:
-
-    ...
 
 The subclass must be serializable, since Ray Train copies it from the driver script to the driving actor of the Trainer. Ray Train calls its :meth:`configure <ray.train.DataConfig.configure>` method on the main actor of the Trainer group to create the data iterators for each worker.
 
@@ -460,11 +452,6 @@ First, randomize each :ref:`block <dataset_concept>` of your dataset via :meth:`
     )
     my_trainer.fit()
 
-.. testoutput::
-    :hide:
-
-    ...
-
 
 If your model is sensitive to shuffle quality, call :meth:`Dataset.random_shuffle <ray.data.Dataset.random_shuffle>` to perform a global shuffle.
 
@@ -576,11 +563,6 @@ You can use this with Ray Train Trainers by applying them on the dataset before
     print(StandardScaler.deserialize(metadata["preprocessor_pkl"]))
 
 
-.. testoutput::
-    :hide:
-
-    ...
-
 In this example, we persist the fitted preprocessor using the ``Trainer(metadata={...})`` constructor argument. This arg specifies a dict that will be available from ``TrainContext.get_metadata()`` and ``checkpoint.get_metadata()`` for checkpoints saved from the Trainer. This enables recreation of the fitted preprocessor for inference.
 
 Performance tips
@@ -620,10 +602,6 @@ For example, the following code prefetches 10 batches at a time for each trainin
     )
     my_trainer.fit()
 
-.. testoutput::
-    :hide:
-
-    ...
 
 .. _dataset_cache_performance:
 
@@ -669,10 +647,6 @@ Transformations that you want run per-epoch, such as randomization, should go af
 
     # Pass train_ds to the Trainer
 
-.. testoutput::
-    :hide:
-
-    ...
 
 Adding CPU-only nodes to your cluster
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
