
Commit d910cc5

docs: dont mock imports when running sphinx doctest (Lightning-AI#2396)

* skip if no amp
* dont mock when doctesting
* install extra
1 parent 75f0a20 commit d910cc5

4 files changed: +12 -1 lines changed

.github/workflows/docs-checks.yml

+3
@@ -40,11 +40,14 @@ jobs:
       run: |
         # python -m pip install --upgrade --user pip
         pip install -r requirements/base.txt -U -f https://download.pytorch.org/whl/torch_stable.html -q
+        pip install -r requirements/extra.txt
         pip install -r requirements/docs.txt
         python --version ; pip --version ; pip list
       shell: bash

     - name: Test Documentation
+      env:
+        SPHINX_MOCK_REQUIREMENTS: 0
       run: |
         # First run the same pipeline as Read-The-Docs
         apt-get update && sudo apt-get install -y cmake
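
Exporting SPHINX_MOCK_REQUIREMENTS=0 for the doctest step matters because conf.py uses that variable to decide whether heavy imports are mocked for autodoc: mocked modules let the docs build without the real packages installed, but doctests would then run against mock objects. A minimal sketch of the gating, assuming conf.py reads the variable roughly like this (the default value and package list are illustrative, not the repository's exact code):

# conf.py (sketch): honor SPHINX_MOCK_REQUIREMENTS so CI can disable
# import mocking and let doctests exercise the real packages.
import os

SPHINX_MOCK_REQUIREMENTS = int(os.environ.get('SPHINX_MOCK_REQUIREMENTS', 1))

if SPHINX_MOCK_REQUIREMENTS:
    # Mock optional heavy dependencies so autodoc can import everything
    # without installing them; this list is hypothetical.
    autodoc_mock_imports = ['apex', 'torch_xla', 'torchvision']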

docs/source/apex.rst

+3
@@ -21,6 +21,7 @@ Native torch
 When using PyTorch 1.6+ Lightning uses the native amp implementation to support 16-bit.

 .. testcode::
+    :skipif: not APEX_AVAILABLE and not NATIVE_AMP_AVALAIBLE

     # turn on 16-bit
     trainer = Trainer(precision=16)
@@ -62,6 +63,7 @@ Enable 16-bit
 ^^^^^^^^^^^^^

 .. testcode::
+    :skipif: not APEX_AVAILABLE and not NATIVE_AMP_AVALAIBLE

     # turn on 16-bit
     trainer = Trainer(amp_level='O2', precision=16)
@@ -76,6 +78,7 @@ TPU 16-bit
 16-bit on TPUs is much simpler. To use 16-bit with TPUs, set precision to 16 when using the tpu flag.

 .. testcode::
+    :skipif: not XLA_AVAILABLE

     # DEFAULT
     trainer = Trainer(tpu_cores=8, precision=32)
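
The :skipif: option from sphinx.ext.doctest takes a Python expression and skips the block when it evaluates truthy; the names it references (APEX_AVAILABLE, NATIVE_AMP_AVALAIBLE, XLA_AVAILABLE) are supplied by the doctest_global_setup change in conf.py below. A self-contained illustration of the mechanism (this mimics the evaluation order only, it is not Sphinx's internal code):

# Mimic how sphinx.ext.doctest resolves a :skipif: expression: the setup
# code runs in a namespace, then the expression is evaluated against it.
doctest_global_setup = '''
import importlib.util
APEX_AVAILABLE = importlib.util.find_spec("apex") is not None
NATIVE_AMP_AVALAIBLE = False  # stand-in for the real pytorch_lightning flag
'''

namespace = {}
exec(doctest_global_setup, namespace)
skip = eval("not APEX_AVAILABLE and not NATIVE_AMP_AVALAIBLE", namespace)
print("skip the testcode block:", skip)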

docs/source/conf.py

+5 -1
@@ -416,7 +416,11 @@ def find_source():
 import os
 import torch

-TORCHVISION_AVAILABLE = importlib.util.find_spec('torchvision')
+from pytorch_lightning.utilities import NATIVE_AMP_AVALAIBLE
+APEX_AVAILABLE = importlib.util.find_spec("apex") is not None
+XLA_AVAILABLE = importlib.util.find_spec("torch_xla") is not None
+TORCHVISION_AVAILABLE = importlib.util.find_spec("torchvision") is not None
+

 """
 coverage_skip_undoc_in_source = True
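
For context, the lines touched here live inside the triple-quoted string that conf.py assigns to doctest_global_setup; sphinx.ext.doctest executes that code before each group of testcode blocks, which is what makes the flags visible to the :skipif: expressions above. A trimmed sketch of how the section plausibly reads after this change (lines outside the hunk, such as the importlib import, are assumed):

# conf.py (sketch): everything inside this string runs before each doctest
# group, so the availability flags are in scope for :skipif: expressions.
doctest_global_setup = """
import importlib.util
import os
import torch

from pytorch_lightning.utilities import NATIVE_AMP_AVALAIBLE

APEX_AVAILABLE = importlib.util.find_spec("apex") is not None
XLA_AVAILABLE = importlib.util.find_spec("torch_xla") is not None
TORCHVISION_AVAILABLE = importlib.util.find_spec("torchvision") is not None
"""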

pytorch_lightning/trainer/__init__.py

+1
@@ -721,6 +721,7 @@ def on_train_end(self, trainer, pl_module):
     will still show torch.float32.

 .. testcode::
+    :skipif: not APEX_AVAILABLE and not NATIVE_AMP_AVALAIBLE

     # default used by the Trainer
     trainer = Trainer(precision=32)
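
With the mocks disabled and these guards in place, the Sphinx doctest builder (e.g. sphinx-build -b doctest docs/source <build-dir>) skips the 16-bit and TPU examples on machines without apex, native amp support, or torch_xla, rather than failing at import time.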
