Merge pull request #6029 from niboshi/doctest-single-gpu-environment
Allow doctest to run in single-GPU environment
okuta committed Jan 19, 2019
2 parents b5ceb1f + de455cb commit 930e278
Showing 6 changed files with 38 additions and 30 deletions.
3 changes: 1 addition & 2 deletions chainer/links/theano/theano_function.py
@@ -39,8 +39,7 @@ class TheanoFunction(link.Link):

     .. doctest::
         # See chainer/chainer#5997
-        :skipif: os.environ.get('READTHEDOCS') != 'True' \
-            and chainer.testing.is_requires_satisfied( \
+        :skipif: doctest_helper.skipif_requires_satisfied( \
             'Theano<=1.0.3', 'numpy>=1.16.0')

         >>> import theano
1 change: 0 additions & 1 deletion chainer/testing/__init__.py
@@ -3,7 +3,6 @@
 from chainer.testing.backend import inject_backend_tests  # NOQA
 from chainer.testing.distribution_test import distribution_unittest  # NOQA
 from chainer.testing.helper import assert_warns  # NOQA
-from chainer.testing.helper import is_requires_satisfied  # NOQA
 from chainer.testing.helper import patch  # NOQA
 from chainer.testing.helper import with_requires  # NOQA
 from chainer.testing.helper import without_requires  # NOQA
26 changes: 26 additions & 0 deletions chainer/testing/doctest_helper.py
@@ -0,0 +1,26 @@
+import os
+import pkg_resources
+
+
+_gpu_limit = int(os.getenv('CHAINER_TEST_GPU_LIMIT', '-1'))
+
+
+def skipif(condition):
+    # In the readthedocs build, doctest should never be skipped, because
+    # otherwise the code would disappear from the documentation.
+    if os.environ.get('READTHEDOCS') == 'True':
+        return False
+    return condition
+
+
+def skipif_requires_satisfied(*requirements):
+    ws = pkg_resources.WorkingSet()
+    try:
+        ws.require(*requirements)
+    except pkg_resources.ResolutionError:
+        return False
+    return skipif(True)
+
+
+def skipif_not_enough_cuda_devices(device_count):
+    return skipif(0 <= _gpu_limit < device_count)
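
Note (illustration only, not part of the committed code): roughly how the new helpers evaluate in a hypothetical single-GPU environment. CHAINER_TEST_GPU_LIMIT is read once at import time, so it has to be set before the module is imported, whereas READTHEDOCS is checked on every call.

    import os

    # Assumed environment: doctests may use at most one GPU.
    os.environ['CHAINER_TEST_GPU_LIMIT'] = '1'
    os.environ.pop('READTHEDOCS', None)

    from chainer.testing import doctest_helper

    # Two devices required but only one allowed: the snippet is skipped.
    assert doctest_helper.skipif_not_enough_cuda_devices(2)

    # One device required, one allowed: the snippet runs.
    assert not doctest_helper.skipif_not_enough_cuda_devices(1)

    # With CHAINER_TEST_GPU_LIMIT unset, _gpu_limit defaults to -1 and
    # nothing is ever skipped for lack of devices.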
18 changes: 0 additions & 18 deletions chainer/testing/helper.py
@@ -76,24 +76,6 @@ def without_requires(*requirements):
     return unittest.skipIf(skip, msg)


-def is_requires_satisfied(*requirements):
-    """Returns whether the given requirments are satisfied.
-
-    Args:
-        requirements: A list of string representing the requirements.
-
-    Returns:
-        bool: A boolean indicating whether the given requirements are
-            satisfied.
-    """
-    ws = pkg_resources.WorkingSet()
-    try:
-        ws.require(*requirements)
-    except pkg_resources.ResolutionError:
-        return False
-    return True
-
-
 @contextlib.contextmanager
 def assert_warns(expected):
     with warnings.catch_warnings(record=True) as w:
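
One behavioral difference worth noting (illustration, not part of the diff): the removed is_requires_satisfied returned the bare requirement check, while the replacement doctest_helper.skipif_requires_satisfied additionally forces False on Read the Docs, so snippets there are never dropped from the rendered documentation. A minimal sketch, assuming the listed requirement is satisfied locally:

    import os
    from chainer.testing import doctest_helper

    # Assumption: numpy>=1.16.0 is installed in this environment.
    os.environ.pop('READTHEDOCS', None)
    print(doctest_helper.skipif_requires_satisfied('numpy>=1.16.0'))  # True -> skip

    os.environ['READTHEDOCS'] = 'True'
    print(doctest_helper.skipif_requires_satisfied('numpy>=1.16.0'))  # False -> never skip on RTD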
1 change: 1 addition & 0 deletions docs/source/conf.py
@@ -354,6 +354,7 @@
 from chainer import Link, Chain, ChainList
 import chainer.functions as F
 import chainer.links as L
+from chainer.testing import doctest_helper
 from chainer.training import extensions
 import chainerx
 np.random.seed(0)
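
For context (an assumption, not shown in this diff): these imports sit in the setup code that sphinx.ext.doctest runs before each tested snippet, and :skipif: expressions are evaluated against the same namespace, which is how the documentation below can call doctest_helper directly. A minimal conf.py sketch of that arrangement:

    # Sketch of the assumed arrangement; the exact layout of Chainer's
    # conf.py is not shown in this diff.
    extensions = ['sphinx.ext.doctest']

    # sphinx.ext.doctest executes this code before every tested snippet,
    # and :skipif: expressions can refer to the names it defines.
    doctest_global_setup = (
        'import numpy as np\n'
        'from chainer.testing import doctest_helper\n'
        'np.random.seed(0)\n'
    )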
19 changes: 10 additions & 9 deletions docs/source/guides/gpu.rst
@@ -23,15 +23,6 @@ After reading this section, you will be able to:

    import cupy

-.. testcode::
-   :hide:
-
-   try:
-       with cupy.cuda.Device(1):
-           pass
-   except cupy.cuda.runtime.CUDARuntimeError:
-       raise RuntimeError('doctest in this documentation requires 2 GPUs') from None
-
 Relationship between Chainer and CuPy
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -76,6 +67,7 @@ The allocation takes place on the current device by default.
 The current device can be changed by :class:`cupy.cuda.Device` object as follows:

 .. testcode::
+   :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)

    with cupy.cuda.Device(1):
        x_on_gpu1 = cupy.array([1, 2, 3, 4, 5])
@@ -87,13 +79,15 @@ Chainer provides some convenient functions to automatically switch and choose th
 For example, the :func:`chainer.backends.cuda.to_gpu` function copies a :class:`numpy.ndarray` object to a specified device:

 .. testcode::
+   :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)

    x_cpu = np.ones((5, 4, 3), dtype=np.float32)
    x_gpu = cuda.to_gpu(x_cpu, device=1)

 It is equivalent to the following code using CuPy:

 .. testcode::
+   :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)

    x_cpu = np.ones((5, 4, 3), dtype=np.float32)
    with cupy.cuda.Device(1):
@@ -102,12 +96,14 @@ It is equivalent to the following code using CuPy:
 Moving a device array to the host can be done by :func:`chainer.backends.cuda.to_cpu` as follows:

 .. testcode::
+   :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)

    x_cpu = cuda.to_cpu(x_gpu)

 It is equivalent to the following code using CuPy:

 .. testcode::
+   :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)

    with x_gpu.device:
        x_cpu = x_gpu.get()
@@ -129,6 +125,7 @@ The dummy device object also supports *with* statements like the above example b
 Here are some other examples:

 .. testcode::
+   :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)

    cuda.get_device_from_id(1).use()
    x_gpu1 = cupy.empty((4, 3), dtype=cupy.float32)
@@ -371,6 +368,7 @@ The :meth:`Link.to_gpu` method runs in place, so we cannot use it to make a copy
 In order to make a copy, we can use :meth:`Link.copy` method.

 .. testcode::
+   :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)

    model_1 = model_0.copy()
    model_0.to_gpu(0)
@@ -382,6 +380,7 @@ The :meth:`Link.copy` method copies the link into another instance.
 Then, set up an optimizer:

 .. testcode::
+   :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)

    optimizer = optimizers.SGD()
    optimizer.setup(model_0)
@@ -392,6 +391,7 @@ Before its update, gradients of ``model_1`` must be aggregated to those of ``mod
 Then, we can write a data-parallel learning loop as follows:

 .. testcode::
+   :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)

    batchsize = 100
    datasize = len(x_train)
@@ -423,6 +423,7 @@

 .. testoutput::
    :hide:
+   :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)

    epoch 0
    ...
