docformatter update to 1.5 #16267

Closed
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -56,7 +56,7 @@ repos:
name: Upgrade code

- repo: https://github.com/PyCQA/docformatter
rev: v1.4
rev: v1.5.0
hooks:
- id: docformatter
args: [--in-place, --wrap-summaries=115, --wrap-descriptions=120]
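With the pin bumped to v1.5.0 (the rev field is what pre-commit autoupdate refreshes), the hook can be exercised locally with pre-commit run docformatter --all-files, assuming pre-commit is installed; the wrap widths above are the settings that produced the docstring re-flows in the rest of this diff.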
2 changes: 1 addition & 1 deletion src/lightning_app/api/http_methods.py
@@ -23,7 +23,7 @@ def _signature_proxy_function():

@dataclass
class _FastApiMockRequest:
"""This class is meant to mock FastAPI Request class that isn't pickle-able.
"""This class is meant to mock FastAPI Request class that isn't pickle- able.

If a user relies on FastAPI Request annotation, the Lightning framework
patches the annotation before pickling and replace them right after.
2 changes: 1 addition & 1 deletion src/lightning_app/cli/cmd_ssh_keys.py
@@ -30,7 +30,7 @@ def as_table(self) -> Table:


class _SSHKeyManager:
"""_SSHKeyManager implements API calls specific to Lightning AI SSH-Keys."""
"""_SSHKeyManager implements API calls specific to Lightning AI SSH- Keys."""

def __init__(self) -> None:
self.api_client = LightningClient()
2 changes: 1 addition & 1 deletion src/lightning_app/components/multi_node/base.py
@@ -17,7 +17,7 @@ def __init__(
*work_args: Any,
**work_kwargs: Any,
) -> None:
"""This component enables performing distributed multi-node multi-device training.
"""This component enables performing distributed multi-node multi- device training.
Member:
I do not think that these kinds of changes are correct...

Contributor (author):
Yes, indeed. I used pre-commit to run the auto-formatting, but it looks like something is wrong with my environment, since this is not what I expected. I'll check it.
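For illustration, the change being questioned reduces to a minimal before/after pair taken verbatim from this file's summary line (the function names below are placeholders added for this example only, and the wrap settings are the ones configured in .pre-commit-config.yaml above):

# Function names are placeholders for this illustration only.
def summary_before():
    """This component enables performing distributed multi-node multi-device training."""

def summary_after():
    # As re-wrapped in this PR with the hook pinned to v1.5.0; note the
    # space inserted after the hyphen in "multi-device".
    """This component enables performing distributed multi-node multi- device training."""

Re-running the hook locally against the pinned v1.5.0 should show whether this behaviour comes from the tool itself or from the author's environment, as discussed above.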


Example::

2 changes: 1 addition & 1 deletion src/lightning_app/components/training.py
@@ -122,7 +122,7 @@ def __init__(
script_runner: Type[TracerPythonScript] = PyTorchLightningScriptRunner,
**script_runner_kwargs,
):
"""This component enables performing distributed multi-node multi-device training.
"""This component enables performing distributed multi-node multi- device training.

Example::

2 changes: 1 addition & 1 deletion src/lightning_app/core/work.py
@@ -65,7 +65,7 @@ def __init__(
run_once: Optional[bool] = None, # TODO: Remove run_once
start_with_flow: bool = True,
):
"""LightningWork, or Work in short, is a building block for long-running jobs.
"""LightningWork, or Work in short, is a building block for long- running jobs.

The LightningApp runs its :class:`~lightning_app.core.flow.LightningFlow` component
within an infinite loop and track the ``LightningWork`` status update.
3 changes: 2 additions & 1 deletion src/lightning_app/runners/cloud.py
@@ -205,7 +205,8 @@ def dispatch(
open_ui: bool = True,
**kwargs: Any,
) -> None:
"""Method to dispatch and run the :class:`~lightning_app.core.app.LightningApp` in the cloud."""
"""Method to dispatch and run the
:class:`~lightning_app.core.app.LightningApp` in the cloud."""
# not user facing error ideally - this should never happen in normal user workflow
if not self.entrypoint_file:
raise ValueError(
5 changes: 3 additions & 2 deletions src/lightning_app/runners/multiprocess.py
@@ -23,8 +23,9 @@
class MultiProcessRuntime(Runtime):
"""Runtime to launch the LightningApp into multiple processes.

The MultiProcessRuntime will generate 1 process for each :class:`~lightning_app.core.work.LightningWork` and attach
queues to enable communication between the different processes.
The MultiProcessRuntime will generate 1 process for each
:class:`~lightning_app.core.work.LightningWork` and attach queues to
enable communication between the different processes.
"""

backend: Union[str, Backend] = "multiprocessing"
4 changes: 2 additions & 2 deletions src/lightning_app/source_code/tar.py
@@ -59,8 +59,8 @@ def _get_split_size(
"""Calculate the split size we should use to split the multipart upload of an object to a bucket. We are
limited to 1000 max parts as the way we are using ListMultipartUploads. More info
https://github.com/gridai/grid/pull/5267
https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html#mpu-process
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html#mpu- process
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUp loads.html
https://github.com/psf/requests/issues/2717#issuecomment-724725392 Python or requests has a limit of 2**31
bytes for a single file upload.
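As an aside, the sizing rule this docstring describes amounts to a small calculation; a rough sketch follows (illustrative only, not the code under change; the helper name and the 10 MiB floor are assumptions):

import math

MAX_PARTS = 1000            # ListMultipartUploads usage caps us at 1000 parts
MAX_PART_BYTES = 2**31 - 1  # per-request limit for Python/requests noted above

def sketch_split_size(total_bytes: int, min_part_bytes: int = 10 * 1024**2) -> int:
    # Smallest part size that stays under MAX_PARTS parts, with an assumed 10 MiB floor.
    size = max(min_part_bytes, math.ceil(total_bytes / MAX_PARTS))
    if size > MAX_PART_BYTES:
        raise ValueError("cannot satisfy both limits for an object this large")
    return size

print(sketch_split_size(50 * 1024**3))  # a 50 GiB archive -> parts of roughly 54 MB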

3 changes: 2 additions & 1 deletion src/lightning_app/testing/helpers.py
@@ -42,7 +42,8 @@ def _run_script(filepath):


class _RunIf:
"""RunIf wrapper for simple marking specific cases, fully compatible with pytest.mark::
"""RunIf wrapper for simple marking specific cases, fully compatible with
pytest.mark::

@RunIf(...)
@pytest.mark.parametrize("arg1", [1, 2.0])
5 changes: 3 additions & 2 deletions src/lightning_app/utilities/component.py
@@ -117,8 +117,9 @@ def _is_frontend_context() -> bool:
def _context(ctx: str) -> Generator[None, None, None]:
"""Set the global component context for the block below this context manager.

The context is used to determine whether the current process is running for a LightningFlow or for a LightningWork.
See also :func:`_get_context`, :func:`_set_context`. For internal use only.
The context is used to determine whether the current process is
running for a LightningFlow or for a LightningWork. See also
:func:`_get_context`, :func:`_set_context`. For internal use only.
"""
prev = _get_context()
_set_context(ctx)
4 changes: 2 additions & 2 deletions src/lightning_app/utilities/load_app.py
@@ -140,8 +140,8 @@ def _patch_sys_argv():
"""This function modifies the ``sys.argv`` by extracting the arguments after ``--app_args`` and removed
everything else before executing the user app script.

The command: ``lightning run app app.py --without-server --app_args --use_gpu --env ...`` will be converted into
``app.py --use_gpu``
The command: ``lightning run app app.py --without-server --app_args
--use_gpu --env ...`` will be converted into ``app.py --use_gpu``
"""
from lightning_app.cli.lightning_cli import run_app

2 changes: 1 addition & 1 deletion src/lightning_app/utilities/tree.py
@@ -8,7 +8,7 @@


def breadth_first(root: "Component", types: Type["ComponentTuple"] = None):
"""Returns a generator that walks through the tree of components breadth-first.
"""Returns a generator that walks through the tree of components breadth- first.

Arguments:
root: The root component of the tree
15 changes: 12 additions & 3 deletions src/lightning_fabric/accelerators/cuda.py
@@ -184,7 +184,10 @@ def is_cuda_available() -> bool:

# TODO: Remove once minimum supported PyTorch version is 1.13
def _parse_visible_devices() -> Set[int]:
"""Implementation copied from upstream: https://github.com/pytorch/pytorch/pull/84879."""
"""Implementation copied from upstream:

https://github.com/pytorch/pytorch/pull/84879.
"""
var = os.getenv("CUDA_VISIBLE_DEVICES")
if var is None:
return {x for x in range(64)}
@@ -210,7 +213,10 @@ def _strtoul(s: str) -> int:

# TODO: Remove once minimum supported PyTorch version is 1.13
def _raw_device_count_nvml() -> int:
"""Implementation copied from upstream: https://github.com/pytorch/pytorch/pull/84879."""
"""Implementation copied from upstream:

https://github.com/pytorch/pytorch/pull/84879.
"""
from ctypes import c_int, CDLL

nvml_h = CDLL("libnvidia-ml.so.1")
@@ -229,7 +235,10 @@ def _raw_device_count_nvml() -> int:

# TODO: Remove once minimum supported PyTorch version is 1.13
def _device_count_nvml() -> int:
"""Implementation copied from upstream: https://github.com/pytorch/pytorch/pull/84879."""
"""Implementation copied from upstream:

https://github.com/pytorch/pytorch/pull/84879.
"""
try:
raw_cnt = _raw_device_count_nvml()
if raw_cnt <= 0:
5 changes: 3 additions & 2 deletions src/lightning_fabric/plugins/environments/kubeflow.py
@@ -23,8 +23,9 @@
class KubeflowEnvironment(ClusterEnvironment):
"""Environment for distributed training using the `PyTorchJob`_ operator from `Kubeflow`_

.. _PyTorchJob: https://www.kubeflow.org/docs/components/training/pytorch/
.. _Kubeflow: https://www.kubeflow.org
.. _PyTorchJob:
https://www.kubeflow.org/docs/components/training/pytorch/ ..
_Kubeflow: https://www.kubeflow.org
"""

@property
2 changes: 1 addition & 1 deletion src/lightning_fabric/plugins/environments/slurm.py
@@ -175,7 +175,7 @@ def _validate_srun_variables() -> None:
message.

Right now, we only check for the most common user errors. See `the srun docs
<https://slurm.schedmd.com/srun.html>`_ for a complete list of supported srun variables.
<https://slurm.schedmd.com/srun.html>`_ foracomplete list of supported srun variables.
"""
ntasks = int(os.environ.get("SLURM_NTASKS", "1"))
if ntasks > 1 and "SLURM_NTASKS_PER_NODE" not in os.environ:
5 changes: 4 additions & 1 deletion src/lightning_fabric/plugins/environments/torchelastic.py
@@ -24,7 +24,10 @@


class TorchElasticEnvironment(ClusterEnvironment):
"""Environment for fault-tolerant and elastic training with `torchelastic <https://pytorch.org/elastic/>`_"""
"""Environment for fault-tolerant and elastic training with `torchelastic.

<https://pytorch.org/elastic/>`_
"""

@property
def creates_processes_externally(self) -> bool:
9 changes: 6 additions & 3 deletions src/lightning_fabric/plugins/environments/xla.py
@@ -22,10 +22,13 @@


class XLAEnvironment(ClusterEnvironment):
"""Cluster environment for training on a TPU Pod with the `PyTorch/XLA <https://pytorch.org/xla>`_ library.
"""Cluster environment for training on a TPU Pod with the `PyTorch/XLA.

A list of environment variables set by XLA can be found
`here <https://github.com/pytorch/xla/blob/master/torch_xla/core/xla_env_vars.py>`_.
<https://pytorch.org/xla>`_ library.

A list of environment variables set by XLA can be found `here <https
://github.com/pytorch/xla/blob/master/torch_xla/core/xla_env_vars.py
>`_.
"""

def __init__(self, *args: Any, **kwargs: Any) -> None:
4 changes: 3 additions & 1 deletion src/lightning_fabric/strategies/ddp.py
@@ -118,7 +118,9 @@ def setup_environment(self) -> None:
super().setup_environment()

def setup_module(self, module: Module) -> DistributedDataParallel:
"""Wraps the model into a :class:`~torch.nn.parallel.distributed.DistributedDataParallel` module."""
"""Wraps the model into a
:class:`~torch.nn.parallel.distributed.DistributedDataParallel`
module."""
return DistributedDataParallel(module=module, device_ids=self._determine_ddp_device_ids(), **self._ddp_kwargs)

def module_to_device(self, module: Module) -> None:
4 changes: 2 additions & 2 deletions src/lightning_fabric/strategies/deepspeed.py
@@ -93,8 +93,8 @@ def __init__(
process_group_backend: Optional[str] = None,
) -> None:
"""Provides capabilities to run training using the DeepSpeed library, with training optimizations for large
billion parameter models. `For more information: https://pytorch-
lightning.readthedocs.io/en/stable/advanced/model_parallel.html#deepspeed`.
billion parameter models. `For more information: https://pytorch- lightning.readthedocs.io/en/stable/a
dvanced/model_parallel.html#deepspeed`.

.. warning:: ``DeepSpeedStrategy`` is in beta and subject to change.

3 changes: 2 additions & 1 deletion src/lightning_fabric/strategies/dp.py
@@ -51,7 +51,8 @@ def root_device(self) -> torch.device:
return self.parallel_devices[0]

def setup_module(self, module: Module) -> DataParallel:
"""Wraps the given model into a :class:`~torch.nn.parallel.DataParallel` module."""
"""Wraps the given model into a
:class:`~torch.nn.parallel.DataParallel` module."""
return DataParallel(module=module, device_ids=self.parallel_devices)

def module_to_device(self, module: Module) -> None:
4 changes: 2 additions & 2 deletions src/lightning_fabric/strategies/fairscale.py
@@ -129,8 +129,8 @@ def _reinit_optimizers_with_oss(optimizers: List[Optimizer], precision: Precisio
class _FairscaleBackwardSyncControl(_BackwardSyncControl):
@contextmanager
def no_backward_sync(self, module: Module) -> Generator:
"""Blocks gradient synchronization inside the :class:`~fairscale.nn.data_parallel.ShardedDataParallel`
wrapper."""
"""Blocks gradient synchronization inside the
:class:`~fairscale.nn.data_parallel.ShardedDataParallel` wrapper."""
if not isinstance(module, ShardedDataParallel):
raise TypeError(
"Blocking backward sync is only possible if the module passed to"
8 changes: 5 additions & 3 deletions src/lightning_fabric/strategies/fsdp.py
@@ -210,9 +210,11 @@ def setup_module(self, module: Module) -> "FullyShardedDataParallel":
def setup_optimizer(self, optimizer: Optimizer) -> Optimizer:
"""Set up an optimizer for a model wrapped with FSDP.

This setup method doesn't modify the optimizer or wrap the optimizer. The only thing it currently does is verify
that the optimizer was created after the model was wrapped with :meth:`setup_module` with a reference to the
flattened parameters.
This setup method doesn't modify the optimizer or wrap the
optimizer. The only thing it currently does is verify that the
optimizer was created after the model was wrapped with
:meth:`setup_module` with a reference to the flattened
parameters.
"""
from torch.distributed.fsdp import FlatParameter

3 changes: 1 addition & 2 deletions src/lightning_fabric/strategies/launchers/base.py
@@ -16,8 +16,7 @@


class _Launcher(ABC):
r"""
Abstract base class for all Launchers.
r"""Abstract base class for all Launchers.

Launchers are responsible for the creation and instrumentation of new processes so that the
:class:`~lightning_fabric.strategies.strategy.Strategy` can set up communication between all them.
4 changes: 2 additions & 2 deletions src/lightning_fabric/strategies/launchers/xla.py
@@ -27,8 +27,8 @@


class _XLALauncher(_Launcher):
r"""Launches processes that run a given function in parallel on XLA supported hardware, and joins them all at the
end.
r"""Launches processes that run a given function in parallel on XLA supported hardware, and joins them all at
the end.

The main process in which this launcher is invoked creates N so-called worker processes (using the
`torch_xla` :func:`xmp.spawn`) that run the given function.
21 changes: 13 additions & 8 deletions src/lightning_fabric/strategies/strategy.py
@@ -117,8 +117,9 @@ def setup_module_and_optimizers(
) -> Tuple[Module, List[Optimizer]]:
"""Set up a model and multiple optimizers together.

The returned objects are expected to be in the same order they were passed in. The default implementation will
call :meth:`setup_module` and :meth:`setup_optimizer` on the inputs.
The returned objects are expected to be in the same order they
were passed in. The default implementation will call
:meth:`setup_module` and :meth:`setup_optimizer` on the inputs.
"""
module = self.setup_module(module)
optimizers = [self.setup_optimizer(optimizer) for optimizer in optimizers]
@@ -297,10 +298,12 @@ def _err_msg_joint_setup_required(self) -> str:

class _BackwardSyncControl(ABC):
"""Interface for any :class:`Strategy` that wants to offer a functionality to enable or disable gradient
synchronization during/after back-propagation.
synchronization during/after back- propagation.

The most common use-case is gradient accumulation. If a :class:`Strategy` implements this interface, the user can
implement their gradient accumulation loop very efficiently by disabling redundant gradient synchronization.
The most common use-case is gradient accumulation. If a
:class:`Strategy` implements this interface, the user can implement
their gradient accumulation loop very efficiently by disabling
redundant gradient synchronization.
"""

@contextmanager
@@ -319,9 +322,11 @@ class _Sharded(ABC):
@abstractmethod
@contextmanager
def module_sharded_context(self) -> Generator:
"""A context manager that goes over the instantiation of an :class:`torch.nn.Module` and handles sharding
of parameters on creation.
"""A context manager that goes over the instantiation of an
:class:`torch.nn.Module` and handles sharding of parameters on
creation.

By sharding layers directly on instantiation, one can reduce peak memory usage and initialization time.
By sharding layers directly on instantiation, one can reduce
peak memory usage and initialization time.
"""
yield
4 changes: 2 additions & 2 deletions src/lightning_fabric/strategies/xla.py
@@ -39,8 +39,8 @@


class XLAStrategy(ParallelStrategy):
"""Strategy for training multiple TPU devices using the :func:`torch_xla.distributed.xla_multiprocessing.spawn`
method."""
"""Strategy for training multiple TPU devices using the
:func:`torch_xla.distributed.xla_multiprocessing.spawn` method."""

def __init__(
self,
4 changes: 3 additions & 1 deletion src/lightning_fabric/utilities/logger.py
@@ -40,7 +40,9 @@ def _convert_params(params: Optional[Union[Dict[str, Any], Namespace]]) -> Dict[


def _sanitize_callable_params(params: Dict[str, Any]) -> Dict[str, Any]:
"""Sanitize callable params dict, e.g. ``{'a': <function_**** at 0x****>} -> {'a': 'function_****'}``.
"""Sanitize callable params dict, e.g. ``{'a': <function_**** at 0x****>}

-> {'a': 'function_****'}``.

Args:
params: Dictionary containing the hyperparameters