DM-42116: Fix numpydoc doc strings and enable validation #395

Merged
merged 11 commits on Dec 13, 2023
15 changes: 15 additions & 0 deletions .github/workflows/docstyle.yaml
@@ -11,3 +11,18 @@ jobs:
uses: lsst/rubin_workflows/.github/workflows/docstyle.yaml@main
with:
args: "python/"
numpydoc:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3

- name: Set up Python
uses: actions/setup-python@v4

- name: Install numpydoc
run: |
python -m pip install --upgrade pip
python -m pip install numpydoc

- name: Validate docstrings
run: python -m numpydoc.hooks.validate_docstrings $(find python -name "*.py")
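As a point of reference, the following hypothetical function is a minimal sketch of a docstring that the validation step above should accept, given the check configuration added to pyproject.toml later in this diff:

def scale_values(values: list[float], factor: float = 1.0) -> list[float]:
    """Scale a sequence of values by a constant factor.

    Parameters
    ----------
    values : `list` [`float`]
        Values to scale.
    factor : `float`, optional
        Multiplicative factor applied to each value.

    Returns
    -------
    scaled : `list` [`float`]
        The scaled values, in input order.
    """
    return [value * factor for value in values]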
8 changes: 6 additions & 2 deletions .pre-commit-config.yaml
@@ -16,12 +16,16 @@ repos:
# https://pre-commit.com/#top_level-default_language_version
language_version: python3.11
- repo: https://github.com/pycqa/isort
rev: 5.12.0
rev: 5.13.1
hooks:
- id: isort
name: isort (python)
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.1.6
rev: v0.1.7
hooks:
- id: ruff
- repo: https://github.com/numpy/numpydoc
rev: "v1.6.0"
hooks:
- id: numpydoc-validation
2 changes: 1 addition & 1 deletion doc/changes/DM-34696.misc.rst
@@ -1 +1 @@
* Added workarounds for mypy errors in `lsst.pipe.base.Struct` and `lsst.pipe.base.PipelineTask`.
* Added workarounds for mypy errors in `lsst.pipe.base.Struct` and `lsst.pipe.base.PipelineTask`.
20 changes: 20 additions & 0 deletions pyproject.toml
@@ -177,3 +177,23 @@ convention = "numpy"
[tool.ruff.per-file-ignores]
# Do not need to add class docs to the example classes.
"doc/_static/pipe_base/PipelineTask_Examples/*.py" = ["D101"]

[tool.numpydoc_validation]
checks = [
"all", # All except the rules listed below.
"SA01", # See Also section.
"EX01", # Example section.
"SS06", # Summary can go into second line.
"GL01", # Summary text can start on same line as """
"GL08", # Do not require docstring.
"ES01", # No extended summary required.
"RT01", # Unfortunately our @property trigger this.
"RT02", # Does not want named return value. DM style says we do.
"SS05", # pydocstyle is better at finding infinitive verb.
]
exclude = [
'^__init__$',
"^test_.*", # Do not test docstrings in test code.
'^commands\.', # Click docstrings, not numpydoc
'\._[a-zA-Z_]+$', # Private methods.
]
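To illustrate a couple of the exemptions with a hypothetical class: DM style starts the summary on the same line as the opening quotes, hence disabling GL01; documenting a property's type in its summary line avoids the Returns section that RT01 would otherwise demand; and private methods are skipped entirely by the exclude patterns.

class Exposure:
    """Hypothetical image container used to illustrate the exemptions."""

    @property
    def width(self) -> int:
        """Width of the image in pixels (`int`)."""
        return 2048

    def _reset(self) -> None:
        """Reset internal state; excluded from validation as a private method."""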
15 changes: 12 additions & 3 deletions python/lsst/pipe/base/_datasetQueryConstraints.py
@@ -80,9 +80,18 @@ def fromExpression(cls, expression: str) -> "DatasetQueryConstraintVariant":
"""Select and return the correct Variant that corresponds to the input
expression.

Valid values are ``all`` for all inputs dataset types in pipeline,
``off`` to not consider dataset type existence as a constraint, single
or comma-separated list of dataset type names.
Parameters
----------
expression : `str`
Input expression. Valid values are ``all`` for all input dataset
types in the pipeline, ``off`` to not consider dataset type
existence as a constraint, or a single or comma-separated list of
dataset type names.

Returns
-------
variant : `DatasetQueryConstraintVariant`
Correct variant for this expression.
"""
if not isinstance(expression, str):
raise ValueError("Expression must be a string")
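A short usage sketch of the documented behaviour, assuming the class is re-exported at the lsst.pipe.base package level; the dataset type names are placeholders:

from lsst.pipe.base import DatasetQueryConstraintVariant

# Constrain the initial query on existence of all pipeline inputs.
constraint = DatasetQueryConstraintVariant.fromExpression("all")

# Do not use dataset existence as a constraint at all.
constraint = DatasetQueryConstraintVariant.fromExpression("off")

# Constrain on specific dataset types only.
constraint = DatasetQueryConstraintVariant.fromExpression("raw,flat")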
23 changes: 20 additions & 3 deletions python/lsst/pipe/base/_dataset_handle.py
@@ -51,9 +51,26 @@ def _default_dataId() -> DataCoordinate:
class InMemoryDatasetHandle:
"""An in-memory version of a `~lsst.daf.butler.DeferredDatasetHandle`.

If ``dataId`` is not specified, a default empty dataId will be constructed.
If ``kwargs`` are provided without specifying a ``dataId``, those
parameters will be converted into a dataId-like entity.
Parameters
----------
inMemoryDataset : `~typing.Any`
The dataset to be used by this handle.
storageClass : `~lsst.daf.butler.StorageClass` or `None`, optional
The storage class associated with the in-memory dataset. If `None`
and if a storage class is needed, an attempt will be made to work one
out from the underlying python type.
parameters : `dict` [`str`, `~typing.Any`]
Parameters to be used with `get`.
dataId : `~lsst.daf.butler.DataId` or `None`, optional
The dataId associated with this dataset. Only used for compatibility
with the Butler implementation. Can be used for logging messages
by calling code. If ``dataId`` is not specified, a default empty
dataId will be constructed.
copy : `bool`, optional
Whether to copy on `get` or not.
**kwargs : `~typing.Any`
If ``kwargs`` are provided without specifying a ``dataId``, those
parameters will be converted into a dataId-like entity.
"""

_empty = DataCoordinate.make_empty(DimensionUniverse())
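Based on the docstring above, intended usage looks roughly like this sketch (not taken from the PR; the dimension name is a placeholder):

from lsst.pipe.base import InMemoryDatasetHandle

# Wrap an arbitrary Python object; a storage class is inferred from
# the type only if one is needed.
handle = InMemoryDatasetHandle({"a": 1, "b": 2})
data = handle.get()  # mimics DeferredDatasetHandle.get()

# With no dataId given, keyword arguments are converted into a
# dataId-like entity, useful for logging.
handle = InMemoryDatasetHandle([1, 2, 3], visit=42)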
3 changes: 1 addition & 2 deletions python/lsst/pipe/base/_instrument.py
@@ -133,7 +133,6 @@ def register(self, registry: Registry, *, update: bool = False) -> None:
registry.syncDimensionData("instrument", ...)
registry.syncDimensionData("detector", ...)
self.registerFilters(registry)

"""
raise NotImplementedError()

@@ -227,7 +226,7 @@ def from_string(

See Also
--------
Instrument.fromName
Instrument.fromName : Constructing Instrument from a name.
"""
if "." not in name and registry is not None:
try:
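For context, from_string accepts either a fully qualified class name or, with a registry, a short instrument name; a sketch under the assumption of an existing data repository (the repository path and instrument names are placeholders):

from lsst.daf.butler import Butler
from lsst.pipe.base import Instrument

# Fully qualified class name: resolved by import, no registry needed.
instrument = Instrument.from_string("lsst.obs.subaru.HyperSuprimeCam")

# Short name: resolved through a butler registry, assuming the
# instrument has been registered in that repository.
butler = Butler("/path/to/repo")
instrument = Instrument.from_string("HSC", registry=butler.registry)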
9 changes: 5 additions & 4 deletions python/lsst/pipe/base/_quantumContext.py
@@ -221,7 +221,7 @@ def _get(self, ref: DeferredDatasetRef | DatasetRef | None) -> Any:
return self.__butler.get(ref)

def _put(self, value: Any, ref: DatasetRef) -> None:
"""Store data in butler"""
"""Store data in butler."""
self._checkMembership(ref, self.allOutputs)
self.__butler.put(value, ref)

@@ -234,11 +234,11 @@ def get(
| DeferredDatasetRef
| None,
) -> Any:
"""Fetch data from the butler
"""Fetch data from the butler.

Parameters
----------
dataset
dataset : see description
This argument may either be an `InputQuantizedConnection` which
describes all the inputs of a quantum, a list of
`~lsst.daf.butler.DatasetRef`, or a single
@@ -348,7 +348,8 @@ def put(
only a single object need be passed. The same restriction applies
if dataset is directly a `list` of `~lsst.daf.butler.DatasetRef`
or a single `~lsst.daf.butler.DatasetRef`.
dataset
dataset : `OutputQuantizedConnection` or `list`[`DatasetRef`] \
or `DatasetRef`
This argument may either be an `InputQuantizedConnection` which
describes all the inputs of a quantum, a list of
`lsst.daf.butler.DatasetRef`, or a single
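These get and put methods are typically exercised from a task's runQuantum, along the lines of this sketch (the task is hypothetical; passing the whole quantized-connection objects fetches and stores everything by connection name):

from lsst.pipe.base import PipelineTask


class ExampleTask(PipelineTask):
    """Hypothetical task showing the usual get/put round trip."""

    def runQuantum(self, butlerQC, inputRefs, outputRefs):
        # Returns a dict keyed by connection name when given the
        # full InputQuantizedConnection.
        inputs = butlerQC.get(inputRefs)
        outputs = self.run(**inputs)
        # Stores each Struct attribute matching an output connection.
        butlerQC.put(outputs, outputRefs)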
8 changes: 4 additions & 4 deletions python/lsst/pipe/base/_task_metadata.py
@@ -43,7 +43,7 @@


class PropertySetLike(Protocol):
"""Protocol that looks like a ``lsst.daf.base.PropertySet``
"""Protocol that looks like a ``lsst.daf.base.PropertySet``.

Enough of the API is specified to support conversion of a
``PropertySet`` to a `TaskMetadata`.
@@ -164,7 +164,7 @@ def add(self, name: str, value: Any) -> None:
----------
name : `str`
Name of the metadata property.
value
value : `~typing.Any`
Metadata property value.
"""
keys = self._getKeys(name)
@@ -265,7 +265,7 @@ def names(self, topLevelOnly: bool | None = None) -> set[str]:

Parameters
----------
topLevelOnly : `bool` or `None`, optional
topLevelOnly : `bool` or `None`, optional
This parameter is deprecated and will be removed in the future.
If given it can only be `False`. All names in the hierarchy are
always returned.
@@ -416,7 +416,7 @@ def get(self, key: str, default: Any = None) -> Any:
----------
key : `str`
The key to retrieve. Can be dot-separated hierarchical.
default
default : `~typing.Any`
The value to return if the key does not exist.

Returns
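A small sketch of the hierarchical add/get behaviour documented above (keys and values are illustrative):

from lsst.pipe.base import TaskMetadata

md = TaskMetadata()
md["job.host"] = "worker01"           # dot-separated keys nest
md.add("job.warnings", "low memory")  # add() appends to a key
md.add("job.warnings", "retrying")

print(md.get("job.host"))                    # -> "worker01"
print(md.get("job.missing", default="n/a"))  # default when key is absent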
5 changes: 5 additions & 0 deletions python/lsst/pipe/base/all_dimensions_quantum_graph_builder.py
@@ -508,6 +508,11 @@ def from_builder(
def log_failure(self, log: LsstLogAdapter) -> None:
"""Emit a series of CRITICAL-level log message that attempts to explain
why the initial data ID query returned no rows.

Parameters
----------
log : `logging.Logger`
The logger to use to emit log messages.
"""
log.critical("Initial data ID query returned no rows, so QuantumGraph will be empty.")
for message in self.common_data_ids.explain_no_results():
18 changes: 15 additions & 3 deletions python/lsst/pipe/base/cmdLineTask.py
@@ -24,20 +24,25 @@
__all__: list[str] = []

import contextlib
import logging
from collections.abc import Generator
from typing import TYPE_CHECKING

from deprecated.sphinx import deprecated

if TYPE_CHECKING:
import cProfile


@deprecated(
reason="Replaced by lsst.utils.timer.profile(). Will be removed after v26.0",
version="v25.0",
category=FutureWarning,
)
@contextlib.contextmanager
def profile(filename, log=None):
def profile(filename: str, log: logging.Logger | None = None) -> Generator[cProfile.Profile, None, None]:
"""Context manager for profiling with cProfile.


Parameters
----------
filename : `str`
@@ -46,6 +51,13 @@ def profile(filename, log=None):
log : `logging.Logger`, optional
Log object for logging the profile operations.

Yields
------
`cProfile.Profile`
Profiling object.

Notes
-----
If profiling is enabled, the context manager returns the cProfile.Profile
object (otherwise it returns None), which allows additional control over
profiling. You can obtain this using the "as" clause, e.g.:
@@ -64,7 +76,7 @@
"""
if not filename:
# Nothing to do
yield
yield None # type: ignore
return
from cProfile import Profile

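Usage of the (now deprecated) context manager follows its docstring; a minimal sketch, noting that calling it emits a FutureWarning and that a falsy filename disables profiling:

import logging

from lsst.pipe.base.cmdLineTask import profile

log = logging.getLogger("example")

# Profiling enabled: the Profile object is available via "as".
with profile("example.prof", log=log) as prof:
    total = sum(i * i for i in range(10_000))

# Profiling disabled: the body still runs, None is yielded.
with profile("", log=log):
    pass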
15 changes: 13 additions & 2 deletions python/lsst/pipe/base/config.py
@@ -103,7 +103,7 @@ def __set__(


class PipelineTaskConfigMeta(pexConfig.ConfigMeta):
"""Metaclass used in the creation of PipelineTaskConfig classes
"""Metaclass used in the creation of PipelineTaskConfig classes.

This metaclass ensures a `PipelineTaskConnections` class is specified in
the class construction parameters with a parameter name of
@@ -116,6 +116,17 @@ class PipelineTaskConfigMeta(pexConfig.ConfigMeta):
Finally the newly constructed config class (not an instance of it) is
assigned to the Config class under construction with the attribute name
``ConnectionsConfigClass``.

Parameters
----------
name : `str`
Name of config.
bases : `~collections.abc.Collection`
Base classes.
dct : `~collections.abc.Mapping`
Parameter dict.
**kwargs : `~typing.Any`
Additional parameters.
"""

def __new__(
@@ -188,7 +199,7 @@ def __init__(


class PipelineTaskConfig(pexConfig.Config, metaclass=PipelineTaskConfigMeta):
"""Configuration class for `PipelineTask`
"""Configuration class for `PipelineTask`.

This Configuration class functions in largely the same manner as any other
derived from `lsst.pex.config.Config`. The only difference is in how it is
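Putting the metaclass machinery to work looks roughly like this (hypothetical classes; the pipelineConnections keyword is consumed by PipelineTaskConfigMeta, which generates the connections config and attaches it as ConnectionsConfigClass):

import lsst.pex.config as pexConfig
from lsst.pipe.base import PipelineTaskConfig, PipelineTaskConnections


class ExampleConnections(PipelineTaskConnections, dimensions=()):
    """Hypothetical connections class declaring no connections."""


class ExampleConfig(PipelineTaskConfig, pipelineConnections=ExampleConnections):
    """Config whose ``connections`` field is generated by the metaclass."""

    threshold = pexConfig.Field(dtype=float, default=5.0, doc="Hypothetical cut.")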