Merge pull request #1864 from djhoese/doc-save-datasets-wishlist-only
Update Scene.save_datasets to clarify what will be saved
djhoese committed Oct 25, 2021
2 parents ab7f3b1 + da7d133 commit d9ef2d5
Showing 1 changed file with 10 additions and 2 deletions.
12 changes: 10 additions & 2 deletions satpy/scene.py
@@ -1040,7 +1040,13 @@ def save_dataset(self, dataset_id, filename=None, writer=None,
 
     def save_datasets(self, writer=None, filename=None, datasets=None, compute=True,
                       **kwargs):
-        """Save all the datasets present in a scene to disk using ``writer``.
+        """Save requested datasets present in a scene to disk using ``writer``.
+
+        Note that dependency datasets (those loaded solely to create another
+        dataset and not requested explicitly) that may be in this Scene will
+        not be saved by default. The default datasets are those explicitly
+        requested through ``.load`` and that currently exist in the Scene.
+        Specify dependency datasets using the ``datasets`` keyword argument.
 
         Args:
             writer (str): Name of writer to use when writing data to disk.
@@ -1051,7 +1057,9 @@ def save_datasets(self, writer=None, filename=None, datasets=None, compute=True,
                 dataset to. It may include string formatting
                 patterns that will be filled in by dataset
                 attributes.
-            datasets (iterable): Limit written products to these datasets
+            datasets (iterable): Limit written products to these datasets.
+                Elements can be a string name, a wavelength as a number, a
+                DataID, or a DataQuery object.
             compute (bool): If `True` (default), compute all of the saves to
                 disk. If `False` then the return value is either a
                 :doc:`dask:delayed` object or two lists to be passed to
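The selection rule the clarified docstring describes (only explicitly requested datasets are saved by default; dependency datasets must be named via the ``datasets`` argument) can be sketched with a toy model. This is a hypothetical illustration of the rule, not Satpy's implementation; `datasets_to_save`, `requested`, and `dependencies` are made-up names for this sketch:

```python
# Toy model (not Satpy code) of the save_datasets selection rule:
# by default only explicitly requested datasets are written; dependency
# datasets must be named through the ``datasets`` argument.

def datasets_to_save(requested, dependencies, datasets=None):
    """Return the sorted dataset names that would be written to disk."""
    if datasets is None:
        return sorted(requested)                    # default: requested only
    available = set(requested) | set(dependencies)  # everything in the Scene
    return sorted(set(datasets) & available)

# A Scene where 'true_color' was loaded, pulling in three bands as dependencies:
requested = ["true_color"]
dependencies = ["C01", "C02", "C03"]

print(datasets_to_save(requested, dependencies))
# ['true_color']
print(datasets_to_save(requested, dependencies, datasets=["true_color", "C01"]))
# ['C01', 'true_color']
```

With a real Satpy ``Scene`` the analogous call would be along the lines of ``scn.save_datasets(datasets=["true_color", "C01"])``, where elements may also be wavelengths, DataID, or DataQuery objects as the updated docstring notes.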
