Enable running sphinx-build on Windows (#6237)
stanwest committed Mar 1, 2022
1 parent 555a70e commit 0f91f05
Showing 8 changed files with 53 additions and 32 deletions.
4 changes: 2 additions & 2 deletions .gitignore
@@ -5,8 +5,9 @@ __pycache__
 .hypothesis/

 # temp files from docs build
+doc/*.nc
 doc/auto_gallery
-doc/example.nc
+doc/rasm.zarr
 doc/savefig

 # C extensions
@@ -72,4 +73,3 @@ xarray/tests/data/*.grib.*.idx
 Icon*

 .ipynb_checkpoints
-doc/rasm.zarr
4 changes: 2 additions & 2 deletions doc/conf.py
@@ -28,9 +28,9 @@
print("python exec:", sys.executable)
print("sys.path:", sys.path)

if "conda" in sys.executable:
if "CONDA_DEFAULT_ENV" in os.environ or "conda" in sys.executable:
print("conda environment:")
subprocess.run(["conda", "list"])
subprocess.run([os.environ.get("CONDA_EXE", "conda"), "list"])
else:
print("pip environment:")
subprocess.run([sys.executable, "-m", "pip", "list"])
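
Note on this hunk: CONDA_DEFAULT_ENV and CONDA_EXE are both set by "conda activate", while the interpreter path alone can lack the substring "conda" (notably on Windows, where the docs environment's python.exe need not live under a conda-named directory). The sketch below restates the hunk's detection logic as a standalone script; the fallback to a bare "conda" command mirrors the diff.

    # Standalone sketch of the environment detection above.
    import os
    import subprocess
    import sys

    if "CONDA_DEFAULT_ENV" in os.environ or "conda" in sys.executable:
        print("conda environment:")
        # CONDA_EXE holds the full path to conda, which may not be on PATH.
        subprocess.run([os.environ.get("CONDA_EXE", "conda"), "list"])
    else:
        print("pip environment:")
        subprocess.run([sys.executable, "-m", "pip", "list"])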
4 changes: 3 additions & 1 deletion doc/getting-started-guide/quick-overview.rst
@@ -215,13 +215,15 @@ You can directly read and write xarray objects to disk using :py:meth:`~xarray.D
 .. ipython:: python

     ds.to_netcdf("example.nc")
-    xr.open_dataset("example.nc")
+    reopened = xr.open_dataset("example.nc")
+    reopened

 .. ipython:: python
     :suppress:

     import os

+    reopened.close()
     os.remove("example.nc")
7 changes: 7 additions & 0 deletions doc/internals/zarr-encoding-spec.rst
@@ -63,3 +63,10 @@ re-open it directly with Zarr:
     print(os.listdir("rasm.zarr"))
     print(zgroup.tree())
     dict(zgroup["Tair"].attrs)
+
+.. ipython:: python
+    :suppress:
+
+    import shutil
+
+    shutil.rmtree("rasm.zarr")
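
Cleanup here uses shutil.rmtree rather than os.remove because a Zarr store on disk is a directory tree of metadata and chunk files, not a single file. A sketch, assuming the zarr package is installed and with illustrative data:

    # A Zarr store is a directory, so removing it requires shutil.rmtree().
    import shutil

    import xarray as xr

    ds = xr.Dataset({"Tair": ("time", [273.1, 274.2])})  # illustrative data
    ds.to_zarr("rasm.zarr", mode="w")  # writes a directory of metadata and chunks
    shutil.rmtree("rasm.zarr")         # os.remove() would fail on a directory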
26 changes: 13 additions & 13 deletions doc/user-guide/dask.rst
@@ -55,6 +55,8 @@ argument to :py:func:`~xarray.open_dataset` or using the
 .. ipython:: python
     :suppress:

+    import os
+
     import numpy as np
     import pandas as pd
     import xarray as xr
@@ -129,6 +131,11 @@ will return a ``dask.delayed`` object that can be computed later.
     with ProgressBar():
         results = delayed_obj.compute()

+.. ipython:: python
+    :suppress:
+
+    os.remove("manipulated-example-data.nc")  # Was not opened.

 .. note::

     When using Dask's distributed scheduler to write NETCDF4 files,
@@ -147,13 +154,6 @@ A dataset can also be converted to a Dask DataFrame using :py:meth:`~xarray.Data
 Dask DataFrames do not support multi-indexes so the coordinate variables from the dataset are included as columns in the Dask DataFrame.

-.. ipython:: python
-    :suppress:
-
-    import os
-
-    os.remove("example-data.nc")
-    os.remove("manipulated-example-data.nc")

 Using Dask with xarray
 ----------------------
@@ -210,7 +210,7 @@ Dask arrays using the :py:meth:`~xarray.Dataset.persist` method:

 .. ipython:: python

-    ds = ds.persist()
+    persisted = ds.persist()

 :py:meth:`~xarray.Dataset.persist` is particularly useful when using a
 distributed cluster because the data will be loaded into distributed memory
@@ -232,11 +232,6 @@ chunk size depends both on your data and on the operations you want to perform.
 With xarray, both converting data to a Dask arrays and converting the chunk
 sizes of Dask arrays is done with the :py:meth:`~xarray.Dataset.chunk` method:

-.. ipython:: python
-    :suppress:
-
-    ds = ds.chunk({"time": 10})

 .. ipython:: python

     rechunked = ds.chunk({"latitude": 100, "longitude": 100})
@@ -508,6 +503,11 @@ Notice that the 0-shaped sizes were not printed to screen. Since ``template`` ha
     expected = ds + 10 + 10
     mapped.identical(expected)

+.. ipython:: python
+    :suppress:
+
+    ds.close()  # Closes "example-data.nc".
+    os.remove("example-data.nc")

 .. tip::

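Two details in the dask.rst changes are worth spelling out. "manipulated-example-data.nc" is written by the delayed to_netcdf example but never opened by the docs session, hence the "Was not opened" comment and the bare os.remove. And rewriting "ds = ds.persist()" as "persisted = ds.persist()" keeps the name ds bound to the dataset that holds the handle to "example-data.nc", so the cleanup at the end of the page can close it before deleting the file. A sketch of that lifecycle, assuming "example-data.nc" exists and dask is installed:

    # Keep "ds" bound to the file-backed dataset so its handle can be
    # released later; rebinding it to the persisted result would leave the
    # open file with no name through which to close it.
    import os

    import xarray as xr

    ds = xr.open_dataset("example-data.nc", chunks={"time": 10})
    persisted = ds.persist()  # new dataset, with chunks loaded into memory

    # ... compute with `persisted` ...

    ds.close()                    # closes "example-data.nc"
    os.remove("example-data.nc")  # safe on Windows only after the close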
33 changes: 20 additions & 13 deletions doc/user-guide/io.rst
@@ -11,6 +11,8 @@ format (recommended).
 .. ipython:: python
     :suppress:

+    import os
+
     import numpy as np
     import pandas as pd
     import xarray as xr
@@ -84,6 +86,13 @@ We can load netCDF files to create a new Dataset using
     ds_disk = xr.open_dataset("saved_on_disk.nc")
     ds_disk

+.. ipython:: python
+    :suppress:
+
+    # Close "saved_on_disk.nc", but retain the file until after closing or deleting other
+    # datasets that will refer to it.
+    ds_disk.close()

 Similarly, a DataArray can be saved to disk using the
 :py:meth:`DataArray.to_netcdf` method, and loaded
 from disk using the :py:func:`open_dataarray` function. As netCDF files
@@ -204,11 +213,6 @@ You can view this encoding information (among others) in the
 Note that all operations that manipulate variables other than indexing
 will remove encoding information.

-.. ipython:: python
-    :suppress:
-
-    ds_disk.close()

 .. _combining multiple files:

@@ -484,13 +488,13 @@ and currently raises a warning unless ``invalid_netcdf=True`` is set:
da.to_netcdf("complex.nc", engine="h5netcdf", invalid_netcdf=True)
# Reading it back
xr.open_dataarray("complex.nc", engine="h5netcdf")
reopened = xr.open_dataarray("complex.nc", engine="h5netcdf")
reopened
.. ipython:: python
:suppress:
import os
reopened.close()
os.remove("complex.nc")
.. warning::
@@ -989,16 +993,19 @@ To export just the dataset schema without the data itself, use the
     ds.to_dict(data=False)

+This can be useful for generating indices of dataset contents to expose to
+search indices or other automated data discovery tools.
+
 .. ipython:: python
     :suppress:

     import os

+    # We're now done with the dataset named `ds`. Although the `with` statement closed
+    # the dataset, displaying the unpickled pickle of `ds` re-opened "saved_on_disk.nc".
+    # However, `ds` (rather than the unpickled dataset) refers to the open file. Delete
+    # `ds` to close the file.
+    del ds
     os.remove("saved_on_disk.nc")
-
-This can be useful for generating indices of dataset contents to expose to
-search indices or other automated data discovery tools.

.. _io.rasterio:

Rasterio
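The long comment in the last io.rst hunk records a subtle lifetime issue: displaying the unpickled pickle of ds earlier on the page re-opened "saved_on_disk.nc", and ds is the object that still refers to the open file, so deleting the name releases the last reference and lets xarray close the file before os.remove. A condensed sketch of the same mechanics, assuming netCDF support is installed:

    # Unpickling a file-backed dataset re-opens its source file; on Windows,
    # os.remove() fails while any dataset still holds the file open.
    import os
    import pickle

    import xarray as xr

    ds = xr.open_dataset("saved_on_disk.nc")
    roundtripped = pickle.loads(pickle.dumps(ds))  # re-opens "saved_on_disk.nc"

    roundtripped.close()
    del ds                         # drop the last reference; the file closes
    os.remove("saved_on_disk.nc")  # now safe, even on Windows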
4 changes: 3 additions & 1 deletion doc/user-guide/weather-climate.rst
@@ -218,13 +218,15 @@ For data indexed by a :py:class:`~xarray.CFTimeIndex` xarray currently supports:
 .. ipython:: python

     da.to_netcdf("example-no-leap.nc")
-    xr.open_dataset("example-no-leap.nc")
+    reopened = xr.open_dataset("example-no-leap.nc")
+    reopened

 .. ipython:: python
     :suppress:

     import os

+    reopened.close()
     os.remove("example-no-leap.nc")

 - And resampling along the time dimension for data indexed by a :py:class:`~xarray.CFTimeIndex`:
3 changes: 3 additions & 0 deletions doc/whats-new.rst
@@ -57,6 +57,9 @@ Bug fixes
 Documentation
 ~~~~~~~~~~~~~

+- Delete files of datasets saved to disk while building the documentation and enable
+  building on Windows via ``sphinx-build`` (:pull:`6237`).
+  By `Stan West <https://github.com/stanwest>`_.


 Internal Changes
