What happened?

I'm just getting started writing an MPI accessor for xarray, and I want to open netCDF files using h5netcdf and an MPI-enabled h5py build. I can open a netCDF file with h5netcdf directly with no problem; on 256 cores split across 3 nodes I get correct output (a sketch of that direct open is below). But when I pass the communicator through to xarray's driver_kwds for h5netcdf_.py's open_dataset, it fails because the communicator is not hashable.
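For reference, a minimal sketch of what the direct h5netcdf open looks like. The original script is not included in the report, so the exact call below is an assumption; it relies on h5netcdf forwarding extra keyword arguments (driver, comm) to h5py.File.

# Hypothetical sketch, not the original script: open mydata.nc in parallel
# with h5netcdf, which forwards driver/comm straight through to h5py.File.
import h5netcdf
from mpi4py import MPI

world = MPI.COMM_WORLD

with h5netcdf.File("mydata.nc", "r", driver="mpio", comm=world) as f:
    # every rank reads the (tiny) variable and reports its rank
    print(f"{world.rank=} {f.variables['variable'][:]}")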
Running the read script (read_xarray.py; see the Minimal Complete Verifiable Example below) fails:

srun -n 256 python read_xarray.py
Traceback (most recent call last):
  File "read_xarray.py", line 7, in <module>
    ds = xr.open_dataset('mydata.nc', engine='h5netcdf', format='NETCDF4', driver='mpio', driver_kwds={"comm": world})
  File "site-packages/xarray/backends/api.py", line 571, in open_dataset
    backend_ds = backend.open_dataset(
  File "site-packages/xarray/backends/h5netcdf_.py", line 405, in open_dataset
    store = H5NetCDFStore.open(
  File "site-packages/xarray/backends/h5netcdf_.py", line 184, in open
    manager = CachingFileManager(h5netcdf.File, filename, mode=mode, kwargs=kwargs)
  File "site-packages/xarray/backends/file_manager.py", line 148, in __init__
    self._key = self._make_key()
  File "site-packages/xarray/backends/file_manager.py", line 167, in _make_key
    return _HashedSequence(value)
  File "site-packages/xarray/backends/file_manager.py", line 333, in __init__
    self.hashvalue = hash(tuple_value)
TypeError: unhashable type: 'mpi4py.MPI.Intracomm'
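The root cause is visible in the last frames: CachingFileManager builds a cache key by hashing the open keyword arguments, and an mpi4py communicator does not support hashing. A tiny standalone illustration (not part of the original report):

# Hypothetical illustration of the failure mode, outside xarray:
# hashing a tuple that contains an MPI communicator raises the same TypeError.
from mpi4py import MPI

try:
    hash((("comm", MPI.COMM_WORLD),))
except TypeError as err:
    print(err)  # unhashable type: 'mpi4py.MPI.Intracomm'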
Quick script to create the netCDF file using h5netcdf (run on a single core):
import h5netcdf
import numpy as np

with h5netcdf.File("mydata.nc", "w") as f:
    # set dimensions with a dictionary
    f.dimensions = {"x": 5}
    # and update them with a dict-like interface
    # f.dimensions['x'] = 5
    # f.dimensions.update({'x': 5})
    v = f.create_variable("variable", ("x",), float)
    v[:] = np.ones(5)
What did you expect to happen?
The file opens on cores across all nodes.
Minimal Complete Verifiable Example

read_xarray.py (run after creating mydata.nc with the script above):

import xarray as xr
from mpi4py import MPI

world = MPI.COMM_WORLD

# Open the file using xarray
ds = xr.open_dataset('mydata.nc', engine='h5netcdf', format='NETCDF4', driver='mpio', driver_kwds={"comm": world})
print(f"{world.rank=} {ds['variable'].values}")
MVCE confirmation
Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
Complete example — the example is self-contained, including all data and the text of any traceback.
Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
New issue — a search of GitHub Issues suggests this is not a duplicate.
Recent environment — the issue occurs with the latest version of xarray and its dependencies.
Relevant log output

See the full traceback under "What happened?" above.
Anything else we need to know?
I've tried dask-mpi, but I want to be able to leverage fast MPI communication on the backend by chunking out-of-memory data across nodes. I also want to be able to do out-of-memory writes with the mpio driver on the write side through h5netcdf (a sketch of what I mean is below). I've successfully done this with large-scale rasters, but it isn't an xarray accessor; getting this into xarray nomenclature as much as possible would be awesome.
I'm hoping these efforts will help alleviate xarray's large-scale memory/time issues with methods like resample (along time, for example) and other spatio-temporal operations.
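For context, this is roughly the kind of parallel write referred to above. It is a hypothetical sketch rather than the code used for the rasters: it assumes an MPI-enabled h5py/HDF5 stack and has each rank write its own slab of a shared variable through h5netcdf's mpio driver.

# Hypothetical sketch of a per-rank slab write via h5netcdf + mpio (not the author's code).
import h5netcdf
import numpy as np
from mpi4py import MPI

world = MPI.COMM_WORLD
chunk = 5  # number of values written by each rank

with h5netcdf.File("parallel_out.nc", "w", driver="mpio", comm=world) as f:
    # metadata operations (dimensions, variable creation) are executed
    # collectively: every rank makes the same calls
    f.dimensions = {"x": world.size * chunk}
    v = f.create_variable("data", ("x",), float)
    # each rank then writes only its own contiguous slab
    start = world.rank * chunk
    v[start:start + chunk] = np.full(chunk, world.rank, dtype=float)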
Environment
INSTALLED VERSIONS
commit: None
python: 3.10.10 (main, Apr 14 2023, 19:33:04) [GCC 10.3.1 20210422 (Red Hat 10.3.1-1)]
python-bits: 64
OS: Linux
OS-release: 4.18.0-425.3.1.el8.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: None
xarray: 2025.4.0
pandas: 2.2.3
numpy: 2.2.5
scipy: 1.15.3
netCDF4: None
pydap: None
h5netcdf: 1.6.1
h5py: 3.13.0
zarr: None
cftime: None
nc_time_axis: None
iris: None
bottleneck: None
dask: 2025.5.0
distributed: 2025.5.0
matplotlib: 3.9.0
cartopy: None
seaborn: None
numbagg: None
fsspec: 2023.4.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 65.5.0
pip: 25.1.1
conda: None
pytest: 7.3.1
mypy: None
IPython: None
sphinx: 8.1.3