
Cannot open netcdf file using engine h5netcdf with an MPI communicator #10328


Open

leonfoks opened this issue May 16, 2025 · 2 comments

@leonfoks

What happened?

I'm just getting started writing an MPI accessor for xarray, and I want to open NetCDF files using h5netcdf with an MPI-enabled h5py package. I can open a NetCDF file directly with h5netcdf without any problem:

import h5netcdf
from mpi4py import MPI

world = MPI.COMM_WORLD
with h5netcdf.File('mydata.nc', 'r', driver='mpio', comm=world) as f:
    print(f"{world.rank=} {f['variable']}")

But when I pass the communicator through xarray's driver_kwds to the h5netcdf backend's open_dataset, it fails because the communicator is not hashable:

# read_xarray.py
import xarray as xr
from mpi4py import MPI

world = MPI.COMM_WORLD

ds = xr.open_dataset('mydata.nc', engine='h5netcdf', format='NETCDF4', driver='mpio', driver_kwds={"comm": world})

srun -n 256 python read_xarray.py
Traceback (most recent call last):
  File "read_xarray.py", line 7, in <module>
    ds = xr.open_dataset('mydata.nc', engine='h5netcdf', format='NETCDF4', driver='mpio', driver_kwds={"comm":world})
  File "site-packages/xarray/backends/api.py", line 571, in open_dataset
    backend_ds = backend.open_dataset(
  File "site-packages/xarray/backends/h5netcdf_.py", line 405, in open_dataset
    store = H5NetCDFStore.open(
  File "site-packages/xarray/backends/h5netcdf_.py", line 184, in open
    manager = CachingFileManager(h5netcdf.File, filename, mode=mode, kwargs=kwargs)
  File "site-packages/xarray/backends/file_manager.py", line 148, in __init__
    self._key = self._make_key()
  File "site-packages/xarray/backends/file_manager.py", line 167, in _make_key
    return _HashedSequence(value)
  File "site-packages/xarray/backends/file_manager.py", line 333, in __init__
    self.hashvalue = hash(tuple_value)
TypeError: unhashable type: 'mpi4py.MPI.Intracomm'

A quick script to create the NetCDF file using h5netcdf, run on a single core:

import h5netcdf
import numpy as np

with h5netcdf.File("mydata.nc", "w") as f:
    # set dimensions with a dictionary
    f.dimensions = {"x": 5}
    # and update them with a dict-like interface
    # f.dimensions['x'] = 5
    # f.dimensions.update({'x': 5})

    v = f.create_variable("variable", ("x",), float)
    v[:] = np.ones(5)

What did you expect to happen?

The file opens on all MPI ranks across nodes, just as it does with h5netcdf directly.

Minimal Complete Verifiable Example

MVCE confirmation

  • Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • Complete example — the example is self-contained, including all data and the text of any traceback.
  • Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • New issue — a search of GitHub Issues suggests this is not a duplicate.
  • Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

Anything else we need to know?

I've tried dask-mpi, but I want to be able to leverage fast MPI communication on the backend by chunking out-of-memory data across nodes. I also want to be able to do out-of-memory writes using mpio on the write side with h5netcdf (a rough sketch of that write pattern is below). I've successfully done this with large-scale rasters, but that code is not an xarray accessor. Getting this working with xarray nomenclature as much as possible would be awesome.

I'm hoping these efforts will alleviate large-scale memory/time issues with methods like resample (e.g. along time) and other spatio-temporal operations.
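
For reference, a rough sketch of the mpio write pattern I have in mind, based on the h5netcdf calls shown above. The file name, sizes, and 1-D layout are just placeholders; with mpio, dimension and variable creation must run collectively on every rank, and each rank then writes only its own slab:

import h5netcdf
import numpy as np
from mpi4py import MPI

world = MPI.COMM_WORLD
n_per_rank = 5
n_total = n_per_rank * world.size

with h5netcdf.File("parallel_write.nc", "w", driver="mpio", comm=world) as f:
    # dimension and variable creation are collective: every rank runs these
    f.dimensions = {"x": n_total}
    v = f.create_variable("variable", ("x",), float)

    # each rank writes only its own contiguous slab
    start = world.rank * n_per_rank
    v[start:start + n_per_rank] = np.full(n_per_rank, world.rank, dtype=float)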

Environment

INSTALLED VERSIONS

commit: None
python: 3.10.10 (main, Apr 14 2023, 19:33:04) [GCC 10.3.1 20210422 (Red Hat 10.3.1-1)]
python-bits: 64
OS: Linux
OS-release: 4.18.0-425.3.1.el8.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: None

xarray: 2025.4.0
pandas: 2.2.3
numpy: 2.2.5
scipy: 1.15.3
netCDF4: None
pydap: None
h5netcdf: 1.6.1
h5py: 3.13.0
zarr: None
cftime: None
nc_time_axis: None
iris: None
bottleneck: None
dask: 2025.5.0
distributed: 2025.5.0
matplotlib: 3.9.0
cartopy: None
seaborn: None
numbagg: None
fsspec: 2023.4.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 65.5.0
pip: 25.1.1
conda: None
pytest: 7.3.1
mypy: None
IPython: None
sphinx: 8.1.3

@leonfoks added the bug and needs triage (Issue that has not been reviewed by xarray team member) labels on May 16, 2025
@dcherian (Contributor)

I guess we could just use the id of any unhashable value in there, or ask upstream to add a __hash__ method on their object.
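
Roughly, the first option could look something like this (just a sketch of the pattern, not actual xarray code; the helper name is made up):

from mpi4py import MPI

def _hash_or_id(value):
    # hypothetical helper: hash by value when possible, fall back to
    # object identity for unhashable objects such as MPI communicators
    try:
        return hash(value)
    except TypeError:
        return id(value)

# e.g. when building a cache key from backend kwargs that may hold a communicator
kwargs = {"driver": "mpio", "comm": MPI.COMM_WORLD}
key = tuple(sorted((k, _hash_or_id(v)) for k, v in kwargs.items()))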

@dcherian removed the needs triage (Issue that has not been reviewed by xarray team member) label on May 16, 2025
@leonfoks (Author)

Okay, obviously this is not a 100% fix, and I don't know what the knock-on effects are, but I just replaced hash with id in _HashedSequence.__init__ in file_manager.py:

  def __init__(self, tuple_value):
      self[:] = tuple_value
      self.hashvalue = id(tuple_value)

and it worked just fine with

import xarray as xr
from mpi4py import MPI

world = MPI.COMM_WORLD

# Open the file using xarray
ds = xr.open_dataset('mydata.nc', engine='h5netcdf', format='NETCDF4', driver='mpio', driver_kwds={"comm":world})

print(f"{world.rank=} {ds['variable'].values}")

On 256 cores split across 3 nodes I get the correct output:

...
world.rank=200 [1. 1. 1. 1. 1.]
world.rank=221 [1. 1. 1. 1. 1.]
world.rank=243 [1. 1. 1. 1. 1.]
world.rank=137 [1. 1. 1. 1. 1.]
world.rank=168 [1. 1. 1. 1. 1.]
...

I'm going to start developing with this change locally until there's a proper implementation.
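
If it helps anyone else, the same change can be applied at runtime as a monkeypatch instead of editing site-packages. Same idea and same caveats (cache keys become tied to object identity rather than value); the module and class names are taken from the traceback above:

from xarray.backends import file_manager

def _patched_init(self, tuple_value):
    # mirrors the original __init__, but keys on id() instead of hash()
    self[:] = tuple_value
    self.hashvalue = id(tuple_value)

file_manager._HashedSequence.__init__ = _patched_init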
