
Possible race condition when appending to an existing zarr #8876

Closed
5 tasks done
rsemlal-murmuration opened this issue Mar 25, 2024 · 4 comments · Fixed by #8459
@rsemlal-murmuration

What happened?

When appending to an existing zarr store along a dimension (to_zarr(..., mode="a", append_dim="x", ...)), if the dask chunking of the dataset being appended does not align with the chunking of the existing zarr store, the resulting consolidated zarr store may contain NaNs instead of the actual values it is supposed to have.

What did you expect to happen?

We would expect appending to zarr to behave the same as concatenating the datasets in memory (using concat) and writing the whole result to a new zarr store in one go.
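
For reference, a minimal sketch of the in-memory workflow described above (only illustrative: it reuses the ds1, ds2 and store names defined in the example below):

import xarray as xr

# Sketch of the expected-equivalent workflow: concatenate in memory, then
# write the combined result to a fresh zarr store in a single call.
# `ds1`, `ds2` and `store` are assumed to be defined as in the example below.
combined = xr.concat([ds1, ds2], dim="x")
combined.to_zarr(store, mode="w")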

Minimal Complete Verifiable Example

from distributed import Client, LocalCluster
import xarray as xr
import tempfile

ds1 = xr.Dataset({"a": ("x", [1., 1.])}, coords={'x': [1, 2]}).chunk({"x": 3})
ds2 = xr.Dataset({"a": ("x", [1., 1., 1., 1.])}, coords={'x': [3, 4, 5, 6]}).chunk({"x": 3})

with Client(LocalCluster(processes=False, n_workers=1, threads_per_worker=2)): # The issue only happens when threads_per_worker > 1
    for i in range(0, 100):
        with tempfile.TemporaryDirectory() as store:
            print(store)
            ds1.to_zarr(store, mode="w") # write first dataset
            ds2.to_zarr(store, mode="a", append_dim="x") # append second dataset

            rez = xr.open_zarr(store).compute() # open consolidated dataset
            nb_values = rez.a.count().item(0) # count non-NaN values
            if nb_values != 6:
                print("found NaNs:")
                print(rez.to_dataframe())
                break

MVCE confirmation

  • Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • Complete example — the example is self-contained, including all data and the text of any traceback.
  • Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • New issue — a search of GitHub Issues suggests this is not a duplicate.
  • Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

/tmp/tmptg_pe6ox
/tmp/tmpm7ncmuxd
/tmp/tmpiqcgoiw2
/tmp/tmppma1ieo7
/tmp/tmpw5vi4cf0
/tmp/tmp1rmgwju0
/tmp/tmpm6tfswzi
found NaNs:
     a
x     
1  1.0
2  1.0
3  1.0
4  1.0
5  1.0
6  NaN

Anything else we need to know?

The example code snippet provided above reproduces the issue.

Since the issue occurs randomly, the example loops a number of times and stops as soon as the issue appears.

In the example, when ds1 is first written, since it only contains 2 values along the x dimension, the resulting .zarr store has the chunking {'x': 2}, even though we called .chunk({"x": 3}).

Side note: This behaviour in itself is not problematic in this case, but the fact that the chunking is silently changed made this issue harder to spot.

However, when we try to append the second dataset ds2, which contains 4 values, the .chunk({"x": 3}) call at the beginning splits the dask array into 2 dask chunks, but in a way that does not align with the zarr chunks.

Zarr chunks:

  • chunk1 : x: [1; 2]
  • chunk2 : x: [3; 4]
  • chunk3 : x: [5; 6]

Dask chunks for ds2:

  • chunk A: x: [3; 4; 5]
  • chunk B: x: [6]

Both dask chunks A and B are supposed to write to zarr chunk3.
Depending on which one writes first, we can end up with NaN at x = 5 or x = 6 instead of the actual values (see the sketch below).
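
A small sketch of how the mismatch can be inspected (assuming the example above has already been run and ds1 has been written to store; the commented values are what we would expect, not output copied from the report):

# Compare the dask chunk boundaries of ds2 with the zarr chunks on disk.
print(ds2.a.data.chunks)                         # ((3, 1),): dask chunks A=[3, 4, 5], B=[6]
print(xr.open_zarr(store).a.encoding["chunks"])  # (2,): zarr chunk size inherited from ds1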

The issue obviously only happens when dask tasks run in parallel.
Passing safe_chunks=True when calling to_zarr does not seem to help.

We couldn't figure out from the documentation how to detect this kind of issue, or how to prevent it from happening (maybe using a synchronizer?).

Environment

INSTALLED VERSIONS

commit: None
python: 3.11.0rc1 (main, Aug 12 2022, 10:02:14) [GCC 11.2.0]
python-bits: 64
OS: Linux
OS-release: 5.15.133.1-microsoft-standard-WSL2
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.9.3-development

xarray: 2024.2.0
pandas: 2.2.1
numpy: 1.26.4
scipy: 1.12.0
netCDF4: 1.6.5
pydap: None
h5netcdf: 1.3.0
h5py: 3.10.0
Nio: None
zarr: 2.17.1
cftime: 1.6.3
nc_time_axis: 1.4.1
iris: None
bottleneck: 1.3.8
dask: 2024.3.1
distributed: 2024.3.1
matplotlib: 3.8.3
cartopy: None
seaborn: 0.13.2
numbagg: 0.8.1
fsspec: 2024.3.1
cupy: None
pint: None
sparse: None
flox: 0.9.5
numpy_groupies: 0.10.2
setuptools: 69.2.0
pip: 24.0
conda: None
pytest: 8.1.1
mypy: None
IPython: 8.22.2
sphinx: None

rsemlal-murmuration added the bug and needs triage labels on Mar 25, 2024

@rsemlal-murmuration
Author

OK, so if I understand correctly, with the merged pull request the code snippet will raise an error instead of silently writing NaNs. Is that right, @dcherian?

@dcherian
Contributor

dcherian commented Apr 2, 2024

Yes, it should raise an error. Can you verify please?

@rsemlal-murmuration
Author

The latest version of xarray now indeed raises an error in our internal data pipelines, unless we add safe_chunks=False.
Thanks for that!

We will try internally to find a "clean" implementation that still lets us append to our existing zarr dataset without running into this (a possible approach is sketched below).
And if we think it's relevant to other people, I may open a new issue/PR so that the xarray docs point out a recommended implementation for this use case.
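
For what it's worth, one possible approach (just a sketch, assuming the existing store's length along the append dimension is a multiple of its zarr chunk size; not an officially recommended pattern) is to rechunk the dataset being appended so that its dask chunks line up with the zarr chunks already on disk:

# Hypothetical sketch: align dask chunks with the on-disk zarr chunks before
# appending, leaving safe_chunks at its default so misalignment would still raise.
existing = xr.open_zarr(store)
zarr_chunksize = existing.a.encoding["chunks"][0]   # e.g. 2 in the example above
# Note: this only aligns cleanly if the existing length along "x" is a multiple
# of the zarr chunk size; otherwise the first appended chunk must be shortened.
ds2.chunk({"x": zarr_chunksize}).to_zarr(store, mode="a", append_dim="x")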

dcherian removed the needs triage label on Apr 3, 2024