ValueError: memoryview is too large (dask.array.histogram) #11046
I strongly recommend not passing the computed array. I suspect that the bug you are running into is actually already fixed in dask/distributed#8507, but you still wouldn't have a good time submitting 12 TB from your client to the scheduler.
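To illustrate the point (a toy sketch; `darr` and its shape are made up for the example): keeping the array lazy means the task graph only carries task definitions, whereas calling `.compute()` pulls the whole result back to the client as a plain NumPy array, which at 12 TB is infeasible to ship anywhere.

```python
import numpy as np
import dask.array as da

# Toy stand-in for the real array; at 12 TB you could never materialize it.
darr = da.random.random((1_000, 8), chunks=(250, 8))

# Lazy path: nothing is materialized on the client.
hist_lazy, _ = da.histogram(darr, bins=10, range=[0, 1])

# Eager path: .compute() pulls everything onto the client first.
materialized = darr.compute()
assert isinstance(materialized, np.ndarray)

print(int(hist_lazy.compute().sum()))  # 8000 == darr.size
```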
Unfortunately that function requires a dask.array and a numpy.array; otherwise it would of course be nicer not to do that.

```python
if isinstance(Y, da.Array):
    raise TypeError("`Y` must be a numpy array")
```

If I batch the materialized array into 100k-row slices (which reduces the graph size) it works, so you're probably right!

```python
hists = []
batch_size = 100_000
# ceil division so the final partial batch is not dropped
n_batches = (darr.shape[0] + batch_size - 1) // batch_size
for batch in tqdm(range(n_batches)):
    distances = pairwise_distances(
        darr,
        darr[
            batch * batch_size : min((batch + 1) * batch_size, darr.shape[0])
        ].compute(),
        metric="cosine",
    )
    hist, bins = da.histogram(distances, bins=100, range=[0, 2])
    hists.append(hist)

da.compute(hists)  # works, but still computes everything at once
```

Do I have the patch if I install from source?
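One way to avoid the final all-at-once `da.compute(hists)` is to compute each batch's histogram inside the loop and accumulate the counts in a plain NumPy array. This is a sketch with toy sizes, and it histograms the batch itself as a self-contained placeholder for the real `pairwise_distances` call:

```python
import numpy as np
import dask.array as da

# Toy stand-in for the real input; shapes are illustrative only.
darr = da.random.random((1_000, 4), chunks=(100, 4))

total = np.zeros(100)
batch_size = 250
n_batches = (darr.shape[0] + batch_size - 1) // batch_size  # ceil division
for batch in range(n_batches):
    block = darr[batch * batch_size : (batch + 1) * batch_size]
    # Placeholder for the real distance computation: histogram the
    # block directly so the sketch runs on its own.
    hist, _ = da.histogram(block, bins=100, range=[0, 1])
    total += hist.compute()  # compute per batch, accumulate in NumPy

print(int(total.sum()))  # equals darr.size
```

Each iteration releases its intermediate results before the next one starts, so peak memory stays bounded by one batch.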
Sorry, I missed that. I haven't tried to understand your batching code to verify that it is correct. If it is, maybe you want to contribute it to dask-ml, because a "proper" dask algorithm would work similarly; I don't know enough about the pairwise_distances algorithm to tell. However, what I can tell you is that if you embed a 12 TB materialized array in the task graph, it has to be submitted from the client through the scheduler.
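To make the graph-size point concrete (a toy sketch; sizes and variable names are invented): a materialized NumPy array embedded via `da.from_array` travels inside the serialized graph, while an equivalent lazy array's graph carries only task definitions.

```python
import pickle
import numpy as np
import dask.array as da

big = np.ones((1_000, 100))  # ~800 kB once materialized

embedded = da.from_array(big, chunks=(100, 100))  # data lives in the graph
lazy = da.ones((1_000, 100), chunks=(100, 100))   # only task definitions

size_embedded = len(pickle.dumps(dict(embedded.__dask_graph__())))
size_lazy = len(pickle.dumps(dict(lazy.__dask_graph__())))
print(size_embedded > size_lazy)  # the data travels with the graph
```

Scale `big` up to 12 TB and the serialized graph itself becomes 12 TB, which is exactly what the client would try to push to the scheduler.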
I just checked, and this was already released in 2024.2.1 (the version you are running). By breaking up the array you are avoiding all sorts of problems, so if this is possible, go for it.
No worries, I just like to leave code snippets in case anyone has the same issue, so they're not faced with the unhelpful "nvm, I solved it". I can open a PR at some point and discuss this over there.
For example:

```python
d = da.arange(5, chunks=2)
e = da.arange(5, chunks=2)
f = da.map_blocks(lambda a, b: a + b**2, d, e)
f.compute()
```

We need bigger graphs!! /s (but maybe actually)
Describe the issue:
I'm trying to compute a histogram over a 12 TB array of pairwise distances, and it fails. It either raises
ValueError: memoryview is too large
or the computation is simply cancelled.
Minimal Complete Verifiable Example:
Anything else we need to know?:
Just computing the histogram of such a large matrix works.
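A small sketch of what "just computing the histogram works" looks like (toy sizes; nothing here is from the original report): with a purely lazy array, `da.histogram` scales fine because no data ever sits on the client.

```python
import dask.array as da

# A large-but-lazy array: nothing is materialized on the client.
x = da.random.random((10_000, 100), chunks=(1_000, 100))
hist, bins = da.histogram(x, bins=100, range=[0, 1])
print(int(hist.compute().sum()))  # equals x.size (1_000_000)
```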
Environment: