Codec does not support buffers of > 2147483647 bytes - reason for this error message #487
Hi @mrava87, it is due to (1): the chunks are too large for the compressor codec to handle. Some compressor codecs, like the default Blosc codec, have a maximum buffer size that they can accept during encoding. The error message originates from here; any suggestions for improving it would be welcome. It might also be possible to raise an exception earlier, from within zarr at array creation time, at least for codecs that use the same convention of a class attribute named …
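The limit in the error message is 2**31 - 1 bytes (just under 2 GiB). A minimal sketch of how to check whether a chosen chunk shape would exceed it, assuming that limit (the helper `chunk_nbytes` is illustrative, not part of zarr's API):

```python
import numpy as np

# Assumed limit from the error message: buffers of > 2147483647 bytes
# cannot be encoded by the (Blosc) compressor codec.
BLOSC_MAX_BUFFER_SIZE = 2**31 - 1

def chunk_nbytes(chunk_shape, dtype):
    """Size in bytes of one uncompressed chunk."""
    return int(np.prod(chunk_shape)) * np.dtype(dtype).itemsize

# A 20000 x 20000 float64 chunk is ~3.2 GB -- too large for the codec.
print(chunk_nbytes((20000, 20000), "float64") > BLOSC_MAX_BUFFER_SIZE)

# Halving one chunk dimension brings it back under the limit.
print(chunk_nbytes((10000, 20000), "float64") > BLOSC_MAX_BUFFER_SIZE)
```

Reducing any one chunk dimension (via the `chunks` argument at array creation) is enough to get each buffer under the limit; the overall array shape can stay the same.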
Thanks a lot @alimanfoo! That makes more sense now :) I think something like ‘Consider reducing chunk size’ as part of the message could help users understand where the problem is and how to solve it. Perhaps also adding the current size of arr.nbytes could be useful to understand how much smaller the chunks should be. I agree with raising the exception early if that does not require much change to the current code base. I can try to make a proposal PR for both if you think that makes sense :)
On Mon, 21 Oct 2019 at 06:53, Matteo Ravasi ***@***.***> wrote:

> Thanks a lot @alimanfoo <https://github.com/alimanfoo>! That makes more sense now :)
> I think something like, ‘Consider reducing chunk size’ as part of the message could help understand where the problem is and how to solve it. And perhaps also adding the current size of arr.nbytes could be useful to understand how much smaller the chunks should be.

The slight complication here is that the error comes from within the numcodecs package, which is most often used with zarr to compress chunks, but is separate from zarr and can also be used in other applications. So if we added a message like "Consider reducing chunk size" then that might not make sense when numcodecs is being used elsewhere. Perhaps a more general message, like "Consider reducing the size of the buffer you are trying to encode."?

> I agree with the early exception raise if that does not require much change to current code base.

Cool. If we raise an early exception, that would come from within zarr, so the error message could be very specific about chunks being too big for the compressor codec.

> I can try to make a proposal PR for both if you think makes sense :)

PRs welcome :)
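The early exception discussed above could look roughly like this sketch: a validation of the chunk shape against the codec's limit at array-creation time, so the failure surfaces before any write. The function name, message wording, and the hard-coded limit are all assumptions for illustration, not zarr's actual implementation:

```python
import numpy as np

# Hypothetical codec limit (Blosc's, per this issue).
MAX_BUFFER_SIZE = 2**31 - 1

def validate_chunks(chunks, dtype, max_buffer_size=MAX_BUFFER_SIZE):
    """Raise at array-creation time if one chunk would exceed the codec limit."""
    nbytes = int(np.prod(chunks)) * np.dtype(dtype).itemsize
    if nbytes > max_buffer_size:
        raise ValueError(
            f"Chunk shape {chunks} with dtype {dtype} is {nbytes} bytes, "
            f"which exceeds the codec's maximum buffer size of "
            f"{max_buffer_size} bytes. Consider reducing the chunk size."
        )

validate_chunks((10000, 20000), "float64")  # under the limit, no error
try:
    validate_chunks((20000, 20000), "float64")  # over the limit
except ValueError as e:
    print(e)
```

In practice such a check would only apply to codecs that actually expose a maximum buffer size, which is why the thread mentions a conventional class attribute on the codec.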
Hello,
I have problems understanding why this error occurs when I run:

```
z[start1:end1, start2:end2] = nparray
```

If this is due to the size of the chunks, could an error be returned at initialization rather than when trying to write to a zarr file? If it is due to the second case, how would you suggest transferring multiple npz files to a zarr file, noting that as part of the transfer process the user may want to manipulate the input arrays before writing them to zarr?
Either way, it would be great to have a more understandable error message (at least converting the number of bytes into a human-readable format would be a good idea).
Thank you!
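The human-readable formatting suggested here could be as simple as the following sketch (`human_bytes` is an illustrative helper, not an existing zarr or numcodecs function):

```python
def human_bytes(n: int) -> str:
    """Render a byte count like 2147483647 as '2.0 GiB'."""
    size = float(n)
    for unit in ("B", "KiB", "MiB", "GiB"):
        if size < 1024:
            return f"{size:.1f} {unit}"
        size /= 1024
    return f"{size:.1f} TiB"

print(human_bytes(2147483647))  # → 2.0 GiB
```

An error message built with this would read "buffers of > 2.0 GiB" rather than "buffers of > 2147483647 bytes", which makes the mismatch with the chunk size much easier to spot.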