Cast array size to int64 when loading from archive #7598
Conversation
I think it should be […].
Would it be possible to add a test? I guess this will be a somewhat heavyweight test since it has to write/read several gibibytes of data, but that's okay if you use the […].
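A test along those lines might look like the following rough sketch (pytest-style; the test name, fixture, and array sizes are illustrative, and it needs roughly 2 GiB of memory and disk, hence "heavyweight"):

```python
import numpy as np

def test_load_large_npz(tmp_path):
    # 2**15 * 2**16 = 2**31 elements: one more than a signed 32-bit
    # count can hold, so the size reduction overflows without the fix.
    a = np.zeros((2**15, 2**16), dtype=np.uint8)  # ~2 GiB of zeros
    fname = str(tmp_path / "big.npz")
    np.savez(fname, a=a)
    with np.load(fname) as archive:
        b = archive["a"]
    assert b.shape == a.shape and b.dtype == a.dtype
```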
Force-pushed from 7df0f25 to fad6296.
Didn't know about […].
@njsmith […]
@charris: oh, great point.
@drasmuss Needs a bit more work.
What do you think the behaviour should be in that case? I would lean towards just trying to open the file, and then failing with the normal out-of-memory error. We could also raise a custom error here though, if that seems like it wouldn't be informative enough.
Either way works for me. IIRC (I checked yesterday), […]
Prevents overflow errors for large arrays on systems where the default int type is int32.
Alright, changed it back to `int64`.
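For context on why a fixed-width `int64` is the safer choice here: numpy's default integer tracks the platform's C `long`, so it is 64 bits on most 64-bit Unix builds but only 32 bits on 32-bit builds and on 64-bit Windows. A quick check (a sketch; the first printed width depends on the build):

```python
import numpy as np

# The default integer type follows the platform's C long, so its width
# is build-dependent; a fixed-width int64 is not.
print(np.dtype(np.int_).itemsize * 8)   # 32 or 64, depending on platform
print(np.dtype(np.int64).itemsize * 8)  # always 64
```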
Thanks @drasmuss.
When loading an array from an `npz` archive, the size of the array is computed via `numpy.multiply.reduce(shape)`. This defaults to `int32` on some systems, including 64-bit systems where it is possible to create arrays large enough to cause that value to overflow. Here is a minimal example illustrating the problem, with the output on my system:
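(A sketch standing in for that example: it forces `dtype=np.int32` so the wraparound reproduces on any platform, the shape values are illustrative, and the expected output is shown in the comments.)

```python
import numpy as np

shape = (2**16, 2**16)  # each dimension fits comfortably in 32 bits...

# ...but their product, 2**32, does not, so a 32-bit accumulator
# silently wraps around:
print(np.multiply.reduce(shape, dtype=np.int32))  # 0
print(np.multiply.reduce(shape, dtype=np.int64))  # 4294967296
```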
I think the solution is just to change it to `numpy.multiply.reduce(shape, dtype=numpy.int64)`.
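If that route is taken, the change is a one-liner where the element count is computed when reading the array back; a sketch (assuming it lands in `read_array` in `numpy/lib/format.py`; surrounding code omitted):

```python
# numpy/lib/format.py, inside read_array() -- sketch, context omitted.
# Pinning the accumulator dtype keeps the element-count product from
# wrapping on platforms whose default integer is only 32 bits wide.
count = numpy.multiply.reduce(shape, dtype=numpy.int64)
```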