Fixes int overflow when requesting large slices #389
Conversation
This reverts commit 73a63aa.
@jasonb5 @muryanto1 absolute champs, I'll take it. For a test, I am not sure how we can get around having a huge (>8 GB) dataset to read?
@jasonb5 @muryanto1 @durack1 I installed the nightly with:

conda create -n nightly -c cdat/label/nightly -c conda-forge cdat python=3.6

I still got zero values when loading a large array. My array size is (56623106, 137), which is 7,757,365,522 elements. This should be within the "unsigned long" limit. Did I get the right nightly version?
@hyma68 Please do:
@muryanto1 |
@hyma68 If you need any other cdat packages, you can do: |
@muryanto1, if it's not complicated to implement, could this fix be rolled out for all nightlies? Not doing so would create more work for you guys addressing continuing bug reports, when turning on broader nightly builds could solve this long-standing issue.
Fix for #383
The large slice shape (300, 60, 384, 320) was causing the variable holding the total number of elements to overflow. The variable was of type int, whose max value is 2,147,483,647, while the requested slice contained 2,211,840,000 elements, causing the overflow. I've changed the type to unsigned long; I don't foresee anyone reading more data than that into memory anytime soon. @muryanto1, how would you like to tackle writing a test for this?