This repository has been archived by the owner on Sep 1, 2022. It is now read-only.

Fixed multiple issues regarding very large, filtered HDF5 data chunks #533

Merged
lesserwhirls merged 2 commits into Unidata:master from cwardgar:h5-oom on Apr 15, 2016

Conversation

cwardgar (Contributor) commented:

  • Ensure that no more than MAX_ARRAY_LEN bytes are allocated when decompressing a chunk. This avoids the "Negative initial size" overflow error and fixes KXL-349288 (a rough sketch of the approach follows this list).
  • Detect when a chunk is too big for Java (but still valid according to the HDF5 spec) and throw an exception with a useful message.
  • Catch OutOfMemoryErrors resulting from large chunks and rethrow them with a useful message.
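
A minimal sketch of the approach described above. The class, method, and constant names here are hypothetical, not the actual netCDF-Java identifiers: the declared uncompressed size of a filtered chunk arrives as a long, so it is range-checked and capped before being cast to an int for buffer allocation, and an OutOfMemoryError triggered by a legitimately huge chunk is rethrown with context.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.InflaterInputStream;

public class ChunkInflater {
  // Assumed value: largest array size Java can reliably allocate, with a little VM headroom.
  private static final int MAX_ARRAY_LEN = Integer.MAX_VALUE - 8;

  /**
   * Inflate one filtered (deflate-compressed) HDF5 chunk.
   *
   * @param compressed   stream positioned at the chunk's compressed bytes
   * @param expectedSize uncompressed chunk size declared in the file; may exceed 2^31 - 1
   */
  public static byte[] inflateChunk(InputStream compressed, long expectedSize) throws IOException {
    // A chunk larger than MAX_ARRAY_LEN is valid HDF5, but cannot fit in a Java byte[].
    if (expectedSize > MAX_ARRAY_LEN) {
      throw new IOException(String.format(
          "Filtered data chunk is %d bytes, which exceeds the largest array Java can allocate (%d bytes).",
          expectedSize, MAX_ARRAY_LEN));
    }

    // Cap the initial buffer size so the cast to int can never go negative
    // (an overflowed cast is what produces the "Negative initial size" error).
    int initialSize = (int) Math.min(expectedSize, MAX_ARRAY_LEN);

    try (InflaterInputStream in = new InflaterInputStream(compressed)) {
      ByteArrayOutputStream out = new ByteArrayOutputStream(initialSize);
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
      }
      return out.toByteArray();
    } catch (OutOfMemoryError e) {
      // Rethrow with context so the user knows a too-large chunk, not a JVM bug, is at fault.
      throw new IOException(String.format(
          "Ran out of heap while inflating a %d-byte chunk; try increasing -Xmx.", expectedSize), e);
    }
  }
}
```

Capping the initial ByteArrayOutputStream size matters because its constructor throws IllegalArgumentException("Negative initial size: ...") when handed the negative value produced by an overflowed int cast.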

lesserwhirls (Collaborator) commented:

Rebase bump @cwardgar

cwardgar added 2 commits April 15, 2016 16:20
lesserwhirls merged commit 69c0a9d into Unidata:master on Apr 15, 2016
cwardgar deleted the h5-oom branch on June 22, 2016