Java byte array max size check before copying long-sized data from jni bridge #3849
Comments
I agree that the error should be as clear as possible, but I wonder if it is not more appropriate to throw a […]
thanks @adamretter, I can change PR to use […]. My thinking was inspired by java:

```java
public class Foo {
    public static void main(String[] args) {
        byte[] array = new byte[Integer.MAX_VALUE];
    }
}
```

```
$ javac Foo.java ; java Foo
Exception in thread "main" java.lang.OutOfMemoryError: Requested array size exceeds VM limit
        at Foo.main(Foo.java:3)
```

I used […]
@azagrebin Okay, I can understand your thinking. A couple of further comments: […]

@adamretter […]
Closed by #3850
…ni (facebook#3850)

Summary: to address issue facebook#3849
Closes facebook#3850

Differential Revision: D8695487
Pulled By: sagar0
fbshipit-source-id: 04baeb2127663934ed1321fe6d9a9ec23c86e16b
This issue partially addresses #2383.
When a user merges a lot of data into one value, its size may grow without bound. When the user then gets the value as a `byte[]` in the Java client, the actual merged size may exceed the JVM's limit on Java array length.

Currently, the size of the Java array is computed by an explicit cast of the value's underlying `size_t`:

rocksdb/java/rocksjni/portal.h, line 4297 in 66c7aa3
The problem is that if `bytes.size()` exceeds the range of a Java 32-bit `int`, the cast overflows `jlen`, which may even become negative. The result can be an unclear `NegativeArraySizeException` or `ArrayIndexOutOfBoundsException` on the user side. The suggestion is to check for the size overflow and throw a clearer `RocksDBException` with a description of the problem:
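The overflow and the proposed check can be sketched in plain Java terms (the real fix lives in the C++ JNI layer; the `checkedArrayLength` helper and the use of a plain `RuntimeException` here are illustrative assumptions, not RocksDB API):

```java
public class ArraySizeCheck {
    // Hypothetical helper: validate a native value size before
    // allocating a Java array of that length.
    static int checkedArrayLength(long nativeSize) {
        if (nativeSize > Integer.MAX_VALUE) {
            // Mirrors the suggested clear exception with a descriptive message
            // (the real PR would throw RocksDBException from the JNI bridge).
            throw new RuntimeException(
                "Requested value size (" + nativeSize
                + ") exceeds the maximum Java array length ("
                + Integer.MAX_VALUE + ")");
        }
        return (int) nativeSize;
    }

    public static void main(String[] args) {
        long oversized = 3_000_000_000L;     // larger than Integer.MAX_VALUE
        // A naive narrowing cast silently overflows to a negative int,
        // which is what later produces NegativeArraySizeException.
        System.out.println((int) oversized);

        try {
            checkedArrayLength(oversized);
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With the explicit check, the user sees a message naming the oversized value instead of an opaque `NegativeArraySizeException` thrown from deep inside the array allocation.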