Avoid traversing entire arrays when extracting shape from objects in java #24833
Conversation
It's far better to just not supply a multidimensional array when constructing a tensor. Java multidimensional arrays have unfixable performance problems if you want to use them as tensors. Supplying direct (or non-direct) byte buffers should be faster in all cases. I'll take a look through.
```java
int nextDim = curDim + 1;
// Avoid traversing the entire array (autoboxing its values) when the next dimension is equal
// to the shape's length
if (shape.length != nextDim) {
```
Now that you've hoisted the check from line 495 up outside the for loop, you should remove the check at line 495, as it will never fail.
done!
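For reference, a simplified sketch of the resulting pattern (an illustration only, not the exact onnxruntime source, which also handles ragged arrays and type checks): the depth check is hoisted above the loop, so the elements of the last dimension are never visited through `Array.get`.

```java
import java.lang.reflect.Array;

// Simplified sketch of the hoisted check; `shape` is assumed to be pre-sized to
// the number of dimensions and filled with -1 before the first call.
static void extractShape(long[] shape, int curDim, Object obj) {
  int curLength = Array.getLength(obj);
  if (shape[curDim] == -1) {
    shape[curDim] = curLength; // record this dimension's length on first visit
  }
  int nextDim = curDim + 1;
  // Avoid traversing the entire array (autoboxing its values) when the next
  // dimension is equal to the shape's length.
  if (shape.length != nextDim) {
    for (int i = 0; i < curLength; i++) {
      extractShape(shape, nextDim, Array.get(obj, i));
    }
  }
}
```

With the check at the top of the recursive call instead, `Array.get` would still box every leaf element just for the callee to return immediately.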
I was using FloatBuffer.allocate to supply the array, but for some reason memory was leaking. Also, FloatBuffer.allocate creates a heap buffer under the hood, which then gets copied into a new direct byte buffer. This leads to the creation of DirectByteBuffers, and when direct memory fills up, a major GC is triggered. I still need to try a direct-buffer pool managed by our application to avoid major GCs and byte buffer copies, though. Using the method that takes an object as an argument fixed the leak I was experiencing, but the autoboxing and iteration add some overhead as well.
Yeah, it needs a direct memory allocation somewhere: either in the JVM, or we have to malloc in ORT, copy the data out of the JVM, then write to the new memory. If your inputs have a known maximum size then preallocate direct buffers and reuse them. It shouldn't leak, though it might look like that until the GC clears the buffers, and I'm not sure how quickly the newer collectors like ZGC do that.
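A minimal sketch of that suggestion, assuming inputs with a known upper bound (`MAX_ELEMENTS`, the class name, and the single-threaded reuse are illustrative assumptions, not from this PR):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtException;

public final class ReusableDirectInput {
  private static final int MAX_ELEMENTS = 100; // illustrative upper bound on input size
  private final OrtEnvironment env = OrtEnvironment.getEnvironment();
  // Allocated once; reused for every request on this thread.
  private final FloatBuffer buffer =
      ByteBuffer.allocateDirect(MAX_ELEMENTS * Float.BYTES)
          .order(ByteOrder.nativeOrder())
          .asFloatBuffer();

  public OnnxTensor toTensor(float[] values, long[] shape) throws OrtException {
    buffer.clear();
    buffer.put(values);
    buffer.flip();
    // The buffer-based factory skips the reflective shape extraction entirely.
    return OnnxTensor.createTensor(env, buffer, shape);
  }
}
```

Note that a tensor created from a direct buffer can share that buffer's memory, so finish with (and close) one tensor before overwriting the buffer for the next request.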
Inputs might not always be the same size, because there are A/B experiments using different models that may have different input sizes. I'm not sure why they were leaking, since the JVM has Xmx36g and MaxDirectMemorySize=2g, yet the app runs out of memory (64 GB) anyway after a few days. We're using v1.18.0. Still, I'm going to give it a shot using one big buffer and slicing it into smaller buffers to supply to OnnxTensors, instead of using heap buffers!
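One possible shape of that "one big buffer, sliced" idea — a sketch only; the arena size, the names, and the rule that slices must not outlive `reset()` are assumptions:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Sketch: carve per-tensor FloatBuffer slices out of one preallocated direct buffer,
// then reset the cursor between requests instead of allocating new direct memory.
public final class DirectArena {
  private final ByteBuffer arena =
      ByteBuffer.allocateDirect(1 << 20).order(ByteOrder.nativeOrder()); // 1 MiB, illustrative

  /** Returns a float view over the next {@code elements} floats of the arena. */
  public FloatBuffer nextSlice(int elements) {
    int bytes = elements * Float.BYTES;
    ByteBuffer window = arena.duplicate(); // shares the arena's memory
    window.limit(window.position() + bytes);
    arena.position(arena.position() + bytes); // advance the allocation cursor
    return window.slice().order(ByteOrder.nativeOrder()).asFloatBuffer();
  }

  /** Call once per request, after all tensors built from this arena are closed. */
  public void reset() {
    arena.clear();
  }
}
```

Each slice could then be passed to the buffer-accepting `OnnxTensor.createTensor` overload without any per-tensor direct allocation.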
Hi @Craigacp! How are you doing? Who else needs to review/approve this PR?
While I maintain the Java API, I don't work at MS so I can't do the final approval. You need to agree to the CLA before anyone from MS will look at it, and I'd missed that that hadn't happened yet. |
@fedebonisconti please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.
Description
Avoids calling `TensorInfo#extractShape` recursively when `curDim + 1 == shape.length`, improving performance by skipping the traversal of the entire array on the last DFS iteration, along with the associated autoboxing and method calls.
Motivation and Context
Our Java REST API makes predictions using 100 floats per tensor and processes hundreds of tensors per request, which results in hundreds of thousands of `OnnxTensor` objects being created per second. While profiling the app, we noticed that about 35% of the CPU samples were spent in the `TensorInfo#extractShape` method, particularly in the `Array.get(obj, i)` call, which is made for every element in the array. Also, `Array.get(obj, i)` returns an `Object`, so `float`s (or any other primitive type) get autoboxed into `Float` objects, which are then subject to garbage collection.

A benchmark using a `float[1][100]`:
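The original benchmark code was in a collapsed block; a minimal JMH-style sketch of such a benchmark (class and method names are illustrative, not the author's) could be:

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;
import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtException;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class ExtractShapeBenchmark {
  private final OrtEnvironment env = OrtEnvironment.getEnvironment();
  private final float[][] input = new float[1][100];

  @Benchmark
  public void createTensorFromArray(Blackhole bh) throws OrtException {
    // Exercises TensorInfo#extractShape via the Object-accepting factory method.
    try (OnnxTensor t = OnnxTensor.createTensor(env, input)) {
      bh.consume(t);
    }
  }
}
```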
Disclaimer: I wasn't able to build onnxruntime locally to run the full inference test suite on my M3 because I'm having issues with some dependencies, but I added tests for `TensorInfo.constructFromJavaArray(obj)` and they pass on the `main` branch as well.