
feat: Ti.Blob#toArrayBuffer(), significant perf increase for Node.js Buffer shim #11810

Merged (8 commits) on Aug 13, 2020

Conversation

@sgtcoolguy (Contributor) commented Jul 8, 2020

Description:
This is a pretty mixed PR here, consisting of:

  • Significant performance increases for our Node.js Buffer shim.
    • This is achieved by forking Buffer into two subclasses: FastBuffer, which acts like Node and effectively extends the builtin Uint8Array; and SlowBuffer, which wraps a Ti.Buffer and whose use should be limited solely to the write cases we want to pass through to the underlying Ti.Buffer (see the sketch after this list).
    • In our image snapshot comparison tests, reading a 90x90 PNG from a buffer went from ~2500ms to ~5ms.
  • Supporting Java byte[] as a return type on Android by converting it to a JS ArrayBuffer holding a copy of the bytes
  • Exposing a native Ti.Blob#toArrayBuffer() on iOS and Android
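A minimal sketch of the FastBuffer/SlowBuffer split described above (the class names come from this PR description; the actual shim in the SDK differs in detail, and the Ti.Buffer-backed accessors shown here are illustrative assumptions):

// FastBuffer: behaves like a Node.js Buffer backed directly by a Uint8Array,
// so indexed reads/writes never cross the JS <-> native bridge.
class FastBuffer extends Uint8Array {
}

// SlowBuffer: wraps a Ti.Buffer and forwards writes to it, for the cases
// where the underlying Ti.Buffer must observe the changes.
class SlowBuffer {
	constructor(tiBuffer) {
		this._tiBuffer = tiBuffer;
	}

	get length() {
		return this._tiBuffer.length;
	}

	setByte(index, value) {
		this._tiBuffer[index] = value; // one bridge crossing per write
	}
}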

TODO

@build (Contributor) commented Jul 8, 2020

Fails
🚫 Tests have failed, see below for more information.
Warnings
⚠️ This PR has its milestone set to 9.3.0, but the version defined in package.json is 9.2.0. Please either:
  • Update the milestone on the PR
  • Update the version in package.json
  • Hold the PR to be merged later, after a release and version bump on this branch
Messages

💾 Here's the generated SDK zipfile.

📖 ✊ The commits in this PR match our conventions! Feel free to Rebase and Merge this PR when ready.
📖 ❌ 1 test has failed. There is 1 test failing and 705 skipped out of 8003 total tests.

Tests:

Classname: android.emulator.Titanium.Android
Name: activity callbacks (9)
Time: 5.004
Error: timeout of 5000ms exceeded
    at Titanium.<anonymous> (/ti-mocha.js:6535:53120)

Generated by 🚫 dangerJS against 4e11ea7

@hansemannn (Collaborator)

Controversial question: Does a single Titanium developer use these APIs?! I feel like the need for a stable liveview and modern native platform APIs is much more critical than these Node.js shims, which mainly increase the app size.

@sgtcoolguy requested review from janvennemann and removed the request for a team on July 9, 2020 at 14:34
@sgtcoolguy (Contributor, Author)

@hansemannn We use them internally for our test suite, for 3rd-party npm packages in some internal apps, and for some of the integration support around things like Jan's webpack integration. We've never really publicly announced or documented the Node.js shims, so I wouldn't anticipate external usage yet. The long-term goal is to provide a large enough set of shims that we can announce them and people can use a large subset of Node.js code/npm packages interchangeably on Titanium. It's not a small or short effort, nor will it all magically come at once.

Additionally, they help surface bugs and performance issues in our native APIs (e.g. Ti.Buffer is very, very slow if you actually read/write byte-by-byte; this PR lets the Node shim avoid that penalty in many cases, and it also points to a way we could use direct memory sharing to mostly eliminate that performance toll).
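For illustration, a hedged sketch of the two access patterns (assuming blob is a Ti.Blob and tiBuffer is an already-filled Ti.Buffer, e.g. from a Ti.Stream read; the exact shim internals are not shown):

// Slow path: every indexed read on a Ti.Buffer crosses the JS <-> native bridge.
let sum = 0;
for (let i = 0; i < tiBuffer.length; i++) {
	sum = (sum + tiBuffer[i]) & 0xFF; // one bridge crossing per byte
}

// Fast path with this PR: dump the bytes once, then stay in pure JS.
const bytes = new Uint8Array(blob.toArrayBuffer()); // single native call, copies the bytes
sum = 0;
for (let i = 0; i < bytes.length; i++) {
	sum = (sum + bytes[i]) & 0xFF; // no bridge crossings
}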

As to "modern native platform APIs", what specifically does that mean? A wider set of the native frameworks exposed in modules/the SDK?

@hansemannn (Collaborator) commented Jul 13, 2020

Thank you Chris! tl;dr: You know how much I appreciate your work and effort that you put into every part of Titanium - no question! 🙃

What I was wondering is what developers would benefit from right now to see Titanium as a competing cross-platform SDK. Obviously, better marketing across the board. But code-wise, your excellent knowledge across all platforms could also, for example, help polish up macOS support (#11279), which is becoming a more and more fundamental part of iOS development. Also things like general parity issues between iOS/Android, or (probably my personal #1) a modern liveview rewrite for 2020 that can keep up with other frameworks, e.g. by providing hot-reloading / HMR (and with it: proper state management), which has been discussed for years but never made it.

These are the primary issues nearly every new, advanced and even senior Ti dev runs into every day. Shims are also helpful once everything around them is stable and developer friendly, but unfortunately the issues above heavily influence the dev experience right now and make it hard to stay with Ti.

jsArray->Set(context, (uint32_t) i, Integer::New(isolate, arrayElements[i]));
}

return jsArray;
Contributor

I looked at our "v8.h" header file. We should be able to use memcpy() to copy the bytes over, which should be significantly faster, especially for large arrays. Would you mind giving this a go please?

size_t byteCount = env->GetArrayLength(javaByteArray);
Local<Array> jsArray = Array::New(isolate, byteCount);
if (byteCount > 0) {
	void* sourcePointer = (void*)(env->GetByteArrayElements(javaByteArray, 0));
	void* destinationPointer = jsArray->GetContents().Data();
	memcpy(destinationPointer, sourcePointer, byteCount);
}
return jsArray;

An even better way of doing this is to "pin" the Java byte array in memory and wrap it with a V8 ArrayBuffer via the ArrayBuffer::New(Isolate*, void*, size_t) API... but... unfortunately... Android JNI does not guarantee it will pin the array in memory and may make a copy instead. So, this idea is out. (Bummer.)
https://developer.android.com/training/articles/perf-jni#primitive-arrays

Contributor Author

I don't think this will work, because GetContents().Data() is an ArrayBuffer API, not an Array API. So I could conceivably return an ArrayBuffer here instead of an Array. I alluded to it in the PR body, but basically the most efficient mechanism here is to make use of ByteBuffer in Java and share the underlying memory with the Uint8Array/TypedArray in the JS engine. But that'd require some changes to our Java APIs to use a ByteBuffer in place of passing a byte[] around.

I'll see if I can't just swap to have it return an ArrayBuffer under the hood. It'll require tweaks in other places, but that may be a better fit.

Contributor Author

Ok, so it looks like the Web Blob actually has an API for this! https://developer.mozilla.org/en-US/docs/Web/API/Blob/arrayBuffer

They use a method called arrayBuffer() that returns a Promise which resolves to an ArrayBuffer. Given that, it probably makes sense to try and align here. The obvious issue right now is our lack of native Promise APIs (see #10554).

So in the interim, perhaps a Ti.Blob.toArrayBuffer() which directly/synchronously returns an ArrayBuffer (rather than the initial PR proposal of toUint8Array() returning a Uint8Array synchronously); and then add a JS-level shim providing Ti.Blob.arrayBuffer(), which returns a Promise and uses the sync toArrayBuffer() under the hood.
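A minimal sketch of what such a JS-level shim could look like (whether Ti.Blob.prototype can be patched this way depends on how the proxy prototype is exposed; this is an assumption for illustration, not the PR's actual implementation):

// Web-style Blob.arrayBuffer(): async wrapper over the new synchronous
// Ti.Blob#toArrayBuffer() added by this PR.
Ti.Blob.prototype.arrayBuffer = function arrayBuffer() {
	return Promise.resolve(this.toArrayBuffer());
};

// Usage mirrors the Web API:
// const buf = await someBlob.arrayBuffer();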

Contributor

We could still try to make use of pinned array elements?

size_t byteCount = env->GetArrayLength(javaByteArray);
if (byteCount == 0) {
  return ArrayBuffer::New(isolate, 0);
}

jboolean isCopy = JNI_FALSE;
void* src = (void*)(env->GetByteArrayElements(javaByteArray, &isCopy));

if (isCopy == JNI_FALSE) {
  return ArrayBuffer::New(isolate, src, byteCount);
}

Local<ArrayBuffer> jsArray = ArrayBuffer::New(isolate, byteCount);
void* dst = jsArray->GetContents().Data();
memcpy(dst, src, byteCount);
return jsArray;

Contributor

First, the code I posted is missing a call to ReleaseByteArrayElements(). This is because GetByteArrayElements() increments the reference count, so we must call the release method to decrement it and avoid a memory leak.

If we were to take advantage of "pinning", then we shouldn't release the Java byte array reference until the V8 array object has been garbage collected.

Contributor

It's possible to use GetPrimitiveArrayCritical to guarantee pinned access. However, using this locks the GC while the array is being used.

It looks like ART implements a different behaviour, where GC is still possible but compacting GCs that would move the array are not.
https://android.googlesource.com/platform/art/+/refs/heads/master/runtime/jni/jni_internal.cc#2099
https://android.googlesource.com/platform/art/+/refs/heads/master/runtime/gc/heap.cc#2073

Similarly, GetStringCritical also exists.
https://android.googlesource.com/platform/art/+/refs/heads/master/runtime/jni/jni_internal.cc#1907

Maybe we could experiment with this?

Contributor Author

I think the correct long-term move here is to use ByteBuffer and https://docs.oracle.com/javase/7/docs/technotes/guides/jni/spec/functions.html#NewDirectByteBuffer

I opened https://jira.appcelerator.org/browse/TIMOB-28039 to track this.

Basically the underlying ArrayBuffer in the engine can give us a pointer to the memory and we can allocate a direct ByteBuffer to expose that to Java. On iOS, we can create an ArrayBuffer and have an NSData point at the same memory pool.

So for this PR, I'm really trying not to break our APIs, but just copy the byte[] to generate a JS ArrayBuffer as a quicker means of "dumping" bytes, rather than traversing the bridge back and forth to read/write byte-by-byte. The current Ti.Buffer is written around a byte[] and getBuffer() is a "module" API.

But I definitely think we can use memory sharing long-term to underpin Ti.Buffer and avoid traversing the bridge at all for reads/writes, while still having both the native and JS sides refer to the same bytes. That is likely a larger change than I wanted to get into here, hence the separate ticket. The code as-is makes some assumptions about the buffer being a thin wrapper around a byte[], and moving to ByteBuffer is likely a fairly large change (perhaps breaking for modules).

@sgtcoolguy changed the title from "feat: Ti.Blob#toUint8Array(), significant perf increase for Node.js Buffer shim" to "feat: Ti.Blob#toArrayBuffer(), significant perf increase for Node.js Buffer shim" on Jul 21, 2020

/**
* @param {integer[]|Buffer|string} value value we're wrapping
* @param {string|number} [encodingOrOffset]
Contributor

  • ⚠️ common/Resources/ti.internal/extensions/node/buffer.js line 102 – Missing JSDoc parameter description for 'encodingOrOffset'. (valid-jsdoc)

@sgtcoolguy (Contributor, Author)

Can't update the apidoc until tidev/docs-devkit#41 gets merged and released, and we update appcelerator/doctools and this repo's package.json to point at the new version of titanium-docgen.

/**
* @param {integer[]|Buffer|string} value value we're wrapping
* @param {string|number} [encodingOrOffset]
* @param {number} [length]
Contributor

  • ⚠️ common/Resources/ti.internal/extensions/node/buffer.js line 103 – Missing JSDoc parameter description for 'length'. (valid-jsdoc)

if (byteOffset === undefined) {
byteOffset = 0;
} else {
byteOffset = +byteOffset;
Contributor

This looks like a typo.
