TL;DR: grpc-web uses goog.crypt.base64.encodeByteArray to encode the payload. This function can only encode at a throughput of ~16 MB/s, while the js-base64 package is ~50x faster. Rewriting encodeByteArray using ideas from js-base64 (or another fast base64 package) would be greatly appreciated.
Background: we have a grpc-web-based app that performs bulk data uploads, and it can sustain only about ~15 MB/s regardless of the underlying network speed. The bottleneck is the base64 encoder used by grpc-web (goog.crypt.base64), which is quite slow. In a quick benchmark of goog.crypt.base64.encodeByteArray against Base64.fromUint8Array (from https://www.npmjs.com/package/js-base64), the former was about 50x slower: encoding a random 16 MB Uint8Array payload takes ~1000ms with goog.crypt.base64 but only ~19ms with js-base64.
FYI, here's the Node.js script that I used.
require('google-closure-library');
goog.require('goog.crypt.base64');
const {Base64} = require('js-base64');

// Create a Uint8Array with random contents.
function randomArray(len) {
  const a = new Uint8Array(len);
  for (let i = 0; i < len; i += 1) {
    a[i] = Math.floor(Math.random() * 256);
  }
  return a;
}

function runBench(label, len, encode, decode) {
  // Run an encode/decode round trip once and verify that the two
  // functions are inverses of each other.
  const bin = randomArray(len);
  const str = encode(bin); // base64 string
  const bin1 = decode(str); // should be the same as bin
  if (bin1.length != bin.length) {
    throw Error(`wrong length: got ${bin1.length} want ${bin.length}`);
  }
  for (let i = 0; i < bin.length; i += 1) {
    if (bin[i] != bin1[i]) {
      throw Error(`wrong data at ${i}: got ${bin1[i]} want ${bin[i]}`);
    }
  }
  // Run the given function in a loop for at least one second and
  // report the mean runtime per call.
  const run = (cb) => {
    const startTime = new Date();
    let rep = 0;
    for (;;) {
      for (let i = 0; i < 10; i += 1) {
        cb();
        rep += 1;
      }
      const elapsed = new Date() - startTime;
      if (elapsed > 1000) {
        return elapsed / rep;
      }
    }
  };
  const encodeTime = run(() => encode(bin));
  console.log(`${label} encode: len=${len}: ${encodeTime}ms`);
  const decodeTime = run(() => decode(str));
  console.log(`${label} decode: len=${len}: ${decodeTime}ms`);
}

const lens = [8 << 10, 1 << 20, 16 << 20];
for (const len of lens) {
  runBench("jsbase64", len,
      Base64.fromUint8Array,
      Base64.toUint8Array);
  runBench("googcrypt", len,
      goog.crypt.base64.encodeByteArray,
      goog.crypt.base64.decodeStringToUint8Array);
}
I just tested this. Indeed, there's a 50x speed difference in Node.js, but in the browser it's much more modest: only a factor of 2 or so. This is because js-base64 "cheats" by using Buffer to call directly into the native C++ base64 implementation. That is of course impossible in the browser, and since Closure Library only officially supports the browser use case, it's unlikely we'd look to add any special optimizations just for Node.
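The Node fast path described above boils down to the native Buffer codec; roughly something like the following (nodeEncodeByteArray is an illustrative name, not js-base64's actual internal function):

```javascript
// Roughly the fast path js-base64 takes when Node's Buffer is available:
// wrap the Uint8Array's underlying memory in a Buffer (no copy) and use
// the native C++ base64 codec.
function nodeEncodeByteArray(bytes) {
  return Buffer.from(bytes.buffer, bytes.byteOffset, bytes.byteLength)
      .toString('base64');
}
```

None of this is available in a browser, which is why the browser-side gap is far smaller than the Node-side 50x.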
Does grpc-web have a use case in the browser that's still seeing poor performance?
(reposted from grpc/grpc-web#1187)