The browser encoder uses character length for strings, but the browser decoder, node encoder, and node decoder all expect byte lengths. This leads to hilarity at decode time.
Yes indeed. It's just hard to get byte lengths on the client side in a performant manner. I'll probably make a UTF-happy fork sometime by using `(new Blob([str])).size` instead of `str.length`, but I don't want binarypack to be slow.
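For what it's worth, the byte length can also be computed with a plain loop over UTF-16 code units, which avoids allocating a Blob per string. This is just a sketch (the helper name `utf8ByteLength` is mine, not part of binarypack), but it handles surrogate pairs, so astral characters come out at 4 bytes as they should:

```javascript
// Hypothetical helper: UTF-8 byte length of a JS string, no Blob needed.
function utf8ByteLength(str) {
  var bytes = 0;
  for (var i = 0; i < str.length; i++) {
    var code = str.charCodeAt(i);
    if (code < 0x80) {
      bytes += 1;                       // ASCII: 1 byte
    } else if (code < 0x800) {
      bytes += 2;                       // e.g. Latin-1 supplement: 2 bytes
    } else if (code >= 0xd800 && code <= 0xdbff) {
      bytes += 4;                       // lead surrogate: the pair is 4 bytes
      i++;                              // skip the trail surrogate
    } else {
      bytes += 3;                       // rest of the BMP: 3 bytes
    }
  }
  return bytes;
}
```

Whether this beats `str.length` enough in practice to ship by default is an open question, but it should be much cheaper than a Blob allocation per encoded string.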
I agree that performance is important, but this malfunctions in a particularly egregious way when UTF-8 data is passed, and UTF-8 is ever-present on the modern web. Notwithstanding this bug, binaryjs is an excellent library, btw.
I see what you're saying.
Any suggestions for solutions?
Or other ideas?