Our implementation of `baseEncode` is slow, consumes 25% of the time of `TransactionFrame::apply` #273

Comments
It's possible we can make this faster -- happy to try! -- but my sense is that base58 is just a remarkably slow (imo: terrible) encoding, because of the need to effectively do bignum division for each digit. The fact that we're doing such division ourselves, rather than using a bignum library, is a matter of avoiding allocation and a library dependency, at the cost of losing fancy bignum-library optimized divide operations. Unlike base64 or base[any-other-power-of-two], one can't really split the digits up and operate on only a small amount of the input at a time. Hence quadratic time :(

I suggested before that we drop it from the internal format and store as hexenc or base64 in the db. Another option is (as with the crypto verify operations) to just make an in-memory LRU cache. It's a pure (in the mathematical sense) operation, so it's relatively easy and harmless to cache. It will only help if the operations are repetitive, though.
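To illustrate the quadratic behavior described above, here is a minimal sketch of the standard base58 encoding algorithm (the name `base58Sketch` is illustrative, not stellar-core's actual `baseEncode`). Each output digit requires one long-division pass over all remaining input bytes, so n bytes cost roughly O(n) passes of O(n) work each:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

static const char* kBase58Alphabet =
    "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

// Sketch only: shows why base58 is quadratic, not how stellar-core does it.
std::string
base58Sketch(std::vector<uint8_t> data)
{
    std::string out;

    // Leading zero bytes encode as leading '1' characters.
    size_t leadingZeros = 0;
    while (leadingZeros < data.size() && data[leadingZeros] == 0)
        ++leadingZeros;

    size_t start = leadingZeros;
    while (start < data.size())
    {
        // One long-division pass over all remaining bytes: O(n) work
        // per emitted digit, and O(n) digits overall => O(n^2) total.
        uint32_t remainder = 0;
        for (size_t i = start; i < data.size(); ++i)
        {
            uint32_t acc = remainder * 256 + data[i];
            data[i] = static_cast<uint8_t>(acc / 58);
            remainder = acc % 58;
        }
        out.push_back(kBase58Alphabet[remainder]);
        if (data[start] == 0)
            ++start; // quotient shrank by one byte
    }
    out.append(leadingZeros, '1');

    // Digits were produced least-significant first; reverse them.
    return std::string(out.rbegin(), out.rend());
}
```

The inner division loop is the part that cannot be skipped or parallelized per digit, which is exactly what power-of-two bases avoid.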
In the release build,
If we do choose to go down the road of adding a few pure-function caches at the crypto level, this looks like a pleasant candidate to pull in-tree:
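The pure-function cache idea above can be sketched as a small bounded LRU memo table keyed on the input bytes. This is a minimal illustration, not the candidate library referred to in the comment; all names are hypothetical:

```cpp
#include <cassert>
#include <list>
#include <string>
#include <unordered_map>

// Minimal LRU cache for memoizing a pure function such as an encoder.
// Most-recently-used entries live at the front of mOrder.
class LruCache
{
    size_t mCapacity;
    std::list<std::pair<std::string, std::string>> mOrder;
    std::unordered_map<std::string, decltype(mOrder)::iterator> mIndex;

  public:
    explicit LruCache(size_t capacity) : mCapacity(capacity) {}

    template <typename F>
    std::string
    getOrCompute(std::string const& key, F&& compute)
    {
        auto it = mIndex.find(key);
        if (it != mIndex.end())
        {
            // Hit: move the entry to the front, return the cached value.
            mOrder.splice(mOrder.begin(), mOrder, it->second);
            return it->second->second;
        }
        // Miss: compute once, insert at the front, evict the
        // least-recently-used entry if over capacity.
        mOrder.emplace_front(key, compute(key));
        mIndex[key] = mOrder.begin();
        if (mOrder.size() > mCapacity)
        {
            mIndex.erase(mOrder.back().first);
            mOrder.pop_back();
        }
        return mOrder.front().second;
    }
};
```

Because the encoding is pure, a cache like this is safe to drop in front of it; as noted above, it only pays off when the same inputs recur.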
From my observation, switching to binhex gives a 15% total speed improvement (requests per second).
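Hex encoding avoids the division entirely, which is where that speedup comes from: each input byte maps to two output characters independently, so the whole encode is a single linear pass. A minimal sketch (`hexSketch` is an illustrative name, not an existing API):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Hex encoding for contrast with base58: O(1) table lookups per byte,
// O(n) total, and trivially splittable.
std::string
hexSketch(std::vector<uint8_t> const& data)
{
    static const char* kDigits = "0123456789abcdef";
    std::string out;
    out.reserve(data.size() * 2);
    for (uint8_t b : data)
    {
        out.push_back(kDigits[b >> 4]);
        out.push_back(kDigits[b & 0x0F]);
    }
    return out;
}
```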