Our implementation of baseEncode is slow, consumes 25% of the time of TransactionFrame::apply #273

Closed
gmarceau opened this issue Mar 18, 2015 · 4 comments · Fixed by #619

Comments

@gmarceau

No description provided.

@gmarceau gmarceau changed the title Our implementation of baseEncode is slow, consumes 25% of the time of TransactionFrame::apply Mar 18, 2015
@graydon
Contributor

graydon commented Mar 19, 2015

It's possible we can make this faster -- happy to try! -- but my sense is that base58 is just a remarkably slow (imo: terrible) encoding, because of the need to effectively do bignum division for each digit. The fact that we're doing such division ourselves, rather than using a bignum library, is a matter of avoiding allocation and a library dependency, at the cost of losing fancy bignum-library optimized divide operations. Unlike base64 or base[any-other-power-of-two], one can't really split the digits up and only operate on a small amount of the input at a time. Hence quadratic time :(
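
For a sense of where the time goes, here is a minimal sketch of the per-digit long division any base58 encoder has to perform (illustrative names, not the stellar-core code; leading-zero-byte handling is omitted for brevity):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Illustrative alphabet/function names -- not the stellar-core symbols.
static const char kBase58Alphabet[] =
    "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

// Each output digit costs one long-division pass over the *entire*
// remaining number: n input bytes -> O(n) digits x O(n) work per digit
// = O(n^2) overall. A power-of-two base never needs this, because its
// digits can be peeled off a few bits at a time.
std::string
base58Encode(std::vector<uint8_t> num) // big-endian bignum, consumed in place
{
    std::string digits;
    while (!num.empty())
    {
        // Divide the whole number by 58 in place; the remainder is
        // the next (least significant) base58 digit.
        uint32_t rem = 0;
        for (auto& byte : num)
        {
            uint32_t acc = (rem << 8) | byte;
            byte = static_cast<uint8_t>(acc / 58);
            rem = acc % 58;
        }
        digits.push_back(kBase58Alphabet[rem]);
        // Drop leading zero bytes so the quotient shrinks.
        while (!num.empty() && num.front() == 0)
            num.erase(num.begin());
    }
    return std::string(digits.rbegin(), digits.rend());
}
```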

I suggested before that we drop it from the internal format, store as hexenc or base64 in the db. Another option is (as with the crypto verify operations) to just make an in-memory LRU cache. It's a pure (in the mathematical sense) operation so relatively easy/harmless to cache. Will only help if the operations are repetitive though.

@gmarceau
Author

In the release build, baseEncode drops to 5% of the total run time, so it's less important than I first thought. From the usage pattern I'm seeing, the operations /are/ repetitive: create an account id, then load that account, then write to the account, and so forth, with each step encoding or decoding the same string. It does seem the cache is worth trying first.

@jedmccaleb jedmccaleb modified the milestone: Feature Freeze Mar 22, 2015
@graydon
Contributor

graydon commented Apr 7, 2015

If we do choose to go down the road of adding a few pure-function caches at the crypto level, this looks like a pleasant candidate to pull in-tree:

https://github.com/lamerman/cpp-lru-cache
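
As a hedged sketch of how that might be wired up: since the operation is pure, a wrapper can consult the cache before falling back to the slow path. The function names here are hypothetical, and the put/get/exists calls follow cpp-lru-cache's small header-only interface (treat the exact signatures as an assumption):

```cpp
#include <string>
#include <vector>
#include "lrucache.hpp" // lamerman/cpp-lru-cache, single header

// The slow encoder this wraps -- assumed to exist elsewhere.
std::string base58Encode(std::vector<uint8_t> const& bytes);

// Memoizing front-end (hypothetical name). Because baseEncode is pure,
// caching input->output pairs is harmless; it only pays off when the
// same keys recur, as gmarceau observed above. Note the static cache
// is not thread-safe as written -- a real version would need a lock.
std::string
cachedBase58Encode(std::vector<uint8_t> const& bytes)
{
    static cache::lru_cache<std::string, std::string> sCache(4096);

    std::string key(bytes.begin(), bytes.end());
    if (sCache.exists(key))
        return sCache.get(key);

    std::string encoded = base58Encode(bytes);
    sCache.put(key, encoded);
    return encoded;
}
```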

@jedmccaleb jedmccaleb modified the milestones: Feature Freeze, Production May 7, 2015
@jedmccaleb jedmccaleb modified the milestones: Production, Switchover - Q3 May 29, 2015
@MonsieurNicolas
Contributor

From my observation, switching to binhex gives a 15% total speed improvement (requests per second).
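
For contrast with the quadratic base58 loop sketched earlier, hex encoding touches each byte exactly once and independently, which is the structural reason it wins. A minimal sketch with illustrative names, not the encoder the project actually adopted:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// One linear pass, two output characters per input byte, no bignum
// division anywhere -- O(n) total.
std::string
hexEncode(std::vector<uint8_t> const& bytes)
{
    static const char kHexDigits[] = "0123456789abcdef";
    std::string out;
    out.reserve(bytes.size() * 2);
    for (uint8_t b : bytes)
    {
        out.push_back(kHexDigits[b >> 4]);
        out.push_back(kHexDigits[b & 0x0F]);
    }
    return out;
}
```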
