
Cache number parser to not create too many instances #75

Merged · 2 commits · Apr 4, 2024

Conversation

jecisc
Member

@jecisc jecisc commented Apr 4, 2024

This change caches the number parser. During the import of a model, number parsers represented 5% of the objects created. With this change we create only one.

…to parse and save some memory.

On the Lucene model we allocate more than 2,000,000 number parsers.
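The caching described above can be sketched roughly as follows. This is a minimal illustration, not the actual code from this PR: the accessor name `numberParser` and the instance variable are hypothetical, and the exact `NumberParser` reuse API may differ.

```smalltalk
numberParser
	"Lazily create and cache a single NumberParser instance,
	 instead of allocating a fresh parser for every number
	 encountered during the import."
	^ numberParser ifNil: [ numberParser := NumberParser new ]
```

Every call site in the importer then asks `self numberParser` instead of `NumberParser new`, so only one instance is allocated for the whole import.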
@jecisc jecisc merged commit 311c421 into development Apr 4, 2024
2 checks passed
@jecisc jecisc deleted the numberParser branch April 4, 2024 12:16
@guillep
Contributor

guillep commented Apr 4, 2024

Did you see some performance improvement?

@jecisc
Member Author

jecisc commented Apr 4, 2024

With Seba's profiler I see that we reduce the allocated memory by almost 10% during the import of a medium-size model.

There is a small speed gain, but it's not huge: parsing represents only a small part of the import, and we also tweak the GC parameters to use a large young space (64 MB instead of 2.8 MB).

I guess that without this parameter change the difference would be bigger.

I was thinking of proposing to reuse the same parser in the Number/Integer>>#readFrom:* methods, but that would cause problems under concurrency since the NumberParser class holds state.
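The concurrency concern can be illustrated with a hypothetical sketch (the class-side `Cached` variable is invented for this example; it is not proposed code). A parser instance keeps mutable parsing state in instance variables, so a single instance shared across processes would get its state interleaved:

```smalltalk
"HYPOTHETICAL: a class-side cache shared by every caller
 of Number>>#readFrom: would look like this."
NumberParser class >> cached
	^ Cached ifNil: [ Cached := self new ]

"Two Pharo processes racing on the shared instance:"
[ Number readFrom: '123' readStream ] fork.
[ Number readFrom: '4.56' readStream ] fork.
"Both forks would drive the SAME parser object, whose mutable
 instance variables (source stream, base, partial results, ...)
 would be overwritten mid-parse, producing wrong numbers."
```

Caching one parser per importer instance, as done in this PR, avoids that problem because each import runs with its own parser.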
