Currently, I think any time the user triggers a citation pattern, the package re-reads and parses the entire set of bibfiles. If your bibliography happens to be large (~10,000 records), it takes several seconds just to show the autocomplete popup, and during that time the editor is entirely unresponsive. Because nothing is persisted between lookups, every reference autocompletion against a large bibfile grinds the program to a halt.
I think a way around this might be to do an initial read of the bibfiles into a lightweight database like SQLite. Then, for any subsequent lookup, we check whether the database is out of date; if it isn't, we run the lookup directly against the database, which should be very fast. Only in the case where the user has modified a bibfile during their session would we need to rebuild the database.
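To make this concrete, here's a rough sketch of what I have in mind, assuming `better-sqlite3` as the binding. `parseBibFile` stands in for the package's existing parser, and the schema, cache path, and field names are just placeholders:

```ts
import Database from 'better-sqlite3';
import * as fs from 'fs';

interface BibEntry { key: string; title: string; author: string; }

const db = new Database('/tmp/bib-cache.sqlite');
db.exec(`
  CREATE TABLE IF NOT EXISTS entries (key TEXT PRIMARY KEY, title TEXT, author TEXT, file TEXT);
  CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, mtime REAL);
`);

// True if the cached copy of `path` is missing or older than the file on disk.
function isStale(path: string): boolean {
  const row = db.prepare('SELECT mtime FROM files WHERE path = ?').get(path) as
    { mtime: number } | undefined;
  return !row || row.mtime !== fs.statSync(path).mtimeMs;
}

// Re-parse one bibfile and replace its rows in the cache, all in one transaction.
function rebuild(path: string, parseBibFile: (p: string) => BibEntry[]): void {
  const entries = parseBibFile(path); // the package's existing parser, assumed
  const insert = db.prepare('INSERT OR REPLACE INTO entries VALUES (?, ?, ?, ?)');
  const tx = db.transaction(() => {
    db.prepare('DELETE FROM entries WHERE file = ?').run(path);
    for (const e of entries) insert.run(e.key, e.title, e.author, path);
    db.prepare('INSERT OR REPLACE INTO files VALUES (?, ?)')
      .run(path, fs.statSync(path).mtimeMs);
  });
  tx();
}

// Fast prefix lookup for the autocomplete popup.
function lookup(prefix: string): BibEntry[] {
  return db.prepare('SELECT key, title, author FROM entries WHERE key LIKE ? LIMIT 50')
    .all(prefix + '%') as BibEntry[];
}
```

With an index on `entries.key` (the primary key gives us that for free here), a prefix query like this should return in well under a millisecond even at ~10,000 records, versus re-parsing everything on each keystroke.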
@Focus Does this idea sound good to you? If so, I'm happy to start working on a pull request. Might take me a bit as I'm new to SQLite, but I think the performance benefits will make it orders of magnitude faster for citation lookups.
I have thought about something like this. The big problem is that you have to watch the file, as someone might add things using another editor/software etc. If you think you can handle these problems then by all means give it a go.
@Focus I think we can use the npm package chokidar to watch any bibfiles for changes. Then, if there's a change, we asynchronously rebuild the SQLite database.
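A rough sketch of the wiring, assuming the `rebuild()` helper from the earlier sketch is wrapped to run asynchronously:

```ts
import chokidar from 'chokidar';

// Hypothetical wiring: `rebuild` is the cache-refresh helper sketched above,
// wrapped so it runs off the hot path (e.g. deferred or in a worker).
function watchBibFiles(paths: string[], rebuild: (path: string) => Promise<void>): void {
  const watcher = chokidar.watch(paths, {
    ignoreInitial: true,    // the initial read already populated the cache
    awaitWriteFinish: true, // wait until the external editor finishes writing
  });
  watcher.on('change', (path) => {
    // Rebuild in the background so the next autocompletion never blocks on parsing.
    rebuild(path).catch((err) => console.error('bib cache rebuild failed:', err));
  });
}
```

The `awaitWriteFinish` option should also help with editors that write files in multiple chunks, so we don't rebuild against a half-written bibfile.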