
Saving 170k+ entries crashes process #274

Open

eL1x00r opened this issue Feb 12, 2024 · 2 comments

eL1x00r commented Feb 12, 2024

Any idea on how to increase performance for bigger scale?

marcus-pousette (Member)

Hello!

I plan to fix this issue in the coming two weeks. Basically, this issue is about bounding RAM usage. I am working on a video-on-demand solution with Peerbit, and I am starting to see similar issues when videos are long.

Two things to fix:

  1. Assuming you are using the Document store: the index currently lives in memory, but it needs to be on disk when there are many documents. One "easy" solution is to swap the index "engine" out for SQLite with OPFS storage, and that problem would be solved (see the sketch after this list).

  2. For the syncing engine, the hashes of all "heads" are kept in memory. This will eventually create a problem similar to (1) and needs to be solved in a similar way. The RAM usage due to this is most likely lower than for the document index, though.
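
A minimal sketch of what (1) and (2) could look like, assuming the official SQLite WASM build (`@sqlite.org/sqlite-wasm`) and its OPFS-backed database (which requires running in a Web Worker with COOP/COEP headers set). The table layout and the `putDocument`/`getDocument` helpers are hypothetical illustrations, not Peerbit APIs:

```ts
// Runs inside a Web Worker; the OPFS VFS needs a worker context and
// cross-origin isolation (COOP/COEP) so SharedArrayBuffer is available.
import sqlite3InitModule from '@sqlite.org/sqlite-wasm';

const sqlite3 = await sqlite3InitModule();

// OpfsDb persists the database file in the Origin Private File System,
// so rows live on disk instead of in an in-memory map.
const db = new sqlite3.oo1.OpfsDb('/peerbit-index.sqlite3');

// Hypothetical schema: one table for the document index (fix 1) and
// one for the syncing engine's head hashes (fix 2).
db.exec(`
  CREATE TABLE IF NOT EXISTS documents (
    id   TEXT PRIMARY KEY,
    body TEXT NOT NULL
  );
  CREATE TABLE IF NOT EXISTS heads (
    hash TEXT PRIMARY KEY
  );
`);

// Index a document; RAM stays bounded no matter how many entries are saved.
function putDocument(id: string, body: unknown): void {
  db.exec({
    sql: 'INSERT OR REPLACE INTO documents (id, body) VALUES (?, ?)',
    bind: [id, JSON.stringify(body)],
  });
}

// Look a document up by primary key without scanning an in-memory index.
function getDocument(id: string): unknown | undefined {
  const rows = db.exec({
    sql: 'SELECT body FROM documents WHERE id = ?',
    bind: [id],
    returnValue: 'resultRows',
  });
  return rows.length ? JSON.parse(rows[0][0] as string) : undefined;
}

// Record a head hash on disk instead of keeping the full set in memory.
function addHead(hash: string): void {
  db.exec({
    sql: 'INSERT OR IGNORE INTO heads (hash) VALUES (?)',
    bind: [hash],
  });
}
```

With this shape, memory usage is roughly constant with respect to the number of stored entries, and SQLite's page cache handles hot rows; the trade-off is that every lookup goes through the WASM/OPFS layer, which is slower than a plain in-memory map.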

eL1x00r (Author) commented Feb 13, 2024

fire, keep me updated!
