
possible memory leak #830

Closed
lukeSky3434 opened this issue Aug 12, 2023 · 1 comment

Comments

@lukeSky3434

Hello,

I am using the nitrite db to store and delete objects, I am also doing a lot of find operations on it. To speed up the find operations I added an index field as well. I am not using it as an in-memory db. Is the Index in memory or also on disc?

The reason why I am asking is that the Eclipse MAT shows me the following thing:

One instance of org.h2.mvstore.MVStore loaded by jdk.internal.loader.ClassLoaders$AppClassLoader @ 0x80100000 occupies 31,479,256 (39.85%) bytes. The memory is accumulated in one instance of org.h2.mvstore.cache.CacheLongKeyLIRS$Segment[], loaded by jdk.internal.loader.ClassLoaders$AppClassLoader @ 0x80100000, which occupies 30,309,896 (38.37%) bytes.
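For reference, the setup described above looks roughly like this — a minimal sketch assuming the Nitrite 3.x API; the file path, collection name, and field name are hypothetical placeholders, not taken from the actual application:

```java
import org.dizitart.no2.Cursor;
import org.dizitart.no2.Document;
import org.dizitart.no2.IndexOptions;
import org.dizitart.no2.IndexType;
import org.dizitart.no2.Nitrite;
import org.dizitart.no2.NitriteCollection;
import org.dizitart.no2.filters.Filters;

public class NitriteIndexExample {
    public static void main(String[] args) {
        // File-backed (not in-memory) database.
        Nitrite db = Nitrite.builder()
                .filePath("/tmp/app.db")   // hypothetical path
                .openOrCreate();

        NitriteCollection events = db.getCollection("events");

        // Index the field that the frequent find() calls filter on.
        if (!events.hasIndex("name")) {
            events.createIndex("name",
                    IndexOptions.indexOptions(IndexType.NonUnique));
        }

        events.insert(Document.createDocument("name", "payment"));

        // Indexed lookup.
        Cursor found = events.find(Filters.eq("name", "payment"));
        System.out.println(found.size());

        // The issue also mentions frequent deletes.
        events.remove(Filters.eq("name", "payment"));
        db.close();
    }
}
```

The sketch requires the `org.dizitart:nitrite` dependency on the classpath, so it is not runnable standalone.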

@anidotnet
Contributor

anidotnet commented Aug 12, 2023

If you are using persistent disk-based storage, then the indexes are also stored on disk.

What you are seeing in MAT comes from H2 MVStore (the underlying storage engine of Nitrite), which uses an LIRS cache. Here is the relevant section from its documentation.

Concurrent reads and writes are supported. All such read operations can occur in parallel. Concurrent reads from the page cache, as well as concurrent reads from the file system are supported. Write operations first read the relevant pages from disk to memory (this can happen concurrently), and only then modify the data. The in-memory parts of write operations are synchronized. Writing changes to the file can occur concurrently to modifying the data, as writing operates on a snapshot.

Caching is done on the page level. The page cache is a concurrent LIRS cache, which should be resistant against scan operations.

For fully scalable concurrent write operations to a map (in-memory and to disk), the map could be split into multiple maps in different stores ('sharding'). The plan is to add such a mechanism later when needed.

You can find the full documentation here.
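The `CacheLongKeyLIRS$Segment[]` retained set reported by MAT belongs to this page cache, and its size is bounded rather than growing without limit. When MVStore is opened directly, the bound is set through `MVStore.Builder.cacheSize` (in MB); whether and how Nitrite exposes this knob depends on the Nitrite version, so treat the following only as a sketch of the underlying H2 API, with a hypothetical file name:

```java
import org.h2.mvstore.MVMap;
import org.h2.mvstore.MVStore;

public class CacheSizeExample {
    public static void main(String[] args) {
        // Open an MVStore with the page cache capped at 8 MB.
        // With the default (larger) cache, MAT can legitimately show
        // tens of MB retained by the cache without any leak.
        MVStore store = new MVStore.Builder()
                .fileName("/tmp/data.mv.db")   // hypothetical path
                .cacheSize(8)
                .open();

        MVMap<Long, String> map = store.openMap("demo");
        map.put(1L, "value");
        System.out.println(map.get(1L));

        store.close();
    }
}
```

This requires the `com.h2database:h2` (or `h2-mvstore`) dependency, so it is likewise not runnable standalone.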
