Document size limit to be increased to 16Mb? #25
Hello,
Is it possible to increase the document size limit from 1Mb to 16Mb, like MongoDB has? If not, what is the complication?
Thanks

Comments
Hello @sherry-ummen! It's easy to change this limit from 1Mb to 16Mb; you just need to change it here: https://github.com/mbdavid/LiteDB/blob/master/LiteDB/Document/BsonDocument.cs#L14 But the question is: why are your documents so big? I think 1Mb is already a really big document, and I always try to keep mine under 100Kb.
Take a look at the MongoDB documentation about data modeling. All MongoDB data modeling concepts are valid for LiteDB.
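For context, the change being pointed at is a single compile-time constant in BsonDocument.cs, so raising the limit means editing the LiteDB source and rebuilding; it is not a runtime option. The constant name and value below are assumptions for illustration, not a verbatim copy of line 14 in any particular version:

```csharp
// Sketch only: check BsonDocument.cs#L14 in your LiteDB version for the real
// constant name and expression; these are assumptions.
public partial class BsonDocument
{
    // Current cap: roughly 1Mb per serialized document.
    public const int MAX_DOCUMENT_SIZE = 1 * 1024 * 1024;

    // Raising the limit to ~16Mb would mean changing it to something like:
    // public const int MAX_DOCUMENT_SIZE = 16 * 1024 * 1024;
}
```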
Thanks Mauricio. Yes, the document is text and it's big: basically data related to some graphical objects. It's legacy code that generates these big objects, so it's very difficult to change that behavior. Why is it slow? Is it the serializer that is slow? We are currently using MongoDB, and if the document size is more than 16MB we store it as a blob. But we want an embedded database, and LiteDB suits us best.
There is no problem serializing/deserializing big documents. LiteDB has FileStorage (like MongoDB's GridFS), which works by splitting content into separate documents. To store big files, LiteDB splits the content into 1MB chunks and stores them one at a time. After each chunk, LiteDB clears its cache to avoid using too much memory.
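As an illustration of that suggestion, routing an oversized payload through FileStorage instead of storing it as one BSON document might look roughly like this. This is a sketch: the file paths and id are made up, and the exact Upload/OpenRead overloads differ a bit between LiteDB versions.

```csharp
using System.IO;
using LiteDB;

// Sketch: let FileStorage chunk a large payload instead of hitting the
// per-document size limit. Ids and paths below are illustrative only.
using (var db = new LiteDatabase(@"mydata.db"))
{
    var fs = db.FileStorage;

    // Upload: LiteDB splits the stream into chunks and stores them one at a time.
    using (var input = File.OpenRead(@"C:\Temp\big-object.bin"))
    {
        fs.Upload("$/graphics/big-object-01", "big-object.bin", input);
    }

    // Read it back as a stream when needed.
    using (var file = fs.OpenRead("$/graphics/big-object-01"))
    using (var output = File.Create(@"C:\Temp\big-object-copy.bin"))
    {
        file.CopyTo(output);
    }
}
```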
OK, so reading all the data pages is the problem? Then that should not be an issue with an SSD? And would memory-mapped I/O help?
You will not avoid reading all pages if your document is big and you need to read all of it. For better performance, an SSD disk is great, and so is plenty of RAM. Reading 16Mb documents is not a big issue if you read one or two documents at a time. If you have many, I recommend "closing" and "re-opening" the database. I have plans (it's on my todo list) to implement a better cache service that automatically clears unused cache pages, which would avoid the need to close/re-open the database.
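To make the close/re-open workaround concrete, a rough sketch is below. The collection name "objects", the int ids, and the batch size of two are all assumptions chosen for illustration.

```csharp
using System.Collections.Generic;
using System.Linq;
using LiteDB;

// Sketch: re-create the LiteDatabase instance between small batches of large
// reads so cached pages are released, per the workaround described above.
public static class LargeDocumentReader
{
    public static void ReadInBatches(string databasePath, IEnumerable<int> ids)
    {
        const int batchSize = 2; // "one or two documents each time"

        foreach (var batch in ids.Chunk(batchSize)) // Enumerable.Chunk requires .NET 6+
        {
            using (var db = new LiteDatabase(databasePath)) // fresh page cache per batch
            {
                var col = db.GetCollection<BsonDocument>("objects");
                foreach (var id in batch)
                {
                    var doc = col.FindById(id);
                    // ... process doc ...
                }
            }
            // Disposing the database releases cached pages before the next batch.
        }
    }
}
```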