Conversation

@jeanouii
Contributor

No description provided.

@cshannon
Contributor

Can you mark PRs as "draft" for work-in-progress stuff? It will make it easier to know when it's ready for review. Also, KahaDB already has a max journal length setting, so I was wondering if that could be used; it may not work, but it has been a while since I looked. I think if a record is bigger than the max length, it will just write the entire value and the file will end up larger than the configured max, so we may need to enforce a max or something. Something else: the journal file length max size can be changed between restarts, so you could have different size files.

@cshannon cshannon marked this pull request as draft November 28, 2025 19:20
@jeanouii
Contributor Author

jeanouii commented Dec 1, 2025

@cshannon sorry about that. Sure thing, I'll be more diligent.
I discovered this while debugging a test randomly failing with an out-of-memory error.

Yes, KahaDB already has a max length, and the idea is to use it to cap the record length. If the file is corrupted in a way that makes the record length field very large, we may blow up the memory. The idea here is to cap the record size at the file max size.

If your assumption is correct, then my fix won't work: it would fail even though the record was accurately written to disk in a bigger file. We need a cap on the record size in my opinion, but I'm not sure what the best approach is at the moment.
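To illustrate the idea being discussed, here is a minimal sketch of validating a record length field against the configured journal max file length before allocating a buffer for it. This is not KahaDB's actual code; the class, method name, and values are all hypothetical, assuming `maxFileLength` stands in for the `journalMaxFileLength` setting.

```java
import java.io.IOException;

// Hypothetical sketch: reject a record length read from a journal file
// before allocating a buffer for it, so a corrupted length field cannot
// trigger a huge allocation and an OutOfMemoryError.
public class RecordLengthCheck {

    // maxFileLength stands in for KahaDB's journalMaxFileLength setting;
    // the method name and signature are illustrative, not KahaDB's API.
    static int checkedRecordLength(int recordLength, long maxFileLength) throws IOException {
        if (recordLength <= 0 || recordLength > maxFileLength) {
            throw new IOException("Invalid record length " + recordLength
                    + ", exceeds journal max file length " + maxFileLength);
        }
        return recordLength;
    }

    public static void main(String[] args) throws IOException {
        long max = 32L * 1024 * 1024; // e.g. a 32 MB journal file limit
        // A sane length passes through unchanged.
        System.out.println(checkedRecordLength(1024, max));
        // A corrupted length field (e.g. random bytes read as an int) is
        // rejected before any buffer is allocated.
        try {
            checkedRecordLength(Integer.MAX_VALUE, max);
            System.out.println("no exception");
        } catch (IOException expected) {
            System.out.println("rejected");
        }
    }
}
```

As noted above, this check breaks down exactly when `journalMaxFileLength` was larger in a previous broker run than it is now: a legitimately written record could exceed the current cap.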

@mattrpav
Contributor

mattrpav commented Dec 1, 2025

I think a multiplier on the 'journalMaxFileLength' would work.

edit: Yeah, the fact that the journalMaxFileLength can be changed between restarts is a challenge.
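A sketch of the multiplier idea, with the caveat above in mind: allow records up to N times the current `journalMaxFileLength`, giving some headroom for journal files written under a larger limit in a previous run. The multiplier value and method name here are illustrative assumptions, not an existing setting.

```java
// Hypothetical sketch of a multiplier-based plausibility check on the
// record length, rather than a hard cap at journalMaxFileLength.
public class MultiplierCap {
    // Illustrative headroom factor, not an existing KahaDB configuration option.
    static final int LENGTH_MULTIPLIER = 4;

    static boolean isPlausibleRecordLength(int recordLength, long journalMaxFileLength) {
        return recordLength > 0
                && recordLength <= LENGTH_MULTIPLIER * journalMaxFileLength;
    }

    public static void main(String[] args) {
        long max = 32L * 1024 * 1024; // current 32 MB limit
        // A record somewhat over the current limit (written under an older,
        // larger limit) is still accepted; a wildly corrupt length is not.
        System.out.println(isPlausibleRecordLength(48 * 1024 * 1024, max)); // within 4x headroom
        System.out.println(isPlausibleRecordLength(Integer.MAX_VALUE, max)); // far beyond headroom
    }
}
```

The trade-off is that the check is no longer airtight: a corrupt length that happens to fall under the multiplied limit still gets through, and the multiplier has to be large enough to cover any limit used in past restarts.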
