Memory consumption optimization #14
(In reply to Francois-Guillaume Ribreau's message of Friday, April 5, 2013 at 3:36 AM.)

Hey Francois, I'm glad to hear it is working well for you! You are correct in your thinking about how the cold faulting works.

The automatic cold filter faulting is pretty simple: in the config it is possible to specify a cold interval, and any filter that is not read from or written to within that window is faulted out of memory.

The way we set up our filters at Kiip is to name them with a time-based suffix. This way, as you suggested, the filters for older periods eventually stop receiving traffic, go cold, and get faulted out of memory. This is basically the same as what you suggested, so I expect it will work well for you.

Let me know if you have any other questions.

Best Regards,
Armon Dadgar
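For context, the cold-faulting behavior is driven by the config file. A minimal sketch, assuming the INI layout shown in the bloomd README (the port and path values here are illustrative placeholders, not recommendations):

```ini
[bloomd]
tcp_port = 8673
data_dir = /var/lib/bloomd
cold_interval = 3600
```

Here `cold_interval` is the number of seconds a filter can go without reads or writes before it is considered cold and faulted out of memory.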
Thanks for your feedback! I updated my code (with filters named like .DD-MM-YYYY) and set the initial capacity to the smallest possible value. I'll keep you posted if anything weird happens. Cheers
Great, glad it worked! I would advise against just setting the initial capacity to the smallest possible value, though.
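To illustrate why a tiny initial capacity can backfire, here is a rough sketch (my own illustration, not bloomd's actual code) of how many sub-filters a scalable bloom filter stacks up, assuming capacities grow geometrically with a scale factor of 4. Every layer has to be probed on each check, so more layers mean slower lookups and extra per-filter overhead:

```python
def subfilters_needed(total_items, initial_capacity, scale_factor=4):
    """Count the stacked sub-filters a scalable bloom filter creates:
    capacities grow geometrically (c0, c0*s, c0*s^2, ...) until the
    combined capacity covers total_items."""
    count, capacity, covered = 0, initial_capacity, 0
    while covered < total_items:
        covered += capacity
        capacity *= scale_factor
        count += 1
    return count

# Undersized start: five layers to reach 10M items.
print(subfilters_needed(10_000_000, 100_000))    # → 5
# Realistically sized start: only two layers.
print(subfilters_needed(10_000_000, 5_000_000))  # → 2
```

The exact scale factor is an assumption here; the point is that an initial capacity far below the real workload forces the filter through several scaling steps it could have skipped.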
Hi,
I've been using bloomd in production since yesterday and I must say I'm impressed by its stability and low CPU consumption. You did a really good job there, congrats!
However, I do have some questions regarding memory consumption. Currently bloomd's memory consumption is constantly increasing (RES: 106M, SHR: 105M, VIRT: 243M).
Here are my bloom filters after one day.
Note that they keep growing at a nearly constant rate. And since scalable bloom filters work by adding a new sub-filter whenever the current one reaches capacity, memory consumption will increase indefinitely.
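Each sub-filter's footprint follows the standard bloom filter sizing formula, m = -n ln(p) / (ln 2)^2 bits for n items at false-positive rate p. A quick sketch (independent of bloomd's internals) of what that means for memory:

```python
import math

def bloom_bits(n, p):
    """Bits needed for a classic bloom filter holding n items at
    false-positive rate p: m = -n * ln(p) / (ln 2)^2."""
    return int(math.ceil(-n * math.log(p) / (math.log(2) ** 2)))

# About 9.6 bits (1.2 bytes) per item at a 1% false-positive rate,
# regardless of item size — roughly 1.2 MB per million items.
print(bloom_bits(1_000_000, 0.01))
```

So the growth itself is linear in the number of items stored; the question is only whether old filters can be paged out once they stop receiving traffic.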
I'm not an expert in C, but I wondered if you could update the readme with some guidance on how the "Automatically faults cold filters out of memory to save resources" feature works, so users can take advantage of it.
If I understand it correctly, since my filters will never go cold here (new data is added constantly), I thought maybe I could create filters with composite names like "f{filterid}{weekoftheyear}{year}", where "{weekoftheyear}{year}" is information extracted from every piece of data the filters test against. That way, filters holding older data could be removed from memory but would still be available just in case.
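The naming scheme above can be sketched in a few lines of Python using the ISO week number; the separator and field order here are my own guesses for illustration, not something bloomd prescribes:

```python
from datetime import date

def filter_name(filter_id, when):
    """Compose a bloomd filter name suffixed with the ISO week and year
    of the data's timestamp, so filters holding old weeks eventually
    stop being queried and can go cold."""
    iso_year, iso_week, _ = when.isocalendar()
    return "f{0}_{1:02d}{2}".format(filter_id, iso_week, iso_year)

# Items from different weeks land in different filters:
print(filter_name("clicks", date(2013, 4, 5)))   # → fclicks_142013
print(filter_name("clicks", date(2013, 4, 12)))  # → fclicks_152013
```

One caveat of any time-bucketed scheme: a membership check against "all recent data" has to query every bucket still in the window, so the bucket width trades lookup fan-out against how quickly old filters go cold.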
Is this the right approach? What do you think?