Queue Length #45

Closed

chrismoos opened this issue Jan 18, 2012 · 6 comments

Comments

@chrismoos

Hi,

If I have a lot of events, fnordmetric can't keep up and the queue length keeps growing until Redis eventually claims all the memory on the box (queue length of 8 million). I'm not sure the queue is the only thing to blame for Redis taking up so much memory; maybe the data stored for the various metrics contributes too? (I'm not too sure how everything is stored.)

Redis is at 5.4G; here is the last log line from fnordmetric:

events_received: 25300317, events_processed: 17397513, queue_length: 7901866

@kazjote
Contributor

kazjote commented Jan 19, 2012

All events are stored as separate values in Redis (am I right?). It is quite possible that you are simply running out of memory.

You could try decreasing the :event_data_ttl parameter as shown here: https://github.com/paulasmuth/fnordmetric/blob/master/doc/full_example.rb#L635 . By default it is set to one month, but note that it only takes effect for newly added events.
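For example, something along these lines (a minimal sketch modeled on the linked full_example.rb; the one-week value is an arbitrary choice, not a recommendation):

```ruby
require "fnordmetric"

# Shorten how long raw event data is kept in Redis. The default is one
# month (3600*24*30); one week here is an arbitrary choice for illustration.
FnordMetric.options = {
  :event_data_ttl => 3600 * 24 * 7
}
```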

@chrismoos
Author

The TTL isn't a bad idea, but I run out of memory within hours, so I don't think it's going to help.

@kazjote
Contributor

kazjote commented Jan 19, 2012

You can experiment and set it to 5 minutes :) Do you use uniq gauges and sessions?
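(For context, "uniq" gauges are the ones declared with :unique => true; they track per-session data in Redis to count distinct users, so they can add memory on top of the raw event queue. A sketch of such a gauge, with the namespace, gauge name, and tick invented here:)

```ruby
FnordMetric.namespace :myapp do
  # A "uniq" gauge: counts distinct sessions per tick. The gauge name and
  # daily tick are illustrative; :unique => true is the part being asked about.
  gauge :unique_visits_daily, :tick => 3600 * 24, :unique => true
end
```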

@JohnMurray
Contributor

Have you tried running multiple instances? If you run multiple instances on the same machine (even with the same configuration), the new instance will notice that a web-server interface is already running and just start up as another worker process.

I'm working on a commit now that will allow Fnordmetric to spawn off multiple workers via multiple processes.
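Until that lands, one low-tech way to get extra workers is to launch the same script several times. A sketch of a launcher (this is not FnordMetric API; my_metrics.rb is a hypothetical stand-in for your existing FnordMetric script):

```ruby
# Hypothetical launcher: start four copies of the same FnordMetric script.
# The first instance binds the web UI; per the comment above, each later
# instance should detect that and come up as an extra worker process.
WORKER_COUNT = 4
pids = WORKER_COUNT.times.map { Process.spawn("ruby", "my_metrics.rb") }
pids.each { |pid| Process.wait(pid) }
```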

@catz

catz commented Mar 4, 2012

You should definitely run more workers and decrease the event_data_ttl parameter. I run several workers via self.embedded (thin), monitored by god; see the sketch below. My queue is always 0 now, at a flow of about 2000 events per 3 seconds.

IMO, 8 million unprocessed events is a totally unrealistic backlog.
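For reference, a god watch along these lines might look like the following sketch (the script path and worker count are assumptions, not details from catz's setup):

```ruby
# god config sketch: keep two fnordmetric worker processes alive.
# The script path is hypothetical; point it at your own FnordMetric script.
2.times do |i|
  God.watch do |w|
    w.name  = "fnordmetric-worker-#{i}"
    w.start = "ruby /srv/app/fnordmetric_app.rb"
    w.keepalive
  end
end
```

A config like this would be loaded with `god -c fnordmetric.god`, and god restarts any worker that dies.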

@chrismoos
Author

Yeah, seems like more workers and a shorter TTL are the way to go; going to try this out. Thanks, everyone.
