Buffer repeated messages #87

helix opened this Issue · 8 comments

5 participants


It would be nice to have some kind of buffer which groups identical messages together and counts them.

For example, on a production webserver, if the homepage generates an error and the mail handler is in use, the operations team would receive as many emails as there are requests.

It would be nicer to receive only one mail for the same error every N occurrences.

This is different from the BufferHandler, because the BufferHandler actually outputs every log message, even identical ones, which results in a HUGE mail with too much noise. I only need the relevant information once, not repeated N times.
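The grouping-with-counts idea could be sketched roughly as follows (a minimal, language-agnostic illustration, not Monolog code; the class and method names are made up for the example):

```python
from collections import Counter

class DedupBuffer:
    """Hypothetical sketch: collect log messages, group identical ones,
    and flush a single summary line per distinct message with its count."""

    def __init__(self) -> None:
        self.counts = Counter()

    def handle(self, message: str) -> None:
        # Instead of storing every record, just count occurrences per message.
        self.counts[message] += 1

    def flush(self) -> list:
        # One line per distinct message, annotated with how often it occurred,
        # so a single mail carries the relevant information once.
        summary = [f"{msg} (x{n})" for msg, n in self.counts.items()]
        self.counts.clear()
        return summary

buf = DedupBuffer()
for _ in range(3):
    buf.handle("Homepage error: DB connection failed")
buf.handle("Cache miss on /about")
print(buf.flush())
```

A mail handler flushing this buffer would then send one short summary instead of N identical entries.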


related to #34


This isn't a trivial problem, because those messages span multiple requests, so one needs to filter them across requests.


I second helix's opinion.

Jordi, what do you think about the idea of some sort of "cache", with a key derived from the message, and a value consisting of a count of occurrences and the datetime when the message was first written?

Such a cache file could be loaded at service startup and updated upon "cache expiration", according to config settings (i.e. write to this handler only after N hours have passed since the last written event). It would prevent a lot of logging, and especially overwhelming situations (lots of open file descriptors, many e-mail connection attempts, ...) on high-traffic websites.
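The expiration check described above could look something like this (a sketch only; `should_send` and the cache layout are illustrative assumptions, not an existing API, and persistence of the cache dict to a file is left out):

```python
def should_send(cache: dict, message: str, expire_secs: float, now: float) -> bool:
    """Decide whether a message should be written to the handler.

    cache maps message -> {"first_seen": timestamp, "count": occurrences}.
    A message is forwarded only on first sight, or once `expire_secs`
    have passed since it was first recorded; otherwise only its count
    is incremented and the message is suppressed.
    """
    entry = cache.get(message)
    if entry is None or now - entry["first_seen"] >= expire_secs:
        # First occurrence, or the suppression window has expired: reset and send.
        cache[message] = {"first_seen": now, "count": 1}
        return True
    # Same message within the window: count it, but stay silent.
    entry["count"] += 1
    return False
```

Because the cache outlives a single request (file, sqlite, redis, ...), this also addresses the cross-request filtering problem mentioned earlier in the thread.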

What do you think?


Couldn't this work like the FingersCrossedHandler does?


@staabm No, because the goal is to avoid repeating alert emails for separate web requests.

@Seldaek I also think using a tool like LogStash or LogEntries is a better way to achieve it than implementing this in Monolog


@Seldaek Good points indeed. What I like about Monolog is its flexibility, so that it can be used in both dev and production environments. Horizontal scaling helps with load, so having a local logger is not a concern (for now). And there are no additional tools to add; it can be used on both complex (= high-traffic) architectures and small websites. That helps a lot internally with maintenance.

Don't know if you'd consider my idea sane enough, though. ;-) I was thinking about having tuples MSG_ID => (time, msg, severity), with MSG_ID being a key generated from monolog_channel_id + handler_id + message_content, with the aim of being unique. Then, within AbstractHandler there'd be an ignore_within_mins variable. If null (the default value), it would work as it does now. Otherwise it would load the proper key from the "cache" (json, sqlite, redis, memcache, ...) and compare the time the same message was last sent against the ignore_within_mins variable. If no recent occurrence matches, process the message; else return.

The idea is to have a setIgnoreCache method, accepting the threshold (minutes) and a cacheWriterHandler, which could act as a key/value reader (through a simple getKey/putKey interface) for the aforementioned caching systems. So, if a cacheWriterHandler is set on a handler, it is taken into account before a message is considered. Otherwise it is not, and things continue to work as they do now. What do you think?
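Put together, the MSG_ID scheme plus the pluggable getKey/putKey cache could be sketched like this (a sketch under the assumptions in the comment above; `DictCache`, `msg_id`, and `handle` are hypothetical names, and a real backend such as redis or memcache would implement the same two-method interface):

```python
import hashlib
import time

class DictCache:
    """Stand-in for the proposed getKey/putKey interface."""
    def __init__(self) -> None:
        self._store = {}
    def get_key(self, key):
        return self._store.get(key)
    def put_key(self, key, value) -> None:
        self._store[key] = value

def msg_id(channel: str, handler: str, message: str) -> str:
    # Unique-ish key built from channel id + handler id + message content,
    # as proposed above.
    return hashlib.sha1(f"{channel}|{handler}|{message}".encode()).hexdigest()

def handle(cache, channel, handler, message, ignore_within_mins, now=None) -> bool:
    """Return True if the record should be processed, False if suppressed."""
    if ignore_within_mins is None:
        return True                       # default: behave exactly as today
    now = time.time() if now is None else now
    key = msg_id(channel, handler, message)
    last_sent = cache.get_key(key)
    if last_sent is not None and now - last_sent < ignore_within_mins * 60:
        return False                      # same message sent too recently
    cache.put_key(key, now)               # record send time, then process
    return True
```

Swapping DictCache for a redis- or file-backed implementation of getKey/putKey would give the cross-request suppression without changing the handler logic.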

@stof Thanks for the hint about logstash. I didn't know it; I'll definitely give it a look!


@maraspin sounds more or less ok from what I understood. I'd be happy to have a PSR Cache interface before doing this though.

@Seldaek Seldaek added the Feature label