Hey @twmb, I'm happy to share a little bit about our thinking. While you could take that approach, we felt it was important for go-nsq to enable high-volume message processing. At higher volumes, the churn of per-message goroutines becomes a significant overhead and bottleneck, so we avoid that implementation approach both in the go-nsq library and in nsqd itself.
If a client wants per-message goroutines for lower-volume handling, it's always possible to add that on top (I included one possible approach below), but it wouldn't be possible to undo per-message goroutines if go-nsq built them in.
Concurrency is also bounded by how you configure max-in-flight, which sets the maximum number of messages you can have outstanding at a time. If you effectively want individually goroutined message handling, you can set the number of concurrent handlers to match your max-in-flight setting (since you'll never need more goroutines than that). It's been my personal experience that having strong controls on concurrency is helpful in a distributed system, because it lets you appropriately limit the load placed on downstream databases.
consumer.AddHandler(nsq.HandlerFunc(func(m *nsq.Message) error {
	m.DisableAutoResponse()
	go yourHandler(m)
	return nil
}))
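For completeness, here is a sketch of the matching setup described above (handler count equal to max-in-flight). The topic name, channel name, nsqd address, and handler body are placeholders, not anything prescribed by go-nsq:

```go
package main

import (
	"log"

	nsq "github.com/nsqio/go-nsq"
)

func main() {
	cfg := nsq.NewConfig()
	cfg.MaxInFlight = 64 // upper bound on messages outstanding at once

	// "events" and "worker" are placeholder topic/channel names.
	consumer, err := nsq.NewConsumer("events", "worker", cfg)
	if err != nil {
		log.Fatal(err)
	}

	// One handler goroutine per possible in-flight message, so a message
	// never waits on a busy handler.
	consumer.AddConcurrentHandlers(nsq.HandlerFunc(func(m *nsq.Message) error {
		// ... process m.Body ...
		return nil
	}), cfg.MaxInFlight)

	if err := consumer.ConnectToNSQD("127.0.0.1:4150"); err != nil {
		log.Fatal(err)
	}
	<-consumer.StopChan
}
```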
I didn't realize that a variable number of goroutines performs worse than a constant number of goroutines all polling from a channel. I'm all for performance. 👍
AddConcurrentHandlers starts X handler goroutines, and all X of them constantly receive from one shared channel, each handling a message whenever one is available.
It seems better to have one loop reading from the channel that, when a message arrives, grabs one slot from a semaphore and fires off a goroutine via go handleMessage.