Cache subscribers list to improve performance #1320
Comments
Another perspective: for many people the list of subscribers is effectively static and known at startup time, so caching it forever would be preferable for these people.
When using RavenDB the server/client infrastructure caches things for us by default, and for a specific query we can use the aggressive caching feature. I also like Kijana's suggestion to use the out-of-the-box push capability of RavenDB via the Changes API.
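For illustration, a minimal sketch of what aggressive caching around a subscriber query could look like; the store URL and the `Subscription` document class are assumptions, not the actual storage implementation:

```csharp
using System;
using System.Linq;
using Raven.Client;
using Raven.Client.Document;

class AggressiveCachingExample
{
    static void Main()
    {
        using (IDocumentStore store = new DocumentStore { Url = "http://localhost:8080" }.Initialize())
        // Everything inside this scope is served from the client-side cache
        // until the duration expires, without a round trip to the server.
        using (store.AggressivelyCacheFor(TimeSpan.FromMinutes(5)))
        using (var session = store.OpenSession())
        {
            var subscribers = session.Query<Subscription>().ToList();
        }
    }

    // Stand-in for however the storage models a subscription document.
    class Subscription { }
}
```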
I'm also a fan of using the default caching of the persistence layer. NHibernate has various caching capabilities, and as mentioned so does RavenDB. Don't implement it yourself.
Guys, there is nothing preventing us from using the native caches or more efficient native APIs. For example, to modify RavenDB to use the new "Changes API" all you would need to do is change the current implementation.
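A sketch of what wiring up the Changes API could look like; the document-id prefix and the handler body are invented for illustration, and the lambda `Subscribe` overload is assumed to come from the Raven client's observable extensions (or Rx):

```csharp
using System;
using Raven.Abstractions.Extensions; // assumed source of Subscribe(Action<T>)
using Raven.Client;
using Raven.Client.Document;

class ChangesApiExample
{
    static void Main()
    {
        using (IDocumentStore store = new DocumentStore { Url = "http://localhost:8080" }.Initialize())
        {
            // The server pushes a notification whenever a matching document
            // changes, so the in-memory subscriber list can be refreshed
            // without any polling.
            store.Changes()
                 .ForDocumentsStartingWith("Subscriptions")
                 .Subscribe(change => Console.WriteLine("Subscriber list changed: " + change.Id));

            Console.ReadLine();
        }
    }
}
```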
How about also adding an optional "push enabled storage" interface, so storages that support it can bypass the polling?
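For what it's worth, a sketch of what such an opt-in contract might look like; the interface name and member are entirely invented, nothing like this exists in NServiceBus:

```csharp
using System;

// Hypothetical opt-in contract: storages that can push raise this event
// when the subscriber list changes, letting the bus refresh its cache
// and skip the poller entirely.
public interface ISupportSubscriptionPush
{
    event EventHandler SubscribersChanged;
}
```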
Do we really need it? As John said, the infrastructure will only call GetSubscriberAddressesForMessage (https://github.com/NServiceBus/NServiceBus/blob/master/src/NServiceBus.Core/Unicast/Subscriptions/MessageDrivenSubscriptions/ISubscriptionStorage.cs#L31), so it is an internal matter of the subscription component to decide how to cache/refresh/poll/do nothing. Or am I missing something? .m
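To make that concrete, a minimal sketch of a cache living entirely inside the subscription component, written as a decorator over the v4-era ISubscriptionStorage shape; the decorator class and the key scheme are invented for illustration:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using NServiceBus;
using NServiceBus.Unicast.Subscriptions;
using NServiceBus.Unicast.Subscriptions.MessageDrivenSubscriptions;

// Wraps any concrete storage; callers see the same interface, but repeated
// lookups for the same message types are served from memory.
class CachingSubscriptionStorage : ISubscriptionStorage
{
    readonly ISubscriptionStorage inner;
    readonly ConcurrentDictionary<string, Address[]> cache =
        new ConcurrentDictionary<string, Address[]>();

    public CachingSubscriptionStorage(ISubscriptionStorage inner)
    {
        this.inner = inner;
    }

    public void Init()
    {
        inner.Init();
    }

    public void Subscribe(Address client, IEnumerable<MessageType> messageTypes)
    {
        inner.Subscribe(client, messageTypes);
        cache.Clear(); // the subscriber list changed, drop the cached lookups
    }

    public void Unsubscribe(Address client, IEnumerable<MessageType> messageTypes)
    {
        inner.Unsubscribe(client, messageTypes);
        cache.Clear();
    }

    public IEnumerable<Address> GetSubscriberAddressesForMessage(IEnumerable<MessageType> messageTypes)
    {
        var types = messageTypes.ToList();
        var key = string.Join(";", types.Select(t => t.ToString()));
        return cache.GetOrAdd(key, _ => inner.GetSubscriberAddressesForMessage(types).ToArray());
    }
}
```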
Let's please not add more marker interfaces!
The only issue is that we end up having two caches!
I was talking conceptually :)
This was my main concern. Also the delay: even if you use the Raven push, you would still have to wait for the poller, which feels like a suboptimal solution to me.
Agree 😄
If I may: for a production environment, adding subscribers and/or publishers is a non-trivial affair (as with most things in production), so the subscriber list should be 100% consistent across the various publishers. Polling introduces some blocking overhead for a couple of milliseconds every so often, along with a slight (low-probability) window of inconsistency. So (manual or automated) changes to the subscriber list should trigger an explicit refresh instead. This really shouldn't happen often, and the small overhead of ensuring 100% consistency is worth caching forever. The subscription management implementation used should be responsible for the caching (however it chooses to do so).
Is there a need for polling at all? Unless the list of subscribers is updated outside of the NSB framework, couldn't the cached list of subscribers be retrieved once (at startup) and then updated on each subscribe/unsubscribe message? This would get a little more complex when using the master node/distributor model, but it should work if the master node writes to the subscriber storage and forwards subscriptions to each worker node (for local caching).
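A sketch of that idea, with no poller at all; all names here are invented, and the initial contents are assumed to be loaded from the storage once at startup:

```csharp
using System.Collections.Generic;
using NServiceBus;
using NServiceBus.Unicast.Subscriptions;

// Load the full subscriber list once, then keep it current by applying
// each subscribe/unsubscribe message as it arrives.
class StartupLoadedSubscriberList
{
    readonly Dictionary<MessageType, HashSet<Address>> subscribers;

    public StartupLoadedSubscriberList(Dictionary<MessageType, HashSet<Address>> loadedAtStartup)
    {
        subscribers = loadedAtStartup;
    }

    public void OnSubscribe(Address client, MessageType messageType)
    {
        HashSet<Address> set;
        if (!subscribers.TryGetValue(messageType, out set))
        {
            subscribers[messageType] = set = new HashSet<Address>();
        }
        set.Add(client);
    }

    public void OnUnsubscribe(Address client, MessageType messageType)
    {
        HashSet<Address> set;
        if (subscribers.TryGetValue(messageType, out set))
        {
            set.Remove(client);
        }
    }

    // Publish-time lookups are answered from memory, never from storage.
    public IEnumerable<Address> SubscribersFor(MessageType messageType)
    {
        HashSet<Address> set;
        return subscribers.TryGetValue(messageType, out set) ? set : new HashSet<Address>();
    }
}
```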
There could be other reasons for sharing storage but without the distributor.
Performance tests show that we're quite good: 924 msg/s with MSMQ + in-memory persistence and 724 msg/s with Raven. This is with a local Raven, and we haven't tested SQL yet.
With the NH caching in v5 and the perf of the others being good, I'd say we can close this one?
Background
At the moment we are querying the storage every time we publish a message to retrieve the list of subscribers.
But this list does not actually change that frequently once the endpoint is started.
So it is a good candidate for some good old caching 😉
Proposed Solution
So maybe we poll every 5 secs for the first minute and then change to the configured polling interval.
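A rough sketch of that warm-up schedule, assuming a timer-based poller; the class and the RefreshSubscribers hook are invented for illustration:

```csharp
using System;
using System.Threading;

// Poll every 5 seconds for the first minute after startup, then fall
// back to the configured interval.
class SubscriberCachePoller : IDisposable
{
    readonly Timer timer;
    readonly TimeSpan configuredInterval;
    readonly DateTime startedAt = DateTime.UtcNow;

    public SubscriberCachePoller(TimeSpan configuredInterval)
    {
        this.configuredInterval = configuredInterval;
        // Create the timer disarmed, then arm it, so the callback can
        // never observe a half-constructed instance.
        timer = new Timer(_ => Poll(), null, Timeout.InfiniteTimeSpan, Timeout.InfiniteTimeSpan);
        timer.Change(TimeSpan.Zero, Timeout.InfiniteTimeSpan);
    }

    void Poll()
    {
        RefreshSubscribers();

        // Reschedule one tick at a time so the interval can change
        // once the warm-up minute has elapsed.
        var warmingUp = DateTime.UtcNow - startedAt < TimeSpan.FromMinutes(1);
        timer.Change(warmingUp ? TimeSpan.FromSeconds(5) : configuredInterval, Timeout.InfiniteTimeSpan);
    }

    void RefreshSubscribers()
    {
        // Re-query the subscription storage and swap the cached list here.
    }

    public void Dispose()
    {
        timer.Dispose();
    }
}
```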