
Add the ability to disable Key RE-mapping when a server is marked as dead #108

andyberryman opened this Issue · 4 comments


In my usage pattern, having keys get re-mapped whenever a server is marked as dead is extremely bad. The reason is that I have a very distributed application, with many processes across many servers. Whenever a network issue leaves some, but not all, of my servers unable to communicate with one or more of the cache servers, different processes end up with different views of the "live" server pool. As a result, my cache gets into a mess, with the same keys existing on multiple servers with different values as updates come in. I'd rather force a cache miss or a failure than have the keys get re-mapped.
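The failure mode described above can be sketched with a toy hash-based locator (illustrative only, not Enyim's actual algorithm): when two clients hold different views of the live pool, the same key resolves to different servers. All names here (`locate`, the node lists) are hypothetical.

```python
import hashlib

def locate(key, live_nodes):
    """Map a key to a node by hashing it against the node list.
    A stand-in for a client's node locator, for illustration only."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return live_nodes[h % len(live_nodes)]

all_nodes = ["cache1", "cache2", "cache3"]

# Client A can reach every server; client B has marked cache2 dead
# and re-mapped its keys across the two nodes it can still see.
view_a = all_nodes
view_b = ["cache1", "cache3"]

for key in ("user:42", "session:9", "cart:7"):
    a, b = locate(key, view_a), locate(key, view_b)
    if a != b:
        print(f"{key}: client A uses {a}, client B uses {b}")
```

Any key that the two views place on different nodes will be written twice, with each copy drifting independently as updates land on it.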


If anyone out there is reading this: after spending some more time looking at it, I think all I need to do is implement my own node locator object that ignores the dead servers. The easiest route seems to be to change DefaultNodeLocator from a sealed class to an inheritable one, derive from it, and override just the Locate(string key) method. Thoughts?


Just curious: what did you end up doing on this?


I implemented my own node locator within the Enyim source code and forced it to be used instead of the default one. Basically, I made it return null if the server was in the dead pool rather than allowing the re-map. It's kind of a royal pain, though, because it means I'll have to maintain this change whenever I upgrade versions. I haven't been very happy with this driver overall for an enterprise-level usage pattern, but I'm making do.
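The approach described above (pin key ownership to the full configured pool and return null for dead owners, forcing a miss instead of a re-map) can be sketched as follows. This is a language-neutral illustration, not Enyim's actual IMemcachedNodeLocator API; the class and method names are hypothetical.

```python
import hashlib

class PinnedNodeLocator:
    """Sketch of a locator that never re-maps: keys are hashed against
    the FULL configured node list, and if the chosen node is currently
    marked dead we return None (a forced cache miss) instead of
    redirecting the key to a live node."""

    def __init__(self, all_nodes):
        self.all_nodes = list(all_nodes)  # full, stable pool
        self.dead = set()                 # nodes currently marked dead

    def mark_dead(self, node):
        self.dead.add(node)

    def mark_alive(self, node):
        self.dead.discard(node)

    def locate(self, key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        node = self.all_nodes[h % len(self.all_nodes)]
        # Ownership never moves; a dead owner simply means a miss.
        return None if node in self.dead else node
```

Because the mapping is computed from the stable pool, every client resolves a given key to the same owner regardless of which servers it can currently reach; the only disagreement possible is miss versus hit, never divergent writes.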


I'm very interested in this effort as well. I tried implementing my own node locator (see the linked Gist) and verified that it was being called, but I did not see the expected behavior of all traffic going to the first node in my list, which is what I coded it to do.
