Added Sentinel support #46
Conversation
@@ -110,12 +110,49 @@ If a set of listeners is provided to the Brando actor when it is created , it wi

Currently, the possible messages sent to each listener include the following:

* `Connecting`: When a creating a TCP connection.
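The listener notifications in the diff above could be modeled as a small sealed hierarchy. This is only a sketch: `Connecting` appears in the diff, but the other event names are assumptions, not Brando's actual API.

```scala
// Sketch of the connection state messages a listener might receive.
// Only `Connecting` is shown in the diff; the rest are assumed names.
sealed trait ConnectionEvent
case object Connecting extends ConnectionEvent    // creating a TCP connection
case object Connected extends ConnectionEvent     // connection established
case object Disconnected extends ConnectionEvent  // connection lost

def describe(event: ConnectionEvent): String = event match {
  case Connecting   => "creating a TCP connection"
  case Connected    => "connection established"
  case Disconnected => "connection lost"
}
```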
When a creating ?
Typo, I'll fix that!
listeners: Set[ActorRef],
connectionTimeout: FiniteDuration,
connectionRetryDelay: FiniteDuration,
connectionHeartbeatDelay: Option[FiniteDuration]) extends Brando("", 0, database, auth,
It feels like host and port are not necessary for BrandoSentinel. They should not be in the super class Brando.
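The separation suggested here could look roughly like this: connection-level settings shared by both clients, with host and port confined to the direct client, while the Sentinel-backed client discovers the master's address at runtime. All class names below are hypothetical, not Brando's actual API.

```scala
import scala.concurrent.duration._

// Settings shared by both the direct and the Sentinel-backed client.
final case class ConnectionSettings(
  connectionTimeout: FiniteDuration = 2.seconds,
  connectionRetryDelay: FiniteDuration = 1.second,
  connectionHeartbeatDelay: Option[FiniteDuration] = None)

// Only a direct client is constructed with a concrete host and port.
final case class DirectEndpoint(host: String, port: Int, settings: ConnectionSettings)

// A Sentinel-backed client would be constructed with the master's name and
// resolve host/port at runtime, instead of inheriting placeholder values
// ("" and 0) from a shared superclass.
final case class SentinelEndpoint(masterName: String, settings: ConnectionSettings)
```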
Based on the suggestions I received, I added ConnectionSupervisor.scala to capture the common logic of Brando and BrandoSentinel. Let me know if you prefer that to the solution here:
Connection is now in a separate file
Config is pulled from the companion objects
Apply methods now take null as default instead of None
}

describe("when disconnected") {
  it("should recreate a connection using sentinel") {
Is it possible to create a test using SLAVEOF and watch it reconnect to another Redis instance?
That's a great idea. I'll see if I can force a failover in the tests
> SENTINEL failover — Force a failover as if the master was not reachable, and without asking for agreement to other Sentinels (however a new version of the configuration will be published so that the other Sentinels will update their configurations).
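As a rough, dependency-free model of what such a failover test would exercise: after `SENTINEL failover` promotes a new master, the client must re-query a sentinel (as `SENTINEL get-master-addr-by-name` would) and reconnect to the new address. The types below are illustrative only, not part of Brando.

```scala
final case class Address(host: String, port: Int)

// Minimal model of a sentinel's view of its monitored masters.
final case class SentinelView(masters: Map[String, Address]) {
  // Analogous to `SENTINEL get-master-addr-by-name <name>`.
  def masterAddr(name: String): Option[Address] = masters.get(name)
  // Analogous to the effect of `SENTINEL failover <name>`: a replica is promoted.
  def failover(name: String, promoted: Address): SentinelView =
    copy(masters = masters.updated(name, promoted))
}

// On disconnection, the client re-resolves the master before reconnecting.
def reconnectTarget(view: SentinelView, masterName: String): Option[Address] =
  view.masterAddr(masterName)
```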
Added pub/sub test for SentinelClient
Renamed Brando -> Redis; BrandoSentinel -> RedisSentinel
def disconnected: Receive = handleListeners orElse {
  case request: Request ⇒
    stash()
I think it would be good to eliminate the stashing logic from the disconnected state. If a request is made right after a disconnection from Redis and is then transmitted upon successful reconnection (or failover to another server via Sentinel), then there are two possible outcomes:
- The code calling the Redis client is still waiting for a reply and it comes back in time (good)
- The code calling the Redis client is no longer waiting for a reply thanks to a timeout and has possibly taken recovery actions (complicated)
If the recovery actions that the caller takes involve attempting to make further Redis requests, then the client could fill up with many stashed messages that will be sent all at once to the server it next connects to. In the best case this is a waste of resources, as the calling code will have already continued operating as though its last N requests had entirely failed (likely generating more requests that the client will queue up). In the worst case this could lead to invalid data being written to Redis keys - i.e. where shared keys operated upon by other programs have been read and updated from other hosts which have not suffered a Redis disconnection.
Since programs which use this Redis client can't assume that timed out writes haven't been written (in the case where the request made it to the Redis server but the response was too slow) and also can't assume that timed out writes will be written, they have to be written to safely handle both of these cases. Stashing Redis commands for retransmission during disconnections could make reasoning about this very complicated, as the stashed message queue can grow to an arbitrary size. If the stashed messages were queued for a limited time (proportional to the time the user of the Redis client would wait for a response) then stashing the messages would likely not leave any extra room for surprising behaviour. But it would be even simpler to strip out the stashing behaviour entirely, since a program that is correctly using this client will have to be able to reason about the state of Redis assuming that timed out commands may or may not have been carried out and recover appropriately in either case.
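The time-limited variant mentioned above could be sketched without any actor machinery: each pending request carries a deadline, and expired entries are dropped rather than replayed on reconnect. This is an illustration of the idea, not Brando's implementation.

```scala
// A stashed request paired with the latest time at which replaying it
// would still be useful to the caller.
final case class Pending[A](request: A, deadlineMillis: Long)

// On reconnect, replay only the requests whose callers may still be waiting;
// drop the rest instead of flooding the new connection.
def replayable[A](stash: List[Pending[A]], nowMillis: Long): List[A] =
  stash.collect { case Pending(request, deadline) if deadline > nowMillis => request }
```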
I wasn't considering dropping the stashing as part of this CL, but I'll be more than happy to do it. This should be handled by the application, not the library.
Big milestone!
Dropping the stash should be documented. It took me a few hours to find out why Brando behaves so differently from version 2.0.2, which I had used until now. I was not expecting this. It would be nice to propose how to quickly adapt to this new behavior without a significant application rewrite. Thanks!
Added SentinelClient
Added BrandoSentinel
Connection is now in a separate file
Config is pulled from the companion objects