API proposal: pooled / leased connections as a secondary API #886
Usage idea, or maybe just a one-step version of that, with something like:
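A rough sketch of the shape being floated; names like `LeaseAsync` and `ILeasedDatabase` are borrowed from later in this thread, and the one-step `WithLeaseAsync` is purely an assumption for illustration, not a committed API:

```csharp
// Two-step shape (hypothetical): explicitly lease a dedicated connection,
// use it, and Dispose() to hand it back.
using (ILeasedDatabase leased = await muxer.LeaseAsync(TimeSpan.FromSeconds(5)))
{
    await leased.StringSetAsync("some:key", "value");   // normal commands still work
    // ...plus whatever connection-blocking / connection-stateful commands the lease is for
}

// One-step shape (hypothetical): the library wraps lease + return around a delegate.
await muxer.WithLeaseAsync(
    async db => await db.StringSetAsync("some:key", "value"),
    TimeSpan.FromSeconds(5));
```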
|
For clarity: is the concern mainly client (pipeline) blocking operations due to large load, or things that are meant to be server-blocking (atomic), the latter of which are going to block due to the nature of redis anyway? I'm assuming all the wins are in the former, in which case: ...maybe? But I see less utility for it. That being said, most of our direct concerns in this area (around connections) at Stack fall into:
I'm not sure how it'd help the second case, but a similar approach to our "bulky" could natively handle the former. But, counter to that use case, is it worth automatic control and generation of bulk connections over our explicit usage today? I'm leaning towards "it's not", due to increased complexity and uncertain expectations vs. today. A single "bulky" on-demand connection (in addition to infrastructure and subscription) could satisfy that, but it's such a narrow use case... again, worth it? I think the biggest issue I have is this changing from a very predictable 2 connections to ... I'm not sure I helped here, but those are my current thoughts after a first pass. |
@NickCraver the motivation isn't data volume / load related; it is about semantics. There are a good few operations that are designed to be connection-blocking, so we don't expose them today. Similarly, there are some |
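For a concrete sense of why those commands are kept off the shared connection: a blocking command parks the whole socket until it returns, so on the multiplexer it would stall every other caller's traffic, whereas on a dedicated socket it only blocks that lease. A sketch, reusing the hypothetical lease shape above and sending the raw command via `ExecuteAsync`:

```csharp
// BLPOP blocks the connection until an item arrives or the timeout expires.
// On the shared multiplexer that would hold up everyone else's commands;
// on a leased (dedicated) socket only this lease waits. The lease API is hypothetical.
using (ILeasedDatabase leased = await server.LeaseAsync(TimeSpan.FromSeconds(30)))
{
    // Wait up to 10 seconds for an item on the "jobs" list (raw command for illustration).
    RedisResult item = await leased.ExecuteAsync("BLPOP", "jobs", 10);
    if (!item.IsNull)
    {
        // BLPOP returns [key, value]
        var parts = (RedisResult[])item;
        Console.WriteLine($"popped {parts[1]} from {parts[0]}");
    }
}
```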
My opinion is that this API is unnecessary. PS: yes, I can still shoot myself in the foot by passing an IDatabaseAsync from a "special multiplexer" into ordinary code, but that is OK with me. |
@mgravell I'm not sure I understand the intent of where you're picturing the leasing/blocking. It's close in concept to a transaction (IMO), or (the way I read it) we'd be blocking every command on the pipe behind that
|
@NickCraver the leased connection here is a dedicated socket - it is a leased redis connection, essentially; "interactive" only (no subscription functionality); we would hand it back to some notional pool when disposed, etc (so from a redis perspective: it stays alive) |
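A minimal sketch of that lifecycle, assuming a hypothetical pool (none of these types exist in StackExchange.Redis today; this only illustrates the dispose-returns-to-pool idea):

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

// Hypothetical: a lease wraps a dedicated socket and, on Dispose, hands it back to a
// pool instead of closing it, so from the Redis server's perspective the client stays alive.
internal sealed class SocketLeasePool
{
    private readonly ConcurrentBag<Socket> _idle = new ConcurrentBag<Socket>();
    private readonly Func<Socket> _connect;

    public SocketLeasePool(Func<Socket> connect) => _connect = connect;

    public SocketLease Lease()
        => new SocketLease(_idle.TryTake(out var socket) ? socket : _connect(), this);

    internal void Return(Socket socket) => _idle.Add(socket);
}

internal sealed class SocketLease : IDisposable
{
    private readonly SocketLeasePool _pool;
    public Socket Socket { get; }

    internal SocketLease(Socket socket, SocketLeasePool pool)
    {
        Socket = socket;
        _pool = pool;
    }

    // "Hand it back to some notional pool when disposed": the socket stays open.
    public void Dispose() => _pool.Return(Socket);
}
```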
Okay, so if I do this:

```csharp
db.StringSet("mykey", "value");

using (ILeasedDatabase leased = await server.LeaseAsync(timeout))
{
    var val = await leased.StringGetAsync("mykey");
}
```

...then? Thoughts on those kinds of things? |
It's not a "user way of thinking" — I'm writing a similar package for another database. And I think that if a user wants a separate connection from time to time, they can cache that separate connection somewhere near the main connection. And another concern: |
No, there is no conflict there;
but... at that point you've gone out of your way to introduce a problem (@NickCraver) |
Ah sorry, I should amend the example (was on mobile); I don't think the

I'm just not (yet) convinced we've gotten enough feedback wanting this to justify spending much time on it, but I don't feel strongly against it either. FWIW, some of the feedback like

My main concern is still around any sort of automatic socket pooling, which makes things less predictable. That would feel (at this particular moment) like a step backwards. Right now, failures there are failures on that multiplexer, and it's not too hard to reason about what went wrong. When 1 (or many) in a pool of several start having issues, things get messy to figure out... that's the part that scares me, given we just got sockets decently stable and are still working on hangs (which to be fair may be unrelated).

To be fair, it's entirely possible that after we're stable on v2 I'd feel a lot more confident diving into this, with a different perspective on the complexity/debug risk vs. payoff. |
Fair enough; totally agree that it isn't a priority today - happy to shelve indefinitely, I just want to open the dialogue. |
Just wanted to chime in as to why having a separate connection, even for non-blocking operation use, can sometimes make sense. At the moment the recommendation is to use a single, shared ConnectionMultiplexer across all threads. This usually works well, but I've observed these side effects in production:

1. Because all commands from all threads go to a single C# queue and TCP socket, you get "head of line blocking" behind a buffer that may be filled with more commands from Thread A (perhaps megabytes in terms of bandwidth to process) in front of the one you're wanting to transmit that is more time-sensitive, and smaller in size, from Thread B.

   Imagine for example, Thread A has written several megabytes worth of SET commands due to a background task, and Thread B has a simple GET command pending at the end of the shared muxer queue. Thread B must wait for Thread A's workload to be emptied first.

   The delays I am referring to here are milliseconds, and in one case 100s of milliseconds, due to CPU-heavy redis commands used (eg sorted set intersects), but I'm mostly pointing it out in terms of "jitter" observed for random requests.

2. I acknowledge that Redis is single threaded, and processes requests sequentially, but it is worth noting that Redis attempts to treat each client/TCP connection fairly [1].

3. Imagine we utilized two connections ("clients") to Redis instead, and Thread A is doing its bandwidth-heavy, batch-based processing on one connection, and Thread B uses another connection that is intended to be for more real-time request/response patterns.

   If both are sent at the same moment, and my assumption is the local app server's TCP framework will also do interleaving, then Redis will switch/interleave processing requests from the TCP sockets between both connections, so while partially receiving/processing Thread A's workload (the multi-megabyte SET commands), Redis will process Thread B's simple workload quickly, then switch back to continuing to process Thread A's workload.

The end result here is that Thread B gets a faster response time as its connection's workload was interleaved, on Redis's side, because it was a separate TCP connection, and it wasn't behind the workload of Thread A by sharing the same TCP socket or local C# command queue.

[1] https://redis.io/topics/clients |
Indeed. This is a scenario where we currently just spin up a second parallel multiplexer - one for large slow ops, one for fast traffic. Just mentioning that because it is a simple, pragmatic, low-effort solution to the same problems.
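A minimal sketch of that pragmatic approach using the public StackExchange.Redis API (the connection string and the way work is split between the two multiplexers are illustrative assumptions):

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

class TwoMultiplexerExample
{
    // One multiplexer for latency-sensitive traffic, one for large/slow bulk work,
    // so bulk payloads never sit in front of small, time-sensitive commands.
    private static readonly ConnectionMultiplexer Fast =
        ConnectionMultiplexer.Connect("localhost:6379");
    private static readonly ConnectionMultiplexer Bulk =
        ConnectionMultiplexer.Connect("localhost:6379");

    static async Task Main()
    {
        IDatabase fastDb = Fast.GetDatabase();
        IDatabase bulkDb = Bulk.GetDatabase();

        // Thread A style workload: large background writes go over the "bulk" connection.
        Task background = bulkDb.StringSetAsync("big:key", new string('x', 5_000_000));

        // Thread B style workload: small, time-sensitive reads go over the "fast" connection
        // and are not queued behind the multi-megabyte payload above.
        RedisValue value = await fastDb.StringGetAsync("some:key");

        await background;
        Console.WriteLine(value.HasValue ? value.ToString() : "(nil)");
    }
}
```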
|
Is this still on the table? Is there anything I can help out with? |
"yes" and "probably not yet, later: definitely", in that order.
|
Any updates? |
Motivations:

- connection-blocking operations (`BRPOPLPUSH`, etc)
- `WATCH`/`MULTI`/`EXEC`
- `Execute`; this would hopefully stop people doing dangerous things on the main connections

Counter-motivations:

- `WATCH`/`MULTI`/`EXEC`

Note:

If implemented (needs consideration), the multiplexer would take care of the socket IO, but they would be stored separately and would not participate in the multiplexed command stream. This would be in addition to the regular API and would just work inside the existing context.

Questions:

- `using`?
- `IServer`? and if so, do we need a new `GetServer` mechanism that makes this painless?

Timescales:

It won't be 2.0

Likelihood:

Thinking about it; it is tempting.
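To make the `WATCH`/`MULTI`/`EXEC` motivation concrete: those commands rely on connection-level state, which is why the multiplexed API funnels transactions through `ITransaction` rather than exposing the raw sequence. On a dedicated/leased connection the raw sequence becomes reasonable. A sketch, assuming the hypothetical `LeaseAsync`/`ILeasedDatabase` shape discussed in this thread and raw commands via `ExecuteAsync`:

```csharp
// Hypothetical: LeaseAsync / ILeasedDatabase do not exist today; this only illustrates
// why connection-level state like WATCH becomes safe on a dedicated socket.
using (ILeasedDatabase leased = await server.LeaseAsync(TimeSpan.FromSeconds(5)))
{
    await leased.ExecuteAsync("WATCH", "stock:item42");
    // (a real caller would GET and inspect the value here before deciding to proceed)
    await leased.ExecuteAsync("MULTI");
    await leased.ExecuteAsync("DECR", "stock:item42");
    // EXEC returns nil if the watched key changed since WATCH; the caller would retry.
    RedisResult result = await leased.ExecuteAsync("EXEC");
}
```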