Async subscription handlers #861
Why does this need to be in the client library at all? |
[Edited to include source links] If I wasn't clear: you can't go all the way through using async. The reason can be seen in a simple example: (_, x) => Thread.Sleep(5000). If this went all the way back, it would actually pause the code that is processing the socket. This is why, as of 3.5.0, all callbacks are dispatched through the thread pool (to isolate code in the client from client handlers); that is the furthest back you can push the async without risking handlers breaking logic in either the subscription or the connection itself. Since you can't push the async any further back than the dispatch to the thread pool immediately before your handler anyway, is there much point in having it? As you point out, it's quite reasonable in the handler to just call AsyncPump.Invoke(() => Handler(message)). What would be the gain of moving this into the client? |
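Greg's point about the socket loop can be sketched like this (illustrative only; the method names and the generic event type here are assumptions, not the client's actual internals):

```csharp
using System;
using System.Threading;

class DispatchSketch
{
    // If the read loop invoked handlers inline, a handler like
    // (_, x) => Thread.Sleep(5000) would stall all socket processing.
    static void DispatchInline(Action<object> handler, object evt)
    {
        handler(evt); // blocks the read loop for as long as the handler runs
    }

    // As of 3.5.0 callbacks are dispatched through the thread pool instead,
    // which isolates the socket-processing code from user handlers.
    static void DispatchViaThreadPool(Action<object> handler, object evt)
    {
        ThreadPool.QueueUserWorkItem(_ => handler(evt));
    }
}
```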
For me it was more a convenience thing; as more and more libs move towards async APIs it feels clumsy to have sync handlers only. Not necessarily about perf. I see the reasoning about the socket, and why callbacks are dispatched on the threadpool. And a similar model could be achieved by using Task.Run with an async callback lambda. Don't get me wrong, I'm not arguing for anything here. I wanted to point out the "drawbacks" of the current API in terms of usability when users call async APIs, and offer a PR.
|
You could use a Task.Run but that would only bring it back to the catchupsubscription (one callback). |
I also find it a bit inefficient that all options for handling events block in the handler one way or another, and I think that making the EventAppeared delegate a function that returns a Task makes very good sense. I have worked around it in the persistent subscription by setting bufferSize to 1 to preserve order, and used an async void method with manual acks so as not to block on the task. I have briefly examined the code and believe ThreadPool.QueueUserWorkItem can be safely replaced with Task.Run, and the rest of the event-processing logic can use async/await without breaking the logic. Processing will still happen on thread-pool threads. ProcessLiveQueue will also be async and return a Task, and all the methods involved in working with events can be async if needed. I'm not sure yet if EventStoreCatchUpSubscription is meant to process events in parallel, but if it is, the approach with Task.Run and async handlers will help utilize resources more efficiently without the need to block a thread. It could also be an improvement to replace the ConcurrentQueue with a BufferBlock to await new events appearing. |
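A minimal sketch of the change described above (assumed shapes, not the client's actual code; TEvent stands in for ResolvedEvent): a BufferBlock replaces the ConcurrentQueue so the consumer can await new events, and the async handler's task is awaited so processing stays ordered on thread-pool threads without blocking one.

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class LiveQueueSketch<TEvent>
{
    private readonly BufferBlock<TEvent> _liveQueue = new BufferBlock<TEvent>();

    public bool Enqueue(TEvent e) => _liveQueue.Post(e);

    // ProcessLiveQueue becomes async and returns a Task; each event's handler
    // task is awaited before the next event is taken, preserving order.
    public async Task ProcessLiveQueueAsync(Func<TEvent, Task> eventAppeared)
    {
        while (await _liveQueue.OutputAvailableAsync().ConfigureAwait(false))
        {
            TEvent e = await _liveQueue.ReceiveAsync().ConfigureAwait(false);
            await eventAppeared(e).ConfigureAwait(false);
        }
    }

    public void Complete() => _liveQueue.Complete();
}
```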
I just ran into the scenario where an event handler contains some async operations today and found this guidance in Google Groups on the subject. From the discussion here and there it's not clear what the options are, or the benefits and trade-offs of each. I would like more coherent documentation to exist somewhere on the subject and would be glad to do what I can to further community understanding of it. |
When using autoack the event will be acknowledged when the handler returns. If you want to use async stuff, turn off autoack and use manual acks, eg: MyHandler(x) { ... }
Studying for the Turing test |
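The suggestion above might look roughly like this (a sketch against the 3.x ClientAPI as I understand it; DoWorkAsync is a placeholder for your async processing, and the exact parameter names are assumptions):

```csharp
// With autoAck: false the event is only acknowledged once the async work
// completes, so a slow task no longer races the automatic acknowledgement.
connection.ConnectToPersistentSubscription("my-stream", "my-group",
    eventAppeared: (subscription, resolvedEvent) =>
    {
        DoWorkAsync(resolvedEvent).ContinueWith(t =>
        {
            if (t.IsFaulted)
                subscription.Fail(resolvedEvent,
                    PersistentSubscriptionNakEventAction.Retry,
                    t.Exception.GetBaseException().Message);
            else
                subscription.Acknowledge(resolvedEvent);
        });
    },
    autoAck: false);
```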
Hi, we have a few Topshelf services and we do some async processing within |
I am guessing it is a typical async problem where you are blocking things without realizing it. Perhaps you can provide an example and we can look through it. |
Hi @gregoryyoung, we noticed that
is invoked synchronously. May |
it is handled synchronously (by design), if you want it to be asynchronous then make it asynchronous past the handler. |
Aren't you pushing users to fall into this trap, since you are offering an async API but only sync handlers?
|
My suggestion is for
|
Yes @alexeyzimarev, but this handler is not awaited under the hood |
But it is awaited inline, isn't it? |
Ok but what will happen within
this.EventAppeared((EventStoreCatchUpSubscription) this, e);
this._lastProcessedPosition = e.OriginalPosition.Value;
Will it wait until your task finishes? Or just invoke the handler and go further? What if there is an exception within your task? |
How exactly would you like us to track your position when you are not
processing events synchronously and the processing is happening out of
order? The point of a catch up subscription is to deliver the events in
order.
|
I don't say I want that :) I just experienced a service hang because I use |
I believe processing can still happen in order and asynchronously at the same time. |
@andrii-litvinov you can do that if you want; the point of a catchupsubscription though is that it provides an ordering assurance. |
@gregoryyoung Sure, I understand the idea of having events processed in order. I also think that the code that invokes the EventAppeared delegate can await its returned task and only after that record the processed position. Of course some changes are required to the internals of a subscription, but it is possible to achieve both ordered and asynchronous processing. What I am trying to say is that asynchronous processing does not necessarily mean parallel. |
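What is proposed above could be sketched like this (a hypothetical internal method; the field and delegate names mirror code quoted elsewhere in this thread, so treat the exact shape as an assumption):

```csharp
// Ordered but asynchronous: the position is recorded only after the
// handler's task completes, and the next event is not dispatched until then.
private async Task ProcessEventAsync(
    Func<EventStoreCatchUpSubscription, ResolvedEvent, Task> eventAppeared,
    EventStoreCatchUpSubscription subscription,
    ResolvedEvent e)
{
    await eventAppeared(subscription, e).ConfigureAwait(false);
    _lastProcessedPosition = e.OriginalPosition.Value;
}
```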
What would be the point of awaiting an async handler?
|
I have checked it and order is preserved only in a blocking manner, of course (no async/await). We apparently need to change the way we handle this eventAppeared handler. |
The point would be to preserve the order. To record _lastProcessedPosition and to invoke a handler for the next event only after the handler has finished processing the current event. And, at the same time, being able to perform some asynchronous I/O in the handler. |
Can't you still do this now by just scheduling an async operation in the handler, e.g. don't handle directly but schedule your operation?
|
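The "schedule your operation" workaround can be done today with a small wrapper (a hypothetical helper, not part of the client): the sync handler only enqueues and returns, and one background consumer awaits the async work in order. Note the trade-off: the subscription considers the event processed as soon as it is enqueued, so checkpointing runs ahead of the real work.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ScheduledHandler<TEvent>
{
    private readonly BlockingCollection<TEvent> _work = new BlockingCollection<TEvent>();

    // Passed to the subscription as the (sync) eventAppeared callback:
    // it returns immediately, so the client's dispatch thread is never blocked.
    public void EventAppeared(TEvent e) => _work.Add(e);

    // A single consumer preserves order while still doing async I/O per event.
    public Task RunAsync(Func<TEvent, Task> handleAsync) => Task.Run(async () =>
    {
        foreach (TEvent e in _work.GetConsumingEnumerable())
            await handleAsync(e).ConfigureAwait(false);
    });

    public void Complete() => _work.CompleteAdding();
}
```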
That's basically how I do it, I use |
@misiektg86, sure, the code is below. We use it with a persistent subscription and it also allows messages to be processed concurrently where applicable. E.g. commands to place or modify the same order will be processed sequentially, but commands for different orders will be processed in parallel (the example is completely artificial). And we use BufferBlock from TPL Dataflow. I believe you can strip out the code that is irrelevant in your case, but I hope the idea makes sense. Otherwise please ask.
public class MessageBalancer : IDisposable
{
private readonly BufferBlock<MessageItem>[] _buckets;
private readonly ILogger _logger;
public MessageBalancer(ILogger logger, int? degreeOfParallelism = null)
{
_logger = logger;
_buckets = Enumerable.Range(0, degreeOfParallelism ?? Environment.ProcessorCount).Select(i => new BufferBlock<MessageItem>()).ToArray();
foreach (BufferBlock<MessageItem> block in _buckets) Consume(block);
}
public async Task<ExecutionResult> Dispatch<TMessage, TKey>(TMessage message, Func<TMessage, TKey> keySelector, Func<TMessage, Task> handler) where TMessage : IMessage
{
int index = GetIndex(keySelector(message));
ITargetBlock<MessageItem> block = _buckets[index];
var item = new MessageItem { Message = message, CompletionSource = new TaskCompletionSource<ExecutionResult>(), Handler = message1 => handler((TMessage)message1) };
await block.SendAsync(item).ConfigureAwait(false);
return await item.CompletionSource.Task.ConfigureAwait(false);
}
private int GetIndex<T>(T key)
{
int hashCode = key.GetHashCode();
unchecked
{
// A hash code can be negative, and thus its remainder can be negative also.
// Do the math in unsigned ints to be sure we stay positive.
return (int)((uint)hashCode % (uint)_buckets.Length);
}
}
private async void Consume(ISourceBlock<MessageItem> block)
{
while (true)
{
MessageItem messageItem;
try
{
messageItem = await block.ReceiveAsync().ConfigureAwait(false);
}
catch (InvalidOperationException e)
{
_logger.Info(e, "Block was marked as complete.");
break;
}
try
{
Task task = messageItem.Handler(messageItem.Message);
var result = task as Task<ExecutionResult>;
if (result != null)
{
messageItem.CompletionSource.SetResult(await result);
}
else
{
await task;
messageItem.CompletionSource.SetResult(ExecutionResult.Acknowledge());
}
}
catch (Exception e)
{
_logger.Error(e);
messageItem.CompletionSource.SetResult(ExecutionResult.Retry(e.Message));
}
}
}
private class MessageItem
{
public IMessage Message { get; set; }
public TaskCompletionSource<ExecutionResult> CompletionSource { get; set; }
public Func<IMessage, Task> Handler { get; set; }
}
public void Dispose()
{
foreach (BufferBlock<MessageItem> block in _buckets) block.Complete();
}
} |
Thanks @andrii-litvinov |
Hi @gregoryyoung, finally I found the issue with the deadlock. It turns out that you cannot access EventStore from the Connection.Connected handler on the same thread. Here https://github.com/misiektg86/SubscriptionDeadlockExample I have created an example with steps to show this issue. |
Thanks, will look through it.
|
@misiektg86 Currently, the events raised by the connection are raised on the same thread that the connection is running on. So when the connection raises the Connected event, anything done synchronously in that handler runs on, and can block, the connection's own thread. This is something we are aware of, and I have had a look at it, but I want to finish the work I am currently busy with before paying more attention to it. As a workaround, you can use
|
@andrii-litvinov This is what we do also, but it becomes an issue when your stream has hundreds of thousands of events and you need to throttle how many are coming in (which is why async support would be so useful). |
@Salgat, And I very much agree that it would be nice to have async support out of the box here. |
@andrii-litvinov Are you sure? I was under the impression that, while internally it buffers to be able to provide events in a performant manner, it will still try to provide events as fast as your handler can accept them. If that's not the case, what setting in http://docs.geteventstore.com/dotnet-api/3.2.0/competing-consumers/ are you referring to? Also, your idea of closing catchup subscriptions as a throttling mechanism is very interesting, I may look into using that. |
@Salgat, There is parameter |
Easy enough to test. And yes, it's the buffer size that is passed that controls the max number of in-flight messages.
|
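The buffer-size knob referred to above is the parameter passed when connecting (a sketch against the assumed 3.x signature; parameter names may differ slightly):

```csharp
// bufferSize caps the number of in-flight, unacknowledged messages the
// server will push to this consumer; combined with manual acks it acts as
// the throttle discussed above.
connection.ConnectToPersistentSubscription("my-stream", "my-group",
    eventAppeared: (subscription, resolvedEvent) => Handle(subscription, resolvedEvent),
    bufferSize: 10,
    autoAck: false);
```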
I'm adding my vote and more arguments in favor of adding support for async handlers. As @andrii-litvinov and @Salgat have said before, it's not about parallelism but async I/O. With the current API, we are forced to do:
OnEventAppeared(...) => EventStoreConnection.AppendToStreamAsync(...).Wait()
The code above clearly demonstrates the irony of GetEventStore having an async API and being unable to use it. With all the new async APIs available, it can be much worse, like:
OnEventAppeared(...)
{
    fileStream.ReadAsync(...).Wait();
    sqlDataReader.ReadAsync(...).Wait();
    eventStoreConnection.AppendToStreamAsync(...).Wait();
    smtpClient.SendAsync(...).Wait();
}
The code above will waste a Thread Pool thread, blocked doing nothing while network and disk operations are performed.
I agree that when the only component using threads from the Thread Pool is the EventStoreConnection all this is very much irrelevant, since only one of the threads from the Thread Pool will be blocked and it will have no impact on the application. However, there are plenty of scenarios where the EventStoreConnection will be sharing the Thread Pool with other components. For instance, when running a service that connects to multiple GetEventStore servers and hosts OWIN with WebApi and SignalR middlewares. As you can imagine, that's our case. To make it worse, we also try to run our services on very small VMs which have 2 or 4 virtual cores (which means the optimal number of threads in the Thread Pool is around 2 or 4), and we are currently suffering the consequences of having to .Wait inside OnEventAppeared: WebApi and SignalR calls slow down, EventStore misses heartbeats and disconnects, etc.
Adding an Actor that enqueues the messages and processes them on a separate thread is definitely possible, but adds a lot of complexity on our side, having to replicate a lot of functionality that GetEventStore already provides, like throttling. Both WebApi and SignalR already implement a pattern whereby if a handler returns a Task, it is awaited by the "engine". If it isn't, it's run synchronously. I hope the GetEventStore team listens to this feedback. Adding this shouldn't be very hard and shouldn't add too much complexity, and in my opinion it will bring GetEventStore fully into the new async world. One thing is for sure: my team and I would benefit a lot from having this, and we are even willing to provide a PR if necessary. |
OnEventAppeared(...)
{
fileStream.ReadAsync(...).Wait();
sqlDataReader.ReadAsync(...).Wait();
eventStoreConnection.AppendToStreamAsync(...).Wait();
smtpClient.SendAsync(...).Wait();
}
Why can't you schedule this yourself?
You have to understand we have underlying code that can be affected dramatically by what happens in your async operations. I was against even putting async operations into things like catchupsubscription as it lowers our determinism. For me the single-threaded model with events pushed off a thread was far more deterministic.
*I may have a different perspective as I am often called upon to debug these kinds of support issues.
Greg
|
Regardless, unless we support an infinite unthrottled consumption of
subscription events, we still have to block the handler to manually
throttle it. The thread blocking problem exists in one form or another.
|
So the subscriptions used to be a single-threaded model until it was intelligently moved to async. Now we should move to be more async. I maintain that the original model was easier to comprehend.
|
@gregoryyoung I think I completely understand your position. Supporting async operations is more complex than not, and as the owners of the library it's also your decision. However, please consider:
Sure we can, we now need to, and we will have to deal with the extra complexities. If we set these as the goals of the perfect solution:
The first and most simplistic approach that comes to mind has already been described and consists of having an Actor processing the messages. That frees up the thread running OnEventAppeared immediately, but it has a few important issues:
Given all the above, I would argue that the point of libraries is to prevent all that complexity from being dumped on the clients. Perhaps a compromise would be to add an extra class to the API that would handle all of this? I understand this is how catch up subscriptions came to exist, to handle the complexity of transitioning from catching up with existing events to live events, instead of letting clients deal with it. The GetEventStore team obviously know their threading model the best and could implement this in the best possible way. This approach would give them control if/when they decide to change the whole thing, as long as the async contract is kept? I know I'm putting a lot on the same post but I'm trying to be complete in my analysis in the hope that we can get either a change in the library or a recommended approach to achieve all goals 1, 2, and 3. Cheers |
I think it's OK to block a dedicated thread. If you have good reasons to need a dedicated background thread in the first place, that is. What I don't think is OK is to block a thread pool thread. It also doesn't have to be like that. All I/O operations can be done asynchronously. It might make other things more complex, or break your current model, but it certainly could be done in a way that all network operations inside the connection could be handled asynchronously, that is, not parallel but not blocking. I agree that if your interface wasn't async, expectations wouldn't be so high :-) It would probably also make GetEventStore less scalable and ultimately less attractive. I should also add that GetEventStore is giving us so much already, and we really like it. We use it as a message bus through which applications communicate, as well as a more typical event store, and so far it has been able to handle it all impressively. The ops story is also great. This is me asking even more from it! |
Hi. I have just created PR #1310 to implement this: Func<TConnection, ResolvedEvent, Task> |
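With the Task-returning delegate from PR #1310, the blocking examples earlier in the thread become plain awaits. A sketch (SaveProjectionAsync and Map are placeholders, not part of any API):

```csharp
// The subscription can now await the handler, so async I/O needs no .Wait():
Func<EventStoreCatchUpSubscription, ResolvedEvent, Task> eventAppeared =
    async (subscription, resolvedEvent) =>
    {
        await SaveProjectionAsync(resolvedEvent).ConfigureAwait(false);
        await eventStoreConnection.AppendToStreamAsync(
            "output-stream", ExpectedVersion.Any, Map(resolvedEvent))
            .ConfigureAwait(false);
    };
```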
Looks like this was fixed with #1310 |
@dnauck showed me some event store projects he'd built at a recent event. He was using asynchronous libraries inside the
Action<EventStoreSubscription, ResolvedEvent> eventAppeared
handler. Because these action delegates are sync, he had the following possibilities:
We all agree that async void is possibly the worst variant a user could choose. The other ones are cumbersome. So how about we change it to a Task-returning delegate,
and apply the same change to the other variants of the API.
If you guys agree that it would be a good change I could send in the PR. There are multiple ways to handle this.
IHandle
(step by step). Caveats: this is a breaking change.
What are your thoughts?