[EventsSubscription] Deduplication for consumers #371
Description
While there are some very smart solutions here, and I learned something (like injecting the transaction context), how would you do something like that in C# (reflection / aspects)? It looks like a JS hack.
In any OOP language, you can use the decorator pattern or inheritance, e.g.:
```ts
class ApplicationCommandWithContext extends ApplicationCommand {
  constructor(readonly context: PrismaTransactionContext, ...restOfParams: any[]) {
    super(...restOfParams);
  }
}

// and then
if (command instanceof ApplicationCommandWithContext) {
  // command.context is available here
}
```
I used a JS hack because it's simpler and faster to code.
I think I made a design mistake earlier, though, and it's dangerous to follow this approach.
Expanding the transaction context is probably not the way to go.
With that, our solution is not scalable, and one slice's logic is coupled to other slices. So I propose to make something easier and more scalable.
As in every messaging system, we need to handle message duplication instead of introducing transactional consistency.
Let's assume for a moment that the source for subscriptions is not the database where we're storing progress, but some message broker, etc. In that case, it's impossible to have a transaction that also covers the side effect.
So I propose to leave subscriptions without a transaction (add it only to the EventStore) and accept possible duplicates.
What do you think?
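To make the idea concrete, here is a rough sketch of a subscription loop without a covering transaction (all names here, like `DomainEvent`, `SubscriptionCheckpointStore` and `runSubscription`, are illustrative assumptions, not the actual codebase): the handler runs its side effects first and the checkpoint is saved afterwards, so a crash in between means the event gets delivered again.

```ts
// Rough sketch (names are illustrative): the event handler runs without any
// surrounding transaction; only the checkpoint write persists progress.
type DomainEvent = { id: string; globalOrder: number; type: string; data: unknown };

interface SubscriptionCheckpointStore {
  load(subscriptionId: string): Promise<number>;
  save(subscriptionId: string, position: number): Promise<void>;
}

async function runSubscription(
  subscriptionId: string,
  readEventsAfter: (position: number) => Promise<DomainEvent[]>,
  checkpoints: SubscriptionCheckpointStore,
  onEvent: (event: DomainEvent) => Promise<void>,
): Promise<void> {
  const position = await checkpoints.load(subscriptionId);
  for (const event of await readEventsAfter(position)) {
    // Side effects happen here, outside any transaction.
    await onEvent(event);
    // If this write fails (or the process crashes before it), the event will be
    // delivered again on the next run: at-least-once semantics, duplicates possible.
    await checkpoints.save(subscriptionId, event.globalOrder);
  }
}
```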
I see some pros and cons of using the current implementation of transactional consistency:
- pros:
  - it's the fastest possible implementation because it builds the whole transaction query and executes it at once (there is no long-living active transaction; see the sketch below)
  - we get read-model consistency for free
- cons:
  - it's bug-prone; only code review can ensure that it's used correctly by other developers
  - it can't be abstracted; we're bound to `PrismaPromise`
  - there is still a need to manually handle some errors (e.g. when `executeAfterTransaction` fails, or for external resources)
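For reference, a minimal sketch (not the actual code; model names like `courseReadModel` and `subscriptionCheckpoint` are assumptions) of the batch style described above: the projection only builds `PrismaPromise`s and everything is submitted in a single `prisma.$transaction` call, so there is no long-living interactive transaction, but the handler's signature is tied to Prisma.

```ts
import { PrismaClient, Prisma } from '@prisma/client';

const prisma = new PrismaClient();

// The projection does not execute anything itself; it only builds PrismaPromises.
function projectCourseRenamed(event: { courseId: string; newName: string }): Prisma.PrismaPromise<unknown>[] {
  return [
    prisma.courseReadModel.update({
      where: { id: event.courseId },
      data: { name: event.newName },
    }),
  ];
}

async function handle(event: { courseId: string; newName: string }, position: number) {
  // The whole batch is sent and committed at once, together with the checkpoint update.
  await prisma.$transaction([
    ...projectCourseRenamed(event),
    prisma.subscriptionCheckpoint.update({
      where: { subscriptionId: 'course-read-model' },
      data: { position },
    }),
  ]);
}
```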
I agree with you that we should abandon transactional consistency. In the end, it's harder to ensure and harder to code.
Perhaps we can mitigate duplicates with some buffer of processed commands?
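Something along these lines, as a sketch (the `ProcessedCommandsBuffer` name is made up): a small bounded set of recently processed command ids. It only catches duplicates that arrive within the buffer window, so it's a mitigation rather than a guarantee.

```ts
// Sketch of a bounded buffer of processed command ids (illustrative, not in the codebase).
class ProcessedCommandsBuffer {
  private readonly seen = new Set<string>();
  private readonly order: string[] = [];

  constructor(private readonly capacity = 1000) {}

  // Returns true if the command id was already processed (i.e. it's a duplicate).
  isDuplicate(commandId: string): boolean {
    if (this.seen.has(commandId)) {
      return true;
    }
    this.seen.add(commandId);
    this.order.push(commandId);
    if (this.order.length > this.capacity) {
      // Evict the oldest id so memory stays bounded.
      this.seen.delete(this.order.shift()!);
    }
    return false;
  }
}
```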
As I understand it, we have 3 cases when `onEvent` is invoked:
- to update a read model
- to perform a side effect on external resources (e.g. learning-materials-url)
- to send a command
I think that we can mitigate duplicates 'globally' in the 1st and 3rd cases:
- We can add a `version` field to each read model. The `version` points to `eventId` or `globalOrder` and represents the current state of the read model up to this event (see the sketch below).
- We can somehow use stored metadata (e.g. `causation_id`) to filter out duplicates.
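For the read-model case, a minimal sketch assuming a hypothetical `courseReadModel` with a `version` column: the update is applied only if the event's `globalOrder` is newer than the version already stored, so a redelivered event becomes a no-op.

```ts
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function applyCourseRenamed(event: {
  courseId: string;
  newName: string;
  globalOrder: number;
}): Promise<void> {
  // updateMany lets us put the version guard in the WHERE clause, so a duplicate
  // (or out-of-order) delivery simply updates zero rows.
  await prisma.courseReadModel.updateMany({
    where: { id: event.courseId, version: { lt: event.globalOrder } },
    data: { name: event.newName, version: event.globalOrder },
  });
}
```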
Originally posted by @HTK4 in #364 (comment)