Duplicate uuid/version events prevent further updates #170
We had the same issue and had to manually delete one of the duplicated events. I wonder why this happens too.
@UncertaintyP It turns out this is a design issue. As developers, we should design our methods to be idempotent to protect against this; the exception the aggregate throws is valid. @UncertaintyP I'd love to discuss your use case. You can reach me via my profile.
@dilab Sorry, I don't see how to contact you. My use case is pretty simple: I want to update a User model. I was relying on the documentation, which states:
@UncertaintyP
@freekmurze Is it possible to add a lock mechanism to this package to prevent concurrent requests?
I also had this same issue. My fix, as I was up against it at the time, was to manually delete the event in the stored events table, which allowed my system to unlock. My duplicate event appeared within the events table only, so the aggregate root was out of sync with my projections (although this time it was the aggregate root at fault). Where you see 4 lines, there should only be 2: only 2 events fired, as reflected in my projections.
@UncertaintyP are you using a transaction somewhere in this particular request?
@dilab No. This simply records the event, and the projector uses Laravel's User::update method. Also, no queues.
@UncertaintyP @dilab This is the same in my case too. I have concurrent requests being generated via a standard button click through to a controller. I do have queues running too, but have isolated certain cases to actions where no queues are involved. I do not really understand how a concurrent request can happen in these cases. I am not using transactions either. I am wondering whether adding an atomic lock prior to calling any methods on the aggregate root will fix this, or whether something is going on past this first call, i.e.:

```php
Cache::lock('MyAction')->get(function () {
    InventoryAggregateRoot::retrieve($this->uuid)->allocateOrderItem($this->item)->persist();
});
```

I am trying this next.
I have found another instance where the aggregate has 2 events under the same aggregate_version, but they were created 20 minutes apart. So these are certainly not the same process, and an atomic lock will not help. What could cause the same aggregate version to be used across this length of time? Same aggregate_uuid, same aggregate_version, created at 21:05 and 21:26 respectively.
@booni3 are you using (non-sync) queues?
@erikgaal I have seen the issue in 2 scenarios: same aggregate version with the same timestamp, and same aggregate version with different timestamps. My queue supervisor config:

```php
'supervisor-allocation' => [
    'connection' => 'redis-long-running',
    'queue' => ['allocation-high', 'allocation'],
    'balance' => 'false', // process queues in order
    'processes' => 1,
    'tries' => 1,
    'timeout' => 300, // timeout after 5 minutes
],
```
@booni3 what is the size of the aggregate? I mean: how many events are recorded for that particular UUID?
@freekmurze Did some research over the weekend. This package is unable to deal with the race condition: two events with the same aggregate ID and the same version can be saved at nearly the same time. This can be simulated using Apache Bench, for example. One way to solve this is a composite primary key consisting of the aggregate ID and the version number; however, this package does not save a unique version number per aggregate. So my suggestion is to do:

What do you think? If okay, I will send a PR.
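For illustration, the composite key idea above could be expressed as a Laravel migration along these lines. This is only a sketch: it assumes the package's default `stored_events` table and column names, and uses the anonymous-class migration style from newer Laravel versions; the eventual PR may look different.

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

// Sketch only: adds a composite unique index so the database itself
// rejects a second event with the same aggregate_uuid + aggregate_version.
return new class extends Migration
{
    public function up(): void
    {
        Schema::table('stored_events', function (Blueprint $table) {
            $table->unique(['aggregate_uuid', 'aggregate_version']);
        });
    }

    public function down(): void
    {
        Schema::table('stored_events', function (Blueprint $table) {
            $table->dropUnique(['aggregate_uuid', 'aggregate_version']);
        });
    }
};
```

With such an index in place, the losing side of a race gets a unique-constraint violation from the database instead of silently writing a duplicate version.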
Hi all, sorry for the late reply. I was reading through all your comments and think that a composite key would indeed be the best solution: let the database deal with the race conditions in order to prevent invalid state. This means that the developer should still handle those cases if they occur. @dilab thanks for the thorough investigation; yes, feel free to send a PR!
FYI, I'm working on a PR myself.
Status update: unfortunately, adding a unique constraint breaks aggregate roots that have `$allowConcurrency` enabled. There are two options for dealing with this breaking change:

I prefer option 2, but would like some input.
I think the deprecation and optional migration is a good solution for people who need the fix right away. We can then remove the concurrency functionality in the v5 branch.
PR is here: #212
@brendt awesome, thanks! Any plans to add a retry mechanism to this package? Before we switched to this package, we had a custom event sourcing implementation that caught the unique constraint violation and retried X times before giving up, allowing you to still have some concurrency without sacrificing the integrity of the aggregate.
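A catch-and-retry loop like the one described could be sketched in plain PHP 8 like this. The helper name and exception class are hypothetical, not part of this package's API; a real implementation would catch the package's concurrency exception and re-retrieve the aggregate root between attempts.

```php
<?php

// Hypothetical stand-in for the unique-constraint / concurrency error.
class ConcurrencyException extends \RuntimeException {}

// Illustrative helper: run an operation, retrying on conflict up to
// $maxAttempts times before rethrowing the last exception.
function retryOnConflict(callable $operation, int $maxAttempts = 3): mixed
{
    $attempt = 0;

    while (true) {
        try {
            return $operation();
        } catch (ConcurrencyException $e) {
            if (++$attempt >= $maxAttempts) {
                throw $e; // give up after the final attempt
            }
            // A real implementation would re-retrieve the aggregate
            // root here so the next attempt sees the latest version.
        }
    }
}

// Usage: the operation fails twice, then succeeds on the third attempt.
$calls = 0;
$result = retryOnConflict(function () use (&$calls) {
    $calls++;
    if ($calls < 3) {
        throw new ConcurrencyException('duplicate aggregate version');
    }
    return 'persisted';
}, 5);
```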
@rovansteen We could add a function that retries X times, but it sounds to me like that would only solve part of the problem. My preference would be a mechanism that ensures only one persist can be done simultaneously for specific operations that you know might happen concurrently, and that automatically refreshes the aggregate root as well. We already have an excellent queuing system built into Laravel, so I'd prefer to look in that direction first.
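One way to get the queue-based serialization described above is Laravel's `WithoutOverlapping` job middleware (available in Laravel 8+). The job name, namespace, and `InventoryAggregateRoot` below are illustrative, borrowed from earlier in the thread; this is a sketch of the approach, not the package's own solution.

```php
<?php

namespace App\Jobs;

use App\Aggregates\InventoryAggregateRoot; // hypothetical namespace
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\Middleware\WithoutOverlapping;

// Illustrative job: serializes writes to a single aggregate by keying
// an overlap lock on the aggregate's uuid, so two commands for the
// same aggregate never persist at the same time.
class AllocateOrderItem implements ShouldQueue
{
    use Dispatchable, Queueable;

    public function __construct(
        public string $uuid,
        public array $item,
    ) {}

    public function middleware(): array
    {
        // Jobs sharing the same lock key run one at a time.
        return [new WithoutOverlapping($this->uuid)];
    }

    public function handle(): void
    {
        InventoryAggregateRoot::retrieve($this->uuid)
            ->allocateOrderItem($this->item)
            ->persist();
    }
}
```

The controller would then `AllocateOrderItem::dispatch($uuid, $item)` instead of touching the aggregate directly, and concurrent requests for the same uuid queue up behind each other.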
As a sidenote: we were already planning on adding a command bus in v5, which would be one way of solving the issue: only allow one concurrent command at a time, running in a job.
@brendt great, any timeline to release this?
@rovansteen we are using Laravel's built-in queue retry mechanism to handle the case you mentioned.
Released as 4.10.0: https://github.com/spatie/laravel-event-sourcing/releases/tag/4.10.0
Here's the follow-up discussion about better handling concurrency: #214
First I want to thank you guys for your awesome packages and the hard work you put into them. ❤️
I was trying out this package in local development (Nginx + PHP-FPM, MySQL). After some time this system goes dormant and the next request takes some time to be processed. In this state I (accidentally) made 2 concurrent requests to update the same aggregate. The corresponding controller function:
This resulted in a duplicate entry in the database, see:
Note: The `UserAggregateRoot` does NOT have `protected static bool $allowConcurrency = true;`.
Now every update request to that specific aggregate (user) results in the following error:
[aggregate_uuid, aggregate_version]? If so, should this be highlighted in the docs?

Adding `allowConcurrency` to the root afterwards allows for updating and inserts a new event with version 3.

Best regards,
Marcel