Archive event streams #49
Consider replacing all events in the original store with a special archived event; then, if the aggregate is loaded, throw a dedicated exception.
@MartinHave What do you think about this idea? Specifically the concept of throwing an exception if the aggregate is loaded and archived.
@rasmus Why would we want to do that? Could you specify when this would be handy? As a basic rule, we should never do anything other than appends on the event store.
@MartinHave When an aggregate is deleted, we would like to archive the event stream somewhere else to minimize storage so that our event store doesn't grow indefinitely. Thus we would like to be able to archive an aggregate, i.e., move its event stream to some cheaper storage. The idea was that, as we move the event stream, if an aggregate is accidentally created with the same ID as an archived one, it should throw an error.
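The idea above could be sketched roughly as follows. This is an illustrative Python sketch, not EventFlow code; all names (`EventStore`, `StreamArchivedError`, `archive`, etc.) are hypothetical:

```python
class StreamArchivedError(Exception):
    """Raised when an archived aggregate is loaded or appended to by its original ID."""


class EventStore:
    """Toy in-memory store: archiving moves a stream to cold storage and
    makes any later use of that aggregate ID raise an error."""

    def __init__(self):
        self._streams = {}  # aggregate_id -> events (hot storage)
        self._archive = {}  # aggregate_id -> events (cheap cold storage)

    def append(self, aggregate_id, event):
        if aggregate_id in self._archive:
            # An aggregate accidentally reusing an archived ID fails loudly.
            raise StreamArchivedError(aggregate_id)
        self._streams.setdefault(aggregate_id, []).append(event)

    def archive(self, aggregate_id):
        # Move the whole stream out of the hot store.
        self._archive[aggregate_id] = self._streams.pop(aggregate_id, [])

    def load(self, aggregate_id):
        if aggregate_id in self._archive:
            raise StreamArchivedError(aggregate_id)
        return list(self._streams.get(aggregate_id, []))
```

So after `archive("order-1")`, both `load("order-1")` and `append("order-1", ...)` raise, which is the "throw on load" behavior being debated.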
@rasmus There are many cases where it is very important to keep the event stream intact. What about new views that need to aggregate across many events, even those from aggregates that have ended their lifetime? If I wanted to create a new statistics view, I would not get a complete picture. I would argue that we should NEVER move events out of the event stream.
@MartinHave How about creating a concept of primary and secondary event stores? Then if you want to archive an infrequently used (or deleted) aggregate, move its event stream to the secondary, cheaper event store. The event stream would still be accessible, but it may take a bit longer to load. The application should load the aggregate normally, and it shouldn't be possible to tell whether you loaded an aggregate from the primary or secondary store by anything but the load time.
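The primary/secondary idea could look something like this minimal sketch (again hypothetical names, not EventFlow APIs): reads fall back transparently to the secondary store, so callers see the same events regardless of which tier holds the stream.

```python
class TieredEventStore:
    """Toy two-tier store: 'demoting' a stream moves it to cheaper secondary
    storage without changing what load() returns."""

    def __init__(self):
        self.primary = {}    # fast, expensive storage
        self.secondary = {}  # slow, cheap storage

    def append(self, aggregate_id, event):
        self.primary.setdefault(aggregate_id, []).append(event)

    def demote(self, aggregate_id):
        # Archive step: move the whole stream to the secondary store.
        self.secondary[aggregate_id] = self.primary.pop(aggregate_id, [])

    def load(self, aggregate_id):
        # Transparent read: identical result from either tier,
        # only latency would differ in a real system.
        if aggregate_id in self.primary:
            return list(self.primary[aggregate_id])
        return list(self.secondary.get(aggregate_id, []))
```

The design point is that archiving becomes an operational detail (which tier holds the bytes), while the append-only stream stays logically intact for projections and statistics views.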
@MartinHave @rasmus This subject came up recently for us in the context of GDPR and the "right to be forgotten". Any thoughts on how an append-only world should deal with such cases?
@PiotrBrzezianski It's easy: delete everything. In cases where you can't delete, use a strong one-way hash (please check with your legal department case by case to verify that this is indeed a possibility). The "append-only" and "don't modify event streams" principles fall apart when it comes to GDPR. By law you are to delete everything. Even event times are considered PII, as they can be used to identify individuals based on when they were online. In our department we take the matter quite seriously (as should everyone), and we delete everything we have on a person and verify it afterwards if they issue a GDPR DDR.
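The one-way-hash option mentioned above could be sketched like this. A minimal illustration (the function name and field layout are assumptions, not part of any library): PII fields are replaced with a salted SHA-256 digest so the event stream keeps its shape while the personal data becomes unrecoverable.

```python
import hashlib


def pseudonymize(event, pii_fields, salt):
    """Return a copy of the event dict with the given PII fields replaced
    by a salted one-way SHA-256 hash. Illustrative sketch only; whether
    hashing satisfies GDPR deletion is a legal question, not a technical one."""
    out = dict(event)
    for field in pii_fields:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode("utf-8")).hexdigest()
            out[field] = digest
    return out
```

Hashing is deterministic per salt, so projections that only compare identities still work, but the original values cannot be read back out of the stream.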
@rasmus Sure, that's always an option, but I like the idea that the stream would still exist, just replaced with a special event. It makes it explicit that trying to reach that aggregate ID is not a mistake but an intentional action. Anyway, archiving would be quite high on my list of features, because the entities we deal with simply become irrelevant eventually, whether for any operations on them or even statistics.
Back from sabbatical. Archiving event streams is very high on my wish list as well, and I did a PoC for it, or at least the initial beginnings of one, in #464. I would like to have support for the
Hello there! We hope you are doing well. We noticed that this issue has not seen any activity in the past 90 days. If you still require assistance with this issue, please feel free to reopen it or create a new issue. Thank you for your understanding and cooperation. Best regards, |
Hello there! This issue has been closed due to inactivity for seven days. If you believe this issue still requires attention, please feel free to reopen it. Thank you for your contribution to this repository. Best regards,
As an alternative to the deletion of event streams described in #48, we should provide functionality to ease the process of archiving an aggregate's event stream.