
Event Store Improvements for v4 #1307

jeremydmiller opened this issue Jul 7, 2019 · 6 comments

@jeremydmiller jeremydmiller commented Jul 7, 2019

WIP: I just got back from a vacation and got to thinking about the event store after getting enough rest for once

Big Existing Issues

Other ideas for improvements

  • Maybe the async daemon gets completely rewritten with RxExtensions as opposed to the TPL Dataflow. I like the Dataflow lib personally, but an easy way to deal with the async daemon and multi-tenancy is to split streams by tenant and use a separate document session per tenant as necessary. Either way, I want the async daemon a bit more optimized for rebuilds and regular projections.

  • Possibly do either a sample app or a pre-built app that hosts the async daemon process. We could go super slim or build out full blown Azure and/or AWS infrastructure for monitoring and maybe an admin UI. Some kind of support for clustering the daemon with failover. Some kind of support for triggering rebuilds? I'm already dreading the arguments over exactly what technology stack to use, but oh well.

  • Possibly a different pre-built application that incorporates some kind of service bus or queuing mechanism to pipe events captured in the DocumentSession through a listener to a queue, where the projections would be built by some kind of lightweight async daemon (or just the real thing in a slightly different mode). We'd have to deal with some message sequencing to make that work, but it's possible. Not a slam dunk 'cause some projection types have to be singletons because they're stateful.

  • Projection snapshotting. Like maybe you take a snapshot of an aggregate every 5 events and store that so that on-demand aggregations are much faster. Kind of a hybrid between live and on demand. Plenty of folks have asked for this over the years.

  • Add extra, extending interfaces on top of IProjection that could refine the behavior of the async daemon for better efficiency. Stuff like, "does it need event metadata at all, or just the event data?" or "does it aggregate one stream at a time" that might change how the async daemon would work, especially for rebuilds.

  • I would like to see us do a full replacement for both the existing Aggregator implementation and possibly ViewProjection. I've got some ideas for this, but haven't written anything down yet. Don't scream at me yet ;)

  • Possibly do adapters so you can use existing projection libs like Liquid Projections from within Marten
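
The snapshotting bullet above could be sketched roughly like this. Everything here is hypothetical (`Snapshot<T>`, the `IEventStore` methods, and the fixed-cadence policy are assumptions for illustration, not an existing Marten API): rebuild the aggregate from the latest stored snapshot plus only the trailing events, and persist a fresh snapshot every N events.

```csharp
// Hypothetical sketch only: Snapshot<T> and the IEventStore methods below are
// assumptions, not an existing Marten API.
public class Snapshot<T>
{
    public Guid StreamId { get; set; }
    public int Version { get; set; }   // stream version when the snapshot was taken
    public T State { get; set; }
}

public static class SnapshotLoader
{
    // Rebuild an aggregate from the latest snapshot plus only the trailing
    // events, persisting a new snapshot every `snapshotEvery` events.
    public static T Load<T>(IEventStore store, Guid streamId, int snapshotEvery = 5)
    {
        var snapshot = store.LoadSnapshot<T>(streamId);        // assumed API
        var baseVersion = snapshot?.Version ?? 0;
        var events = store.EventsAfter(streamId, baseVersion); // assumed API

        var state = snapshot != null ? snapshot.State : default(T);
        var aggregate = store.ApplyEvents(state, events);      // assumed API

        // Only write a new snapshot once enough events have accumulated
        if (events.Count >= snapshotEvery)
        {
            store.SaveSnapshot(new Snapshot<T>
            {
                StreamId = streamId,
                Version = baseVersion + events.Count,
                State = aggregate
            });
        }

        return aggregate;
    }
}
```

The interesting design question is where the cadence lives: per aggregate type, per stream, or a user-supplied predicate instead of a fixed event count.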


@oskardudycz oskardudycz commented Jul 9, 2019

@jeremydmiller Thanks for this write-up! Here are my thoughts and other ideas I've been thinking about:

Async Daemon

The internals of its implementation are still enigmatic to me, so it's hard for me to give more detailed answers. So at first I'd propose providing good documentation and samples for it (we still lack them; afaik it's still only your blog post about that).

  • Multi-Tenancy - In my previous commercial project, where we were using Conjoined Tenancy extensively, not being able to rebuild projections was a real issue. So from my perspective that's really important, although I'm not sure what the adoption of that feature is - so how many of our users need it.

  • Restart - I don't fully get the description of this feature. It sounds needed, although the complexity is hard for me to judge.

  • Tombstone placeholders - same here, I'm not able to judge the complexity. To me it looks like a nice-to-have but not a top priority.

  • Clustering, Reactive Extensions, Performance, etc. - All of those features are great, and I'd gladly participate in implementing them, although my biggest concern is our capacity: whether, given our timeboxes, we're able to deliver something production-ready in a set period of time. Having clustering and fully fledged async support won't be easy; it's scope for even a separate project/library. That might end up with us writing our own Kafka/RabbitMQ (plus monitoring, metrics, a failover strategy, etc.). I know that if we had a good plan and work breakdown and focused as a group on it, we'd be able to deliver something simple and production-ready, but the question is whether people would use it. My view is that we should focus on the Async Daemon being the projection rebuild mechanism or the integration point for the outside world, but keep it as thin, simple and performant as possible. Imho, instead of making it bigger, we could look at the possibility of using some already-built tools or provide integrations with existing messaging solutions. I love the idea of using Reactive Extensions; that could make integration with others easier.

I like the idea of the pre-built apps and samples with some integration with other tools. Regarding which cloud? Dunno, I still believe that Azure is a poor man's AWS, but on the other hand, the .NET community is in love with Microsoft tools like MSSQL and others, so probably it would be better to start with Azure. Maybe some cooperation with Microsoft would give us some grants or at least marketing?

Partitioning/Multi Tenancy

Imho that's a must-have. As I've gathered recently, performance is one of people's fears about Event Sourcing. Also, when I was explaining Marten to new people, there was always the question of how a single events table will handle big loads. Although I think those fears are exaggerated, I see the point of making our store more performant and also giving people hard numbers: "yes, we can do it, see, there is no point in being afraid".
So built-in partitioning, and checking whether TimescaleDB is really as performant as the Internet describes it, would help us deliver that (there is an issue opened by @cocowalla that I'm still unable to get to and check: #1262).
I was also thinking about a new multi-tenancy mode for the Event Store - "Table per Stream". That would make it possible to logically split the tables into smaller chunks, so we'd have at least a split similar to the Topics/partitions that Kafka/RabbitMQ have. It might turn out that partitioning would in fact be such a tenancy.


I fully agree, the current ViewProjection mechanism is hard to maintain. I'm currently working on #1302. I've already started some small unification of the projections mechanism to make it (at least from the abstractions perspective) more generic. I was thinking that maybe that would be a good starting point for discussions around the potential refactoring? I could provide my first PoC, and from that concrete proposal we could work to make it right?
Or if you prefer to come up with the initial changes yourself, then I could focus on fixing this one issue and leave the rest for you. What do you think?

About projection snapshotting - it would be nice to give some flexible way of snapshotting. I'm not sure that taking a snapshot once per a few events would be a huge benefit on its own, but if we give e.g. the possibility to define that you'd like to have it once per day, or some other custom filter expression - then imho that might be a huge benefit.

I think that two other types of projections are also low-hanging fruit and would be good "marketing" for coexistence with, or migration from, ORMs:

  • flat table projection - with that, people might use Marten as the Event Store and EF/Dapper for the read model, or have a mixed solution, like some modules doing Event Sourcing with Marten plus an ORM read model, and some modules (like admin ones) fully on EF,
  • transform projection - something similar to what we have right now for transforming events, but it would store the new state of the record in a separate row. That would imho be a great solution for keeping the history of records, which is quite a common requirement in "traditional systems" (so you have the regular, most recent state of the entity in one table, plus a separate table with the history of the record).
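
The transform projection idea could be sketched like this (all of these types and the `Event<T>` member names are assumptions for illustration, not a committed Marten API): every event is transformed into a new immutable row, so the table accumulates the full history of the record instead of only its latest state.

```csharp
// Hypothetical sketch of a "transform projection": every event becomes a NEW
// row rather than an update, so the table keeps the record's full history.
// None of these types are an existing Marten API.
public class OrderHistoryRow
{
    public Guid Id { get; set; }        // row id, unique per version
    public Guid OrderId { get; set; }   // the stream this row belongs to
    public int Version { get; set; }    // stream version that produced this row
    public string Status { get; set; }
    public DateTime RecordedAt { get; set; }
}

public class OrderHistoryTransform
{
    // One Transform method per event type; each call yields a fresh row.
    public OrderHistoryRow Transform(Event<OrderStatusChanged> e)
    {
        return new OrderHistoryRow
        {
            Id = Guid.NewGuid(),
            OrderId = e.StreamId,
            Version = e.Version,
            Status = e.Data.NewStatus,
            RecordedAt = e.Timestamp
        };
    }
}
```

The point of the separate-row shape is that the table becomes append-only, which maps cleanly onto the history-table pattern common in "traditional" systems.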

I'm all up for making this pluggable for other solutions (eg. Liquid projections) 👍 .

Event Metadata

I think that it's a must-have. For sure it should be optional, but for distributed systems things like a CorrelationID are a must. I was also thinking about giving the user the ability to decide that metadata will be mapped by convention to event fields (eg. Version, Timestamp). That could be a huge relief for event-sourced aggregates. See more in my comment: #1299 (comment)

I think it would be worth checking how NEventStore handles that - as far as I know, they have quite a good implementation of metadata.

Other things that I consider

Integration with messaging systems

It's not easy for those systems to always keep the ordering of events, and it's rare for them to have "exactly once delivery" semantics. Normally consumers need to handle idempotency by themselves.

Currently Marten doesn't allow putting events in out of order (so eg. 2nd, 1st, 3rd). We'd need to change the current versioning mechanism, and projection rebuilds, to allow that.

Imho it shouldn't be super hard to deliver a first option giving the user the possibility to set the version number for imported events.
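
A minimal sketch of what that first option could look like on the import side (purely illustrative, not a proposed Marten API): accept explicit version numbers and buffer out-of-order arrivals until the sequence is contiguous again, so the store itself still only ever appends in order.

```csharp
using System.Collections.Generic;

// Hypothetical sketch: buffer imported events carrying explicit version
// numbers, releasing them only once the sequence is gapless. Not a real
// Marten API -- just the core ordering logic.
public class OrderedImportBuffer
{
    private readonly SortedDictionary<int, object> _pending =
        new SortedDictionary<int, object>();
    private int _nextVersion = 1;

    // Returns the contiguous run of events that can now be safely appended.
    public IReadOnlyList<object> Offer(int version, object @event)
    {
        _pending[version] = @event;

        var ready = new List<object>();
        while (_pending.TryGetValue(_nextVersion, out var next))
        {
            ready.Add(next);
            _pending.Remove(_nextVersion);
            _nextVersion++;
        }
        return ready;
    }
}
```

Offering event 2 first returns an empty list; offering event 1 afterwards releases both events, in order.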

We discussed some time ago that a mechanism similar to the Async Daemon might also be a potential option for that.

Integration points with other Event Stores / UI

I'd like to create the integration point as I described here: #1194 (comment) and discussed with @gregoryyoung. So, start with exposing our event store features as an Atom feed. Then maybe provide some Swagger-like simple UI (that might also be used for the document part).

Long Version for Events

#1080 - imho this is a must-have for version 4.0 if we'd like to make it high-scale.

@jeremydmiller what are your thoughts?

I probably forgot about something, so I might add something later.

@oskardudycz oskardudycz added this to the 4.0 milestone Jul 10, 2019


@jeremydmiller jeremydmiller commented Jul 24, 2019

@oskardudycz I say we just convert to long ids for 4.0. Will have to explicitly test for the migration scripts, but that was coming regardless.



@oskardudycz oskardudycz commented Jul 24, 2019

@jeremydmiller great 👍



@jeremydmiller jeremydmiller commented Jul 24, 2019

More on the async daemon

  • For rebuilds of any kind of aggregated document, we could hugely optimize perf by doing lookaheads for which projections should be deleted and hence not rebuilt at all
  • Rebuilds of per-stream aggregates could be very heavily optimized by fetching stream by stream
  • As much as possible, we want projections to expose exactly which events they consume, because that heavily optimizes data fetching
  • We should try to use more information about the projections to customize the various queues and future Rx operator combinations

Convention Based Projections Concept

The main idea here is to allow users more flexibility to do whatever it is they need to do, with less code ceremony and easier-to-author code. Drop the mandatory base classes and interfaces (they're still there, just wrapped around your code). Marten itself will use some kind of dynamic code generation (à la Jasper or Lamar from Jeremy's prior work) to create an IProjection implementation around their code.

The following shows some of the possible method signatures and the hopefully minimal set of optional attributes:

```csharp
// The existence of this event type will cause the aggregate to be deleted.
// Exposing this will allow the async daemon and projection rebuilds to be optimized.

// By marker interface:
public interface IDeletedBy<TEvent> {}

// OR maybe --->
[Publishes(typeof(Type), AggregatedBy.Stream)]
[Publishes(typeof(Type), AggregatedBy.Tenant)]
[Publishes(typeof(Type), AggregatedBy.Event)]

// the aggregation across events is done some other way,
// like maybe by "region" or "business line"
[Publishes(typeof(Type), AggregatedBy.Other)]

// or via a marker interface as an alternative:
public interface IPublishes<T>
{
    AggregatedBy AggregatedBy { get; }
}

public class MyProjection
{
    // This would be used to pluck the identity of the published
    // document out of an event object.
    // The event type could be an interface, abstract type, or individual
    // concrete event type.
    // If using this mechanism, the projection around this class would
    // be responsible for loading the existing projected document in the
    // course of updates
    public Guid/int/long/string Identity(EventType @event);

    // Alternative to Identity; this time you'd do whatever is needed to load the projected document
    public SomeAggregate Find(SomeEventType @event, IDocumentSession session);
    // or do it async, preferably
    public Task<SomeAggregate> Find(SomeEventType @event, IDocumentSession session);

    // Actually apply the event, somehow. All of these would be valid options.
    // EventType could be the specific event concrete type, a common interface,
    // or a base type. Could also use Event<T> for metadata as well
    public void Apply(EventType @event, ProjectedDocumentType projection);
    public Task Apply(EventType @event, ProjectedDocumentType projection);
    public void Apply(Event<EventType> @event, ProjectedDocumentType projection);
    public Task Apply(Event<EventType> @event, ProjectedDocumentType projection);

    public void Apply(EventType @event, ProjectedDocumentType projection, IDocumentSession session);

    // maybe allow method injection from the app's IoC container
    public void Apply(EventType @event, ProjectedDocumentType projection, IDocumentSession session, [FromServices] ISomeServiceInYourApp service);

    // Not sure this is 100% necessary, but know if the projected document should be deleted:
    public bool ShouldBeDeleted(Event<EventType> @event);
    public bool ShouldBeDeleted(EventType @event);

    // UNKNOWN --> optimize for using partial updates vs. full-blown "get the existing document and update it"
}
```



@ericgreenmix ericgreenmix commented Jul 31, 2019

Anything that would increase the performance of rebuilding projections in the Async Daemon, would be huge for us. Snapshotting and the performance optimizations for rebuilding that @jeremydmiller was mentioning would be great.

For context, we currently have >3 million events in our event store and are now storing ~25k new events per day. Our rebuild performance has noticeably gotten worse as the number of events increases.

I am definitely willing to help contribute to any of these event store improvements for v4.



@jacobpovar jacobpovar commented Aug 16, 2019

Some observations based on our usage of Marten.

  • Event metadata

  • Snapshots, as mentioned above

  • HTTP subscription API or ATOM feed.

  • Allow reading events without the need to deserialize them into CLR objects. This would be helpful in cases like streaming directly to an HTTP response. Another example would be inspecting event metadata to see if an event should be deserialized and processed further

  • Ability to store the event progression value outside the event store's mt_event_progression table. Imagine you are storing read models in a separate database; then you'd want to save the last sequence value near the read models, within the same transaction.

  • Stream archiving. One of the fears that prevents people from using Event Sourcing is that the system will become slow because of the need to process obsolete data.

  • Provide guidance on an event versioning strategy. Maybe via upcasting, being both a simple and widely used solution. This is a pretty complex topic and can be implemented on top of existing capabilities, so I'm not sure that Marten should try to cover all possible solutions. A simple demo in the docs would be a good start

  • Daemon clustering looks promising

  • As a general observation, it would be great if some of Marten's ES internals were more flexible or customizable. For example, we had to copy most of the daemon source into our codebase to modify how SQL queries are composed. We did the same for the event type mapping strategy. However, most users won't need to do something like this. It's only a wish :)
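
The upcasting guidance mentioned above could be demonstrated with a small sketch: old event payloads are lifted to the newest schema version right after deserialization, so projections only ever see the latest shape. Everything here is hypothetical; Marten has no built-in upcasting hook like this, and all type names are made up for illustration.

```csharp
// Hypothetical upcasting sketch -- not an existing Marten hook.
// Old payload versions are lifted to the newest schema at read time,
// so projections only ever deal with AccountOpenedV2.
public class AccountOpenedV1
{
    public string Owner { get; set; }
}

public class AccountOpenedV2
{
    public string Owner { get; set; }
    public string Currency { get; set; }   // new field added in V2
}

public static class AccountOpenedUpcaster
{
    // Applied after deserialization, before the event reaches any projection.
    public static AccountOpenedV2 Upcast(AccountOpenedV1 old)
    {
        return new AccountOpenedV2
        {
            Owner = old.Owner,
            Currency = "USD"   // assumed default for pre-V2 events
        };
    }
}
```

The appeal of upcasting is exactly what the bullet says: it composes on top of existing deserialization rather than requiring store-level support, so a docs demo may be all Marten needs to provide.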

I'm glad to help with some of these improvements. First one will probably be metadata.
