Joscha Götzer edited this page Jan 19, 2021 · 8 revisions

Typically, e.g. in web applications and application servers, you'll at some point reach a limit to how much processing it makes sense to do in a single request from a client.

In my experience, requirements of the form "when X happens, then Y should also happen" will accumulate over time, and if you brute-force these things into the request handler, the time it takes your server to process a single request will steadily increase.

Why is this a problem?

As I see it, there are several problems with this:

  • it makes your application suck: requests take longer and longer to process, so users will perceive the application as slow and unresponsive
  • you might get to a point where a server crash in the middle of a request leaves side effects behind, even though the DB transactions etc. were never committed - e.g. if you're calling out to web services or generating files as part of handling the request
  • your unit tests will grow and grow, constantly needing new stuff mocked in order to pass - on some level, this is hard coupling
  • there's an aesthetic problem as well: you'll most likely end up mixing all kinds of bounded contexts together that would otherwise work independently if you were being true to the domain

How can Rebus help with this?

Simple: each time you process a request, you concentrate on serving that request. All requirements of the form "when X happens, then Y should also happen" are then implemented with publish/subscribe.

For example, when recording a purchase transaction, inside the PurchaseTransaction class you might see code like this:

```csharp
public class PurchaseTransaction : FinancialTransaction
{
    // (...)

    public void Record()
    {
        // carry out some internal domain logic type of stuff here,
        // and then:

        DomainEvents.Raise(new DomainEvents.PurchaseTransactionRecorded(this));
    }
}
```

which, via Udi Dahan's immensely awesome domain events implementation, will translate into an `await bus.Publish(new Finance.Messages.PurchaseTransactionRecorded( /* fill in relevant fields here */ ))`, allowing interested parties to subscribe and react asynchronously.
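A minimal sketch of how such a domain events dispatcher might bridge in-process events to the bus - the `DomainEvents` class below and the registration at the end are assumptions for illustration, not part of Rebus itself:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical domain events dispatcher in the style of Udi Dahan's
// implementation: handlers are registered once at startup, and Raise
// invokes every matching handler synchronously, in-process.
public static class DomainEvents
{
    static readonly List<Delegate> Handlers = new List<Delegate>();

    public static void Register<T>(Action<T> handler) => Handlers.Add(handler);

    public static void Raise<T>(T domainEvent)
    {
        foreach (var handler in Handlers.OfType<Action<T>>())
        {
            handler(domainEvent);
        }
    }
}

// At startup, one registered handler could translate the in-process domain
// event into a Rebus event published to all subscribers, e.g.:
//
// DomainEvents.Register<PurchaseTransactionRecorded>(e =>
//     bus.Publish(new Finance.Messages.PurchaseTransactionRecorded(/* fill in relevant fields here */)).Wait());
```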

E.g. there might be a LetterService listening to this event, which ensures that a receipt is generated and emailed to the customer whenever a purchase transaction is recorded.
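In the LetterService endpoint, this would be an ordinary Rebus message handler implementing `IHandleMessages<TMessage>` - the receipt generation and emailing are sketched as comments, since those collaborators are not part of the example:

```csharp
using System.Threading.Tasks;
using Rebus.Handlers;

// Handles the published event in the LetterService endpoint.
public class PurchaseTransactionRecordedHandler
    : IHandleMessages<Finance.Messages.PurchaseTransactionRecorded>
{
    public async Task Handle(Finance.Messages.PurchaseTransactionRecorded message)
    {
        // generate the receipt and email it to the customer here
    }
}
```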

The LetterService endpoint would then be configured so that it can `await bus.Subscribe<Finance.Messages.PurchaseTransactionRecorded>()`, which means that it must either have access to a centralized subscription store or know which endpoint owns this particular message type.

We can declare message ownership like this:

```csharp
Configure.With(activator)
    .(...)
    .Routing(r => r.TypeBased()
                   .MapAssemblyOf<Finance.Messages.PurchaseTransactionRecorded>("the_publisher"))
    .(...)
```

thus declaring that the publisher, whose input queue is the_publisher, owns all the messages from the Finance.Messages assembly (which we assume contains the PurchaseTransactionRecorded event above).
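Putting the pieces together, a subscriber endpoint could look roughly like this - the MSMQ transport, the `letter_service` queue name, and the handler class are assumptions for the sketch, not prescribed by the text:

```csharp
using Rebus.Activation;
using Rebus.Config;

// Sketch of the LetterService subscriber endpoint: register the handler,
// map ownership of the event type to the publisher's queue, start the
// bus, and subscribe.
using (var activator = new BuiltinHandlerActivator())
{
    activator.Register(() => new PurchaseTransactionRecordedHandler());

    var bus = Configure.With(activator)
        .Transport(t => t.UseMsmq("letter_service")) // transport choice is an assumption
        .Routing(r => r.TypeBased()
            .MapAssemblyOf<Finance.Messages.PurchaseTransactionRecorded>("the_publisher"))
        .Start();

    await bus.Subscribe<Finance.Messages.PurchaseTransactionRecorded>();

    // keep the endpoint running...
}
```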

Cool!

Yes, this is very cool. It allows your server to concentrate on doing the least amount of work necessary to process its requests, while different bounded contexts can eavesdrop and start working on stuff as a consequence.

If you do this right and rely mostly on publish/subscribe to spawn off work, you'll most likely get to a place where your system is much more open for extension and closed for modification than it otherwise would have been. And that is one of the most attractive places to be in software engineering, I promise ;)
