This repository has been archived by the owner on Sep 20, 2019. It is now read-only.

[Question]: how does this package handle failed jobs when using async queue? #172

Closed
diggersworld opened this issue Jun 24, 2019 · 2 comments

Comments

@diggersworld

One thing I'm aware of with the Laravel queue is that when a job fails, the queue doesn't stop running: it moves the job to the failed jobs table and carries on with the next job. Suppose the job fails because the projector can't handle a particular event, for whatever reason.

If the application has further jobs in the queue that apply to the same event stream, and therefore build on top of the projection the failed job couldn't create or update, isn't there a problem? The queue will keep processing, and the projector will either update a projection that is missing data or, if the projection was never created in the first place, just produce a pile of additional failed jobs.

@guitarbien
Contributor

Hi, I have the same question. With event sourcing, handling events in sequence is essential; otherwise the aggregate root ends up in the wrong state, or can't be reconstituted at all. So when one event fails, I think the subsequent events shouldn't be handled either. We can actually enforce part of this ourselves in the aggregate root, with something like:

    // a method in aggregate root
    public function prepareOrder(string $timestamp): OrderAggregateRoot
    {
        if (!$this->picked) {
            throw CouldNotChangeStatus::notPickedYet();
        }

        if ($this->prepared) {
            throw CouldNotChangeStatus::alreadyPrepared();
        }

        $this->recordThat(new OrderPrepared($timestamp));

        return $this;
    }

In other words: only call recordThat() once all the guards have passed.
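For completeness, the CouldNotChangeStatus exception referenced above might look like the following. This is a hypothetical reconstruction; the class body and messages are assumptions based on the named constructors used in the snippet:

```php
<?php

// Hypothetical sketch of the CouldNotChangeStatus exception used above.
// The named-constructor pattern is common in Laravel/Spatie code; the
// messages here are illustrative assumptions.
final class CouldNotChangeStatus extends \DomainException
{
    public static function notPickedYet(): self
    {
        return new self('Cannot prepare an order that has not been picked yet.');
    }

    public static function alreadyPrepared(): self
    {
        return new self('This order has already been prepared.');
    }
}
```

Throwing from the aggregate root before recordThat() means the invalid event is never recorded, so it never reaches the queue or the projectors at all.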


I also have a related question about queued jobs. Since the sequence of events matters, does that mean we should use only one consumer per queue?

Say I'm building an e-commerce application, and I don't want one customer's order to slow down another customer's. A normal order goes through a sequence of events: OrderCreated, OrderPaid, OrderNotified, OrderPrepared, OrderDelivered. If a single consumer (worker) handles all of these events, then with multiple orders the application can't process them in parallel.
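One common way out of this (not something this package provides out of the box) is to partition by aggregate: hash each order's uuid onto one of N queues, so all events for the same order land on the same queue and stay in sequence, while different orders spread across workers. A minimal sketch, where the function name and queue naming scheme are assumptions:

```php
<?php

// Sketch: route every event of one aggregate to the same queue, so a
// single worker per queue preserves per-order ordering, while N queues
// allow N orders to be processed in parallel.
function queueForAggregate(string $aggregateUuid, int $workerCount): string
{
    // abs() guards against negative crc32 results on 32-bit platforms.
    $partition = abs(crc32($aggregateUuid)) % $workerCount;

    return "events-{$partition}";
}

// In Laravel you would dispatch the projection job onto that queue, e.g.
//   SomeProjectionJob::dispatch($storedEvent)
//       ->onQueue(queueForAggregate($aggregateUuid, 4));
// and run one worker per queue:
//   php artisan queue:work --queue=events-0
```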

Any suggestions? Thanks.

@freekmurze
Member

In v1 of this package there was extra functionality that guaranteed a projector wouldn't receive newer events after it had failed to process an older one. Because that logic was quite heavy and difficult to get right, I removed it in v2.

So currently this package doesn't handle failed jobs; you should take care of that yourself. If the order of events matters to your projector, you indeed can't just process the next job. The responsibility for handling these situations lies with your application.
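One way to "take care of it yourself" is to halt a stream when a job for it fails, so later jobs for the same aggregate skip instead of building on missing data. Laravel does call a `failed()` method on a queued job after its final attempt fails, which you can use as the hook. The sketch below is framework-free and purely illustrative; the class names, the in-memory halted-streams registry, and the skip-on-halt behaviour are all assumptions, not part of this package:

```php
<?php

// Illustrative sketch: halt an event stream on job failure so that
// subsequent jobs for the same aggregate refuse to run. In production
// the halted set would live in a database table, not a static array.
final class HaltedStreams
{
    /** @var array<string, bool> */
    private static array $halted = [];

    public static function halt(string $aggregateUuid): void
    {
        self::$halted[$aggregateUuid] = true;
    }

    public static function isHalted(string $aggregateUuid): bool
    {
        return self::$halted[$aggregateUuid] ?? false;
    }
}

final class ProjectStoredEventJob
{
    public function __construct(private string $aggregateUuid) {}

    // In a real Laravel job this would be handle(); returns false when
    // the stream was halted by an earlier failure.
    public function handle(): bool
    {
        if (HaltedStreams::isHalted($this->aggregateUuid)) {
            return false; // skip: an earlier event for this stream failed
        }

        // ... apply the stored event to the projector ...
        return true;
    }

    // Laravel invokes failed() on a queued job after its last attempt fails.
    public function failed(\Throwable $exception): void
    {
        HaltedStreams::halt($this->aggregateUuid);
    }
}
```

A halted stream would then need manual intervention: fix the projector, clear the halt flag, and replay the events from the failure point onward.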
