This repository has been archived by the owner on Apr 18, 2024. It is now read-only.

Port distributed workers to use reliable delivery work pulling #186

Merged
@patriknw merged 12 commits into 2.6 from wip-chbatey-reliable-delivery-distributed-workers on Jun 9, 2020

Conversation

@chbatey (Member) commented Jan 25, 2020

Local push of the reliable delivery PR

@patriknw (Member) left a comment:

Great to see this in action.

@@ -35,14 +32,12 @@ object Worker {
workExecutorFactory: () => Behavior[ExecuteWork] = () => WorkExecutor()): Behavior[Message] =
@patriknw (Member):

workManagerProxy can be removed; it's not used.
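
For context, the pattern the workers are being ported to looks roughly like this on the consumer side. A minimal sketch, assuming illustrative names (WorkerSketch, DoWork and the key string are not from the PR): each worker spawns a ConsumerController that registers with the receptionist under a ServiceKey, the WorkPullingProducerController discovers it there, and the worker confirms each delivery to pull the next one.

import akka.actor.typed.Behavior
import akka.actor.typed.delivery.ConsumerController
import akka.actor.typed.receptionist.ServiceKey
import akka.actor.typed.scaladsl.Behaviors

object WorkerSketch {
  // Hypothetical message type standing in for the sample's work protocol.
  final case class DoWork(workId: String)

  // The WorkPullingProducerController discovers workers through this key.
  val workerServiceKey: ServiceKey[ConsumerController.Command[DoWork]] =
    ServiceKey("worker-service")

  def apply(): Behavior[ConsumerController.Delivery[DoWork]] =
    Behaviors.setup { context =>
      // Each worker runs its own ConsumerController, which registers with
      // the receptionist and pulls work on demand.
      val consumerController =
        context.spawn(ConsumerController(workerServiceKey), "consumerController")
      consumerController ! ConsumerController.Start(context.self)

      Behaviors.receiveMessage { delivery =>
        context.log.info("Working on {}", delivery.message.workId)
        // Confirming releases demand for the next message; unconfirmed
        // work is redelivered after a restart.
        delivery.confirmTo ! ConsumerController.Confirmed
        Behaviors.same
      }
    }
}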


// the set of available workers is not event sourced as it depends on the current set of workers
var workers = Map[ActorRef[WorkerCommand], WorkerState]()
var requestNext = Queue[RequestNext[WorkerCommand]]()
@patriknw (Member):

Not sure I understand why this has to be a Queue. Isn't an Option enough?

@chbatey (Member, Author):

That was my misunderstanding of how the fan-out producer worked.
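
A sketch of why an Option suffices, with illustrative names (DoWork and the handler functions are not the PR's code): the WorkPullingProducerController issues one RequestNext at a time and only issues the next one after sendNextTo has been used, so at most one unit of demand is ever outstanding.

import akka.actor.typed.delivery.WorkPullingProducerController

object DemandSketch {
  // Hypothetical work message, standing in for WorkerCommand.
  final case class DoWork(workId: String)

  // At most one RequestNext is outstanding, so Option is sufficient.
  var requestNext: Option[WorkPullingProducerController.RequestNext[DoWork]] = None
  var pendingWork: Vector[DoWork] = Vector.empty

  def onRequestNext(next: WorkPullingProducerController.RequestNext[DoWork]): Unit =
    pendingWork match {
      case work +: remaining =>
        next.sendNextTo ! work // consume the demand immediately
        pendingWork = remaining
      case _ =>
        requestNext = Some(next) // hold the demand until work arrives
    }

  def onNewWork(work: DoWork): Unit =
    requestNext match {
      case Some(next) =>
        next.sendNextTo ! work
        requestNext = None // wait for the next RequestNext before sending more
      case None =>
        pendingWork :+= work
    }
}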


def notifyWorkers(workState: WorkState): Unit =
def tryStartWork(workState: WorkState): Effect[WorkDomainEvent, WorkState] = {
@patriknw (Member):

Shouldn't there be something for RecoveryCompleted that would resend the pendingWork to the WorkPullingProducerController?

@chbatey (Member, Author):

Yes. This will have to wait for demand from the WPPC (WorkPullingProducerController). I wonder if we should allow something similar to sharding, where we can send more than one message?

@patriknw (Member):

The buffering is convenient, but it's also easy to abuse and lose the flow control. For sharding there is not much choice, because it must be possible to send to a new entityId (one that hasn't been started yet). I think we should wait and see if we get any more feedback about it.

@chbatey (Member, Author):

It is the workInProgress that we need to resend; the pendingWork hasn't been sent yet and will be when demand is received. We can have an event that puts work in progress back into the pending queue.
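
A sketch of that approach, assuming hypothetical names (ResetWorkInProgress and WorkInProgressReset are illustrative, not the PR's types): a signal handler cannot persist events, so on RecoveryCompleted the behavior sends itself a command, and the command handler persists an event that moves workInProgress back into pendingWork, from where it is re-sent as demand arrives.

import akka.actor.typed.Behavior
import akka.actor.typed.scaladsl.Behaviors
import akka.persistence.typed.{ PersistenceId, RecoveryCompleted }
import akka.persistence.typed.scaladsl.{ Effect, EventSourcedBehavior }

object WorkManagerSketch {
  sealed trait Command
  case object ResetWorkInProgress extends Command

  sealed trait Event
  final case class WorkInProgressReset(workIds: Set[String]) extends Event

  final case class WorkState(pendingWork: Vector[String], workInProgress: Set[String])

  def apply(): Behavior[Command] =
    Behaviors.setup { context =>
      EventSourcedBehavior[Command, Event, WorkState](
        persistenceId = PersistenceId.ofUniqueId("work-manager"),
        emptyState = WorkState(Vector.empty, Set.empty),
        commandHandler = (state, command) =>
          command match {
            case ResetWorkInProgress =>
              // Move in-progress work back to pending; it is re-sent once
              // the WorkPullingProducerController signals demand again.
              Effect.persist(WorkInProgressReset(state.workInProgress))
          },
        eventHandler = (state, event) =>
          event match {
            case WorkInProgressReset(ids) =>
              WorkState(state.pendingWork ++ ids, state.workInProgress -- ids)
          }).receiveSignal {
        // Signal handlers cannot persist events, so send a command to self.
        case (state, RecoveryCompleted) if state.workInProgress.nonEmpty =>
          context.self ! ResetWorkInProgress
      }
    }
}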

@chbatey (Member, Author) commented Mar 17, 2020

I'll pick this back up now

@chbatey marked this pull request as ready for review on March 18, 2020, 09:11.
@patriknw (Member) left a comment:

Looking very good; just a few small things and then this can be merged.



@patriknw (Member) commented:

@chbatey, for the next gardening days it would be great to complete this.

@patriknw (Member) left a comment:

We forgot this again, but now it's better to update to 2.6.6 and enable SBR (the split brain resolver) before merging.

@chbatey (Member, Author) commented Jun 9, 2020

I'll do that now.
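
For reference, in Akka 2.6.6 the split brain resolver is part of the open source akka-cluster module, and enabling it is a one-line change in application.conf:

akka.cluster.downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"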

Merge commit: …ithub.com:akka/akka-samples into wip-chbatey-reliable-delivery-distributed-workers
@patriknw (Member) left a comment:

LGTM

@patriknw merged commit 9c3e0a6 into 2.6 on Jun 9, 2020.
@patriknw deleted the wip-chbatey-reliable-delivery-distributed-workers branch on June 9, 2020, 10:14.