
Autoconfigure automatic load balancing #2453

Closed

CodeDrivenMitch opened this issue Oct 24, 2022 · 3 comments · Fixed by #2750
@CodeDrivenMitch (Member)
Feature Description

Axon Server EE has an option to load-balance event processors automatically over multiple instances of the application. However, enabling it currently requires a manual action in the Dashboard. It would be great if this could be configured automatically when the event processor is launched.

As a suggestion, it could be part of the periodic call that updates the event processor status in the dashboard.

@CodeDrivenMitch added the Type: Feature label on Oct 24, 2022
@smcvb added the Priority 3: Could and Status: Under Discussion labels on Oct 26, 2022
@smcvb (Member) commented Oct 26, 2022

How would this work if two Axon Framework applications share different configurations?
Would you let it switch back and forth?
Would one of the applications take control?
Or would it be at the user's discretion?

@CodeDrivenMitch (Member, Author)

Event processors are unique by applicationId, processorName, and tokenStoreIdentifier. Configurations should only differ during a deployment in which the property is switched; at least, that is the expectation.
Axon Server could be smart enough to only switch the mode on the first reporting call of each instance.
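
A minimal sketch of that first-call-wins idea, tied to the periodic reporting call suggested above. All names here (`ProcessorKey`, `InstanceReport`, `applyStrategy`) are illustrative, not Axon Server internals:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: deduplicate load-balancing switches per reporting instance.
final class LoadBalancingModeRegistry {

    // The uniqueness key mentioned above: applicationId, processorName, tokenStoreIdentifier.
    record ProcessorKey(String applicationId, String processorName, String tokenStoreIdentifier) {
    }

    // One entry per application instance reporting on a given processor.
    record InstanceReport(String instanceId, ProcessorKey processor) {
    }

    private final Set<InstanceReport> seen = ConcurrentHashMap.newKeySet();

    // Only the first reporting call of each instance switches the mode; later calls are no-ops.
    void onStatusReport(InstanceReport report, String requestedStrategy) {
        if (seen.add(report)) {
            applyStrategy(report.processor(), requestedStrategy);
        }
    }

    private void applyStrategy(ProcessorKey key, String strategy) {
        // Placeholder for whatever Axon Server would actually do to change the strategy.
        System.out.printf("Switching %s to strategy %s%n", key, strategy);
    }
}
```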

@Rafaesp commented Feb 10, 2023

I think this feature is very important. We don't fully benefit from multiple segments being read in parallel by different JVMs if auto load balancing is not enabled, right? This is a custom feature every customer would need to implement.
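
For context on the parallelism Rafaesp refers to: a sketch using Axon Framework's standard configuration API (the processor name "my-processor" and the counts are illustrative). Multiple segments only pay off across JVMs once they are balanced over instances:

```java
import org.axonframework.config.EventProcessingConfigurer;
import org.axonframework.eventhandling.TrackingEventProcessorConfiguration;

// Sketch: give a tracking processor four segments so that, once load balancing
// spreads them over instances, multiple JVMs can read events in parallel.
public class ParallelProcessorConfig {

    public void configure(EventProcessingConfigurer configurer) {
        configurer.registerTrackingEventProcessorConfiguration(
                "my-processor",
                configuration -> TrackingEventProcessorConfiguration
                        .forParallelProcessing(4)   // four worker threads per instance
                        .andInitialSegmentsCount(4) // four segments to spread over JVMs
        );
    }
}
```

Without automatic balancing, all four segments may stay claimed by whichever instance starts first.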

@smcvb added this to the Release 4.8.0 milestone on Feb 10, 2023
@smcvb added the Priority 2: Should label and removed the Priority 3: Could label on Feb 10, 2023
@smcvb self-assigned this on Jun 13, 2023
@smcvb added the Status: In Progress label and removed the Status: Under Discussion label on Jun 13, 2023
smcvb added a commit that referenced this issue on Jun 14, 2023:
Allow property based configuration of load balancing strategies. Do so
by expanding on the AxonServerConfiguration, and using this information
in the EventProcessorControlService

#2453
smcvb added a commit that referenced this issue on Jun 14, 2023:
Add property prefix to align with Axon Framework's processor properties

#2453
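
For reference, the resolving commits describe property-based configuration under a prefix aligned with Axon Framework's processor properties. A minimal application.properties sketch, assuming the property names that shipped with this change (the processor name "my-processor" is illustrative):

```properties
# Assumed property names based on the commits above; verify against the 4.8 reference guide.
axon.axonserver.eventhandling.processors.my-processor.load-balancing-strategy=threadNumber
axon.axonserver.eventhandling.processors.my-processor.automatic-balancing=true
```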
@smcvb added the Status: Resolved label and removed the Status: In Progress label on Jun 19, 2023