Poll API, health management for recorders, backend daemon for reporting load, aggregation window management, work assignment scheduling #47
Changes from 84 commits
```diff
@@ -2,16 +2,19 @@
 import com.google.common.base.Preconditions;
 import fk.prof.backend.deployer.VerticleDeployer;
-import fk.prof.backend.deployer.impl.BackendHttpVerticleDeployer;
-import fk.prof.backend.deployer.impl.LeaderElectionParticipatorVerticleDeployer;
-import fk.prof.backend.deployer.impl.LeaderElectionWatcherVerticleDeployer;
-import fk.prof.backend.deployer.impl.LeaderHttpVerticleDeployer;
+import fk.prof.backend.deployer.impl.*;
 import fk.prof.backend.leader.election.LeaderElectedTask;
+import fk.prof.backend.model.aggregation.AggregationWindowLookupStore;
+import fk.prof.backend.model.assignment.ProcessGroupAssociationStore;
+import fk.prof.backend.model.assignment.SimultaneousWorkAssignmentCounter;
+import fk.prof.backend.model.assignment.impl.ProcessGroupAssociationStoreImpl;
+import fk.prof.backend.model.assignment.impl.SimultaneousWorkAssignmentCounterImpl;
 import fk.prof.backend.model.association.BackendAssociationStore;
 import fk.prof.backend.model.association.ProcessGroupCountBasedBackendComparator;
 import fk.prof.backend.model.association.impl.ZookeeperBasedBackendAssociationStore;
 import fk.prof.backend.model.election.impl.InMemoryLeaderStore;
-import fk.prof.backend.service.ProfileWorkService;
+import fk.prof.backend.model.aggregation.impl.AggregationWindowLookupStoreImpl;
+import fk.prof.backend.model.policy.PolicyStore;
 import io.vertx.core.*;
 import io.vertx.core.json.JsonObject;
 import io.vertx.core.logging.Logger;
@@ -23,8 +26,10 @@
 import org.apache.curator.framework.CuratorFrameworkFactory;
 import org.apache.curator.retry.ExponentialBackoffRetry;

 import java.util.ArrayList;
 import java.util.List;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;

 /**
  * TODO: Deployment process is liable to changes later
```
```diff
@@ -76,16 +81,24 @@ public Future<Void> close() {
   public Future<Void> launch() {
     Future result = Future.future();
     InMemoryLeaderStore leaderStore = new InMemoryLeaderStore(configManager.getIPAddress());
-    ProfileWorkService profileWorkService = new ProfileWorkService();
-
-    VerticleDeployer backendHttpVerticleDeployer = new BackendHttpVerticleDeployer(vertx, configManager, leaderStore, profileWorkService);
-    backendHttpVerticleDeployer.deploy().setHandler(backendDeployResult -> {
+    AggregationWindowLookupStore aggregationWindowLookupStore = new AggregationWindowLookupStoreImpl();
+    ProcessGroupAssociationStore processGroupAssociationStore = new ProcessGroupAssociationStoreImpl(configManager.getRecorderDefunctThresholdInSeconds());
+    SimultaneousWorkAssignmentCounter simultaneousWorkAssignmentCounter = new SimultaneousWorkAssignmentCounterImpl(configManager.getMaxSimultaneousProfiles());
+
+    VerticleDeployer backendHttpVerticleDeployer = new BackendHttpVerticleDeployer(vertx, configManager, leaderStore, aggregationWindowLookupStore, processGroupAssociationStore);
+    VerticleDeployer backendDaemonVerticleDeployer = new BackendDaemonVerticleDeployer(vertx, configManager, leaderStore, processGroupAssociationStore, aggregationWindowLookupStore, simultaneousWorkAssignmentCounter);
```
Review comment: We should have a validation that ensures that only one thread will ever run the backend daemon. Basically a single verticle.
Reply: Validation is done inside the deployer for the backend daemon. Fixed in commit 668398a.
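As a rough illustration of the validation discussed in this thread, a deployer could reject any configuration that asks for more than one daemon verticle instance. The class and method names below are hypothetical; the actual check lives inside the backend daemon's deployer and is not shown in this diff:

```java
// Hypothetical sketch of the single-instance guard the review asked for.
// Names are illustrative, not taken from the fk-prof codebase.
public class DaemonDeployerCheck {
    /** Throws if the requested instance count is not exactly 1. */
    public static int validateInstanceCount(int requestedInstances) {
        if (requestedInstances != 1) {
            throw new IllegalArgumentException(
                "Backend daemon verticle must run as a single instance, got: " + requestedInstances);
        }
        return requestedInstances;
    }

    public static void main(String[] args) {
        System.out.println(validateInstanceCount(1)); // prints 1
        try {
            validateInstanceCount(4);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

In Vert.x terms the same effect can also be had by pinning `DeploymentOptions.setInstances(1)` for the daemon verticle, but a hard check like the above fails fast on misconfiguration.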
```diff
+      CompositeFuture backendDeploymentFuture = CompositeFuture.all(backendHttpVerticleDeployer.deploy(), backendDaemonVerticleDeployer.deploy());
+      backendDeploymentFuture.setHandler(backendDeployResult -> {
         if (backendDeployResult.succeeded()) {
           try {
-            List<String> backendDeployments = backendDeployResult.result().list();
+            List<String> backendDeployments = backendDeployResult.result().list().stream()
+                .flatMap(fut -> ((CompositeFuture) fut).list().stream())
+                .map(deployment -> (String) deployment)
+                .collect(Collectors.toList());

             BackendAssociationStore backendAssociationStore = createBackendAssociationStore(vertx, curatorClient);
-            VerticleDeployer leaderHttpVerticleDeployer = new LeaderHttpVerticleDeployer(vertx, configManager, backendAssociationStore);
+            PolicyStore policyStore = new PolicyStore();
+            VerticleDeployer leaderHttpVerticleDeployer = new LeaderHttpVerticleDeployer(vertx, configManager, backendAssociationStore, policyStore);
             Runnable leaderElectedTask = createLeaderElectedTask(vertx, leaderHttpVerticleDeployer, backendDeployments);

             VerticleDeployer leaderElectionParticipatorVerticleDeployer = new LeaderElectionParticipatorVerticleDeployer(
```
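The `flatMap` in the change above flattens a CompositeFuture whose individual results are themselves CompositeFutures into a single list of deployment IDs. The same flattening step can be sketched with plain JDK streams (the IDs below are made up for illustration; this is not the Vert.x API itself):

```java
import java.util.List;
import java.util.stream.Collectors;

public class FlattenDeployments {
    /** Flattens per-deployer result lists into one list of deployment IDs. */
    public static List<String> flatten(List<List<String>> perDeployerResults) {
        return perDeployerResults.stream()
            .flatMap(List::stream)        // same role as the CompositeFuture flatMap in the diff
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical IDs: two verticles from the HTTP deployer, one from the daemon deployer.
        List<String> ids = flatten(List.of(
            List.of("http-deployment-1", "http-deployment-2"),
            List.of("daemon-deployment-1")));
        System.out.println(ids); // prints [http-deployment-1, http-deployment-2, daemon-deployment-1]
    }
}
```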
Review comment: For all DTOs, we should consider reusing the object for serialization. They expose a "clear" mechanism for this: one can clear the DTO and start afresh without any additional GC pressure. Something worth thinking about.
Reply: You mean reusing the builder object, right? Also, clearing the same builder object can only work where concurrent access cannot happen. The code above is a candidate, agreed.
Reply: No, I meant both the builder and the DTO, but it looks like neither is possible. The API is extremely stupid, it seems; read https://groups.google.com/forum/#!topic/protobuf/b0gS4wpjuIo and https://groups.google.com/forum/#!topic/protobuf/No9bBRh3Wp0. It seems they wanted to support some "optimizations" by being GC-unfriendly, which they didn't get right either. Dumbness rules!
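For reference, the clear-and-reuse pattern under discussion looks roughly like this with a hand-rolled mutable DTO. Real protobuf messages and builders have different ownership semantics (see the linked threads), so treat this only as a sketch of the idea, not as working protobuf code, and note the caveat from the thread: it is only safe when a single thread owns the object.

```java
// Hand-rolled sketch of "clear the DTO and start afresh" to avoid per-message
// allocations. Class and field names are invented for illustration.
public class ReusableDto {
    private String name = "";
    private int count = 0;

    public ReusableDto setName(String n) { this.name = n; return this; }
    public ReusableDto setCount(int c) { this.count = c; return this; }

    /** Resets all fields so the same instance can be populated and serialized again. */
    public ReusableDto clear() {
        this.name = "";
        this.count = 0;
        return this;
    }

    /** Stand-in for real serialization logic. */
    public String serialize() { return name + ":" + count; }

    public static void main(String[] args) {
        ReusableDto dto = new ReusableDto();
        System.out.println(dto.setName("a").setCount(1).serialize()); // prints a:1
        // Reuse the same instance: no new allocation, hence no extra GC pressure.
        System.out.println(dto.clear().setName("b").setCount(2).serialize()); // prints b:2
    }
}
```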