Better task balancing #1482

Merged
merged 73 commits on Jun 8, 2017
+5 −7
fix test bugs

darcatron committed May 1, 2017
commit 745d23ea26d21730a6391ae8bb3765733542c393
@@ -149,7 +149,7 @@ public SingularityMesosOfferScheduler(MesosConfiguration mesosConfiguration,
double score = score(offerHolder, stateCache, tasksPerOfferPerRequest, taskRequestHolder, getSlaveUsage(currentSlaveUsages, offerHolder.getOffer().getSlaveId().getValue()));

@ssalinas

ssalinas Apr 20, 2017

Member

for clarity, maybe something like 'hostScore' here? The score is for the particular slave, not necessarily about the offer

@darcatron

darcatron Apr 20, 2017

Contributor

I'm not sure about the naming here. We do look at the slave's utilization to score the offer, but we're still scoring the offer itself, since offers aren't strictly 1:1 with slaves (e.g. two offers for the same slave).

The slave utilization weight will be the same for all offers on the same slave, but the offer resources will be different per offer. So, it seems to me that we're scoring the offer in this class rather than the slave itself

LOG.trace("Offer {} with resources {} scored {} for Task {}", offerHolder.getOffer(), offerHolder.getCurrentResources(), score, taskRequestHolder.getTaskRequest().getPendingTask().getPendingTaskId().getId());
- if (score >= minScore) {
+ if (score != 0 && score >= minScore) {
// todo: can short circuit here if score is high enough (>= .9)
scorePerOffer.put(offerHolder, score);

@darcatron

darcatron Mar 30, 2017

Contributor

Thought we might want to have a value that's definitely good enough to just accept instead of continuing to evaluate

}
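
The two threads above touch on what the score actually measures (a per-slave utilization weight that is identical for every offer from the same slave, plus a per-offer resource fit) and on whether a "good enough" score should short-circuit the search, in addition to the minimum-score cutoff in the diff. The sketch below illustrates that structure only and is not Singularity's real scheduler; the class, method, and weight names (Offer, resourceFit, goodEnoughScore, the 0.5 weights) are invented for the example.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the scoring flow discussed above; names and weights are
// illustrative, not Singularity's actual API.
public class OfferScoringSketch {

  static class Offer {
    final String slaveId;
    final double availableCpus;
    final double availableMemoryMb;
    Offer(String slaveId, double cpus, double memMb) {
      this.slaveId = slaveId;
      this.availableCpus = cpus;
      this.availableMemoryMb = memMb;
    }
  }

  /** Fraction of the slave's resources already in use, keyed by slave id. */
  static double slaveUtilization(Map<String, Double> usageBySlave, String slaveId) {
    return usageBySlave.getOrDefault(slaveId, 0.0);
  }

  /** How well the offer's free resources fit the task (0..1); differs per offer. */
  static double resourceFit(Offer offer, double requiredCpus, double requiredMemoryMb) {
    if (offer.availableCpus < requiredCpus || offer.availableMemoryMb < requiredMemoryMb) {
      return 0; // the offer cannot hold the task at all
    }
    double cpuFit = requiredCpus / offer.availableCpus;
    double memFit = requiredMemoryMb / offer.availableMemoryMb;
    return (cpuFit + memFit) / 2;
  }

  public static void main(String[] args) {
    Map<String, Double> usageBySlave = new HashMap<>();
    usageBySlave.put("slave-1", 0.2);
    usageBySlave.put("slave-2", 0.9);

    List<Offer> offers = List.of(
        new Offer("slave-1", 4, 8192),   // two offers for the same slave share the
        new Offer("slave-1", 1, 1024),   // utilization weight but fit differently
        new Offer("slave-2", 8, 16384));

    double minScore = 0.01;        // below this, the offer is not considered
    double goodEnoughScore = 0.9;  // the "short circuit" idea from the TODO

    Offer best = null;
    double bestScore = 0;
    for (Offer offer : offers) {
      double utilizationWeight = 1 - slaveUtilization(usageBySlave, offer.slaveId);
      double fit = resourceFit(offer, 1, 512);
      // an offer that cannot hold the task gets a zero score regardless of its slave
      double score = fit == 0 ? 0 : 0.5 * utilizationWeight + 0.5 * fit;

      if (score == 0 || score < minScore) {
        continue; // mirrors the `score != 0 && score >= minScore` guard in the diff
      }
      if (score > bestScore) {
        best = offer;
        bestScore = score;
      }
      if (score >= goodEnoughScore) {
        break; // accept immediately instead of evaluating the remaining offers
      }
    }
    System.out.printf("best offer on %s scored %.2f%n",
        best == null ? "none" : best.slaveId, bestScore);
  }
}

Because the utilization term depends only on the slave, two offers from the same slave differ only in the resource-fit term, which is why the thread above lands on treating this as an offer score rather than a host score.
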
@@ -340,10 +340,7 @@ private long millisPastDue(SingularityTaskRequest taskRequest, long now) {
return Math.max(now - taskRequest.getPendingTask().getPendingTaskId().getNextRunAt(), 0);
}
- private SingularityTask acceptTask(SingularityOfferHolder offerHolder,
- SingularitySchedulerStateCache stateCache,
- Map<String, Map<String, Integer>> tasksPerOfferPerRequest,
- SingularityTaskRequestHolder taskRequestHolder) {
+ private SingularityTask acceptTask(SingularityOfferHolder offerHolder, SingularitySchedulerStateCache stateCache, Map<String, Map<String, Integer>> tasksPerOfferPerRequest, SingularityTaskRequestHolder taskRequestHolder) {
final SingularityTaskRequest taskRequest = taskRequestHolder.getTaskRequest();
final SingularityTask task = mesosTaskBuilder.buildTask(offerHolder.getOffer(), offerHolder.getCurrentResources(), taskRequest, taskRequestHolder.getTaskResources(), taskRequestHolder.getExecutorResources());
@@ -228,7 +228,8 @@ private boolean isSlaveAttributesMatch(SingularityOfferHolder offer, Singularity
if ((taskRequest.getRequest().getRequiredSlaveAttributes().isPresent() && !taskRequest.getRequest().getRequiredSlaveAttributes().get().isEmpty())
|| (taskRequest.getRequest().getAllowedSlaveAttributes().isPresent() && !taskRequest.getRequest().getAllowedSlaveAttributes().get().isEmpty())) {
- Map<String, String> mergedAttributes = taskRequest.getRequest().getRequiredSlaveAttributes().or(new HashMap<>());
+ Map<String, String> mergedAttributes = new HashMap<>();
+ mergedAttributes.putAll(taskRequest.getRequest().getRequiredSlaveAttributes().or(new HashMap<>()));
mergedAttributes.putAll(taskRequest.getRequest().getAllowedSlaveAttributes().or(new HashMap<>()));
if (!slaveAndRackHelper.hasRequiredAttributes(mergedAttributes, reservedSlaveAttributes)) {
LOG.trace("Slaves with attributes {} are reserved for matching tasks. Task with attributes {} does not match", reservedSlaveAttributes, taskRequest.getRequest().getRequiredSlaveAttributes().or(Collections.emptyMap()));
@@ -197,7 +197,7 @@ public void teardown() throws Exception {
@Before
public final void setupDriver() throws Exception {
- configuration.setMinOfferScore(0.01); // disable task balancing
+ configuration.setMinOfferScore(0); // disable task balancing
driver = driverSupplier.get().get();
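
For the test change above, a zero minimum offer score combined with the new `score != 0 && score >= minScore` guard means any offer with a non-zero score is eligible, which is what the inline comment calls disabling task balancing. A tiny, self-contained sketch of that setup idea follows; FakeConfiguration and the test class are invented stand-ins, not Singularity's actual test harness.

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Hypothetical stand-ins for the real Singularity test configuration.
public class OfferScoreConfigTest {
  static class FakeConfiguration {
    private double minOfferScore;
    void setMinOfferScore(double score) { this.minOfferScore = score; }
    double getMinOfferScore() { return minOfferScore; }
  }

  private final FakeConfiguration configuration = new FakeConfiguration();

  @Before
  public void setup() {
    // Mirrors the diff: a zero minimum means any non-zero-scored offer passes,
    // effectively disabling task balancing for these tests.
    configuration.setMinOfferScore(0);
  }

  @Test
  public void anyNonZeroScorePassesWhenBalancingDisabled() {
    double score = 0.05; // an arbitrarily low but non-zero score
    assertTrue(score != 0 && score >= configuration.getMinOfferScore());
  }
}
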