[SPARK-13232][YARN] Fix executor node label #11129
Conversation
Force-pushed from 190230b to 28f1491.
Jenkins, test this please
Test build #51369 has finished for PR 11129 at commit.
constructor.newInstance(resource, nodes, racks, RM_REQUEST_PRIORITY, true: java.lang.Boolean,
-  labelExpression.orNull)
+  labelExp)
From my understanding, in your current implementation, if nodes or racks is non-empty, the label expression will not take effect even when it is explicitly set through configuration. IMO I would set nodes and racks to null when a label expression is configured; otherwise users will be confused about why the label expression they explicitly set does not take effect.
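The reviewer's suggestion above could be sketched roughly as follows. This is a minimal illustration with plain types; `RequestSpec`, its fields, and `normalize` are hypothetical stand-ins, not Spark's or YARN's actual API:

```java
import java.util.List;

public class LabelOverLocality {
    // Hypothetical container request spec, not the real AMRMClient type.
    static class RequestSpec {
        final List<String> nodes;
        final List<String> racks;
        final String labelExpression; // may be null

        RequestSpec(List<String> nodes, List<String> racks, String labelExpression) {
            this.nodes = nodes;
            this.racks = racks;
            this.labelExpression = labelExpression;
        }
    }

    // If a label expression is configured, drop the locality hints (set
    // nodes/racks to null) so the request is label-based only; otherwise
    // keep the locality preferences as-is.
    static RequestSpec normalize(RequestSpec req) {
        if (req.labelExpression != null && !req.labelExpression.trim().isEmpty()) {
            return new RequestSpec(null, null, req.labelExpression);
        }
        return req;
    }

    public static void main(String[] args) {
        RequestSpec r = normalize(
            new RequestSpec(List.of("host1"), List.of("/rack1"), "gpu"));
        System.out.println(r.nodes == null && r.racks == null); // prints "true"
    }
}
```

The trade-off the thread goes on to discuss is exactly this: the label wins, but the locality information is silently thrown away.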
Yeah, I had this idea before, but IMO completely disregarding data locality may cause a lot of network overhead.
Normally, if users set this label expression configuration, they obviously want label-based scheduling. But your implementation silently disables it, which will confuse users. Also, since label-based scheduling cannot work together with locality preferences on the YARN side (I think that is intentional), it would be better to ignore the locality information here.
Any further updates on it? CC @sryza about this.
CC @sryza or maybe @steveloughran or @vanzin. I don't speak YARN.
Yarn and labels, joy.
Yes, I have done this. No, I would not recommend it. I only did it for anti-affinity placement, where we needed a guarantee that there'd be only one instance per node, labelled or not (i.e. we wanted containers to be away from each other, rather than near the data). The request validation logic rejected it, I think; it failed for a while (SLIDER-1051) until I turned off some of the checks. There's one more corner case: the app doesn't ask for labels, but the queue is bonded to a label. I actually don't know what happens to located requests there.
@steveloughran do you think this is a good or not-good change overall?
It's actually being fixed right now in Hadoop 2.8, which will take a while to surface. Looking at the patch, all Bibin is doing is cutting that validation check out from the container request submission...it's up to the scheduler how it handles things at that point. Presumably (hopefully) it does the right thing. If the patch goes into Spark, then eventually there may be benefits in pulling it. If it goes in now, it stops things breaking today. This is a tough call...nobody wants to add another switch for this, do they? (Now, maximally devious would be to catch the exception and downgrade, but that's both hard to test and inevitably brittle in some way.)
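The "catch the exception and downgrade" idea could look roughly like this. Everything here is a stand-in: `submit` fakes the ResourceManager's validation (quoted later in this thread) with plain strings, and the exception class is a local substitute for YARN's `InvalidLabelResourceRequestException`, not the real API:

```java
public class DowngradeSketch {
    // Local stand-in for YARN's InvalidLabelResourceRequestException.
    static class InvalidLabelRequestException extends RuntimeException {}

    static final String ANY = "*";

    // Fake RM-side check: a locality-specific resource name combined
    // with a non-empty label expression is rejected.
    static void submit(String resourceName, String labelExp) {
        if (!ANY.equals(resourceName)
                && labelExp != null && !labelExp.trim().isEmpty()) {
            throw new InvalidLabelRequestException();
        }
    }

    // Try locality + label first; if the RM rejects the combination,
    // downgrade to a label-only (resourceName = ANY) request. Returns
    // the resource name that was ultimately accepted.
    static String submitWithDowngrade(String resourceName, String labelExp) {
        try {
            submit(resourceName, labelExp);
            return resourceName;
        } catch (InvalidLabelRequestException e) {
            submit(ANY, labelExp); // retry without locality
            return ANY;
        }
    }
}
```

As Steve notes, the brittle part in practice is that the real failure arrives asynchronously from the RM, which is much harder to catch and retry than this synchronous sketch suggests.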
Maybe we could do this on the Spark side; it is a little complicated, but doable. Yes, it is hard to test label-related things on the Spark side, but at least we could manually verify it locally.
It's been a while since discussion died down here; I don't like the current version because it will break things with a fixed YARN. So the options are either to implement Steve's suggestion, or to not do anything and tell users not to use labels in broken versions of YARN. I'm kinda leaning towards the latter but don't really feel strongly.
Hi @AtkinsChang, is this still active?
But in Hadoop 2.8.2 this problem still exists when requesting a container with both a node label and locality (rack/node). The ResourceManager also checks it; the relevant code path is RMAppManager#validateAndCreateResourceRequest -> SchedulerUtils#validateResourceRequest, in org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils:
private static void validateResourceRequest(ResourceRequest resReq,
Resource maximumResource, QueueInfo queueInfo, RMContext rmContext)
throws InvalidResourceRequestException {
.......
String labelExp = resReq.getNodeLabelExpression();
// we don't allow specify label expression other than resourceName=ANY now
if (!ResourceRequest.ANY.equals(resReq.getResourceName())
&& labelExp != null && !labelExp.trim().isEmpty()) {
throw new InvalidLabelResourceRequestException(
"Invalid resource request, queue=" + queueInfo.getQueueName()
+ " specified node label expression in a "
+ "resource request has resource name = "
+ resReq.getResourceName());
}
// we don't allow specify label expression with more than one node labels now
if (labelExp != null && labelExp.contains("&&")) {
throw new InvalidLabelResourceRequestException(
"Invailid resource request, queue=" + queueInfo.getQueueName()
+ " specified more than one node label "
+ "in a node label expression, node label expression = "
+ labelExp);
}
......
}
}
It will cause the Spark ApplicationMaster to fail with InvalidLabelResourceRequestException, so we still need this patch.
Specifying a node label for executors causes Spark not to work on YARN, because YARN does not support container requests that carry both a label and locality (racks/nodes).
This patch adds a test to YarnAllocatorSuite to reproduce the situation, and changes YarnAllocator#createContainerRequest to apply the node label only to locality-free requests.
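The fix described in the summary can be modeled roughly like this: the node label rides only on the locality-free (resourceName = ANY) request, while node- and rack-level requests go out without a label, so the ResourceManager's validation quoted above passes. The `Request` type and `createRequests` helper are illustrative, not Spark's actual YarnAllocator code:

```java
import java.util.ArrayList;
import java.util.List;

public class LabelOnAnyOnly {
    static final String ANY = "*";

    // Hypothetical simplified container request: a resource name
    // (host, rack, or ANY) plus an optional node label expression.
    static class Request {
        final String resourceName;
        final String labelExp; // null when no label is attached
        Request(String resourceName, String labelExp) {
            this.resourceName = resourceName;
            this.labelExp = labelExp;
        }
    }

    // Build one request per preferred node plus one ANY request;
    // the label is attached only to the ANY (locality-free) request.
    static List<Request> createRequests(List<String> nodes, String labelExp) {
        List<Request> out = new ArrayList<>();
        for (String node : nodes) {
            out.add(new Request(node, null)); // locality request: no label
        }
        out.add(new Request(ANY, labelExp)); // locality-free: label allowed
        return out;
    }
}
```

Under this scheme the RM check ("no label expression unless resourceName = ANY") is satisfied by construction, at the cost that the label constraint is only expressed on the fallback ANY request.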