
[CARBONDATA-3132] Correct the task distribution in case of compaction when the actual block nodes and active nodes are different #2953

Closed
wants to merge 2 commits

Conversation

akashrn5
Contributor

Why This PR?

There is an unequal distribution of tasks during compaction.
For example: when the load is done with replication factor 2 and all nodes are active, and during compaction one node is down (i.e. it is not an active executor), the task distribution should still spread the tasks equally among all the active executors instead of giving more tasks to one executor and fewer to another. But sometimes this unequal distribution happens and the compaction becomes slow.

Solution

Currently we do not fetch the active executors before the node block mapping and pass the list of active executors as null, which sometimes leads to the problem above. So fetch the active executors and pass them to the node block mapping logic, which will then distribute the blocks equally.
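
As a rough sketch of that flow (a minimal illustration based on the Scala snippet further down in this conversation; the names taskInfoList and sparkContext and the exact package paths are assumed from the surrounding compaction code, not shown in full in this PR excerpt):

import scala.collection.JavaConverters._

import org.apache.spark.SparkContext

import org.apache.carbondata.core.datastore.block.Distributable
import org.apache.carbondata.processing.util.CarbonLoaderUtil
import org.apache.carbondata.spark.util.DistributionUtil

// Sketch: first ask Spark for the currently active executor nodes, then pass
// that list to the node-block mapping instead of passing null.
def mapBlocksToActiveNodes(taskInfoList: java.util.List[Distributable],
    sparkContext: SparkContext): java.util.Map[String, java.util.List[Distributable]] = {
  // Make sure the requested executors are up and collect the active node names.
  val activeNodes = DistributionUtil
    .ensureExecutorsAndGetNodeList(taskInfoList.asScala, sparkContext)
  // -1 lets the mapping derive the node count from block location information;
  // the active-node list keeps tasks off executors that are currently down.
  CarbonLoaderUtil.nodeBlockMapping(taskInfoList, -1, activeNodes.asJava)
}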

Be sure to do all of the following checklist to help us incorporate
your contribution quickly and easily:

  • Any interfaces changed?
    NA

  • Any backward compatibility impacted?
    NA

  • Document update required?
    NA

  • Testing done
    Tested using 3-node and 6-node clusters.

  • For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
    NA

@@ -205,7 +206,7 @@ public void testSummaryOutputAll() {
     expectedOutput = buildLines(
         "## version Details",
         "written_by Version ",
-        "TestUtil 1.6.0-SNAPSHOT ");
+        "TestUtil " + CarbonVersionConstants.CARBONDATA_VERSION + " ");
Contributor Author

removed the hardcoded value here

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1544/

@@ -536,7 +537,8 @@ public static Dictionary getDictionary(AbsoluteTableIdentifier absoluteTableIden
    */
   public static Map<String, List<Distributable>> nodeBlockMapping(List<Distributable> blockInfos) {
     // -1 if number of nodes has to be decided based on block location information
-    return nodeBlockMapping(blockInfos, -1);
+    return nodeBlockMapping(blockInfos, -1, null,
Contributor

Should the choice of BlockAssignmentStrategy be decided by CarbonProperties.getInstance().isLoadSkewedDataOptimizationEnabled()?

Contributor Author

yes, in the compaction case BlockAssignmentStrategy.BLOCK_NUM_FIRST is the default, same as before
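
For context, a hedged sketch of how that choice could be tied to the property (BLOCK_SIZE_FIRST and the enum's location inside CarbonLoaderUtil are assumptions; only BLOCK_NUM_FIRST and the property check are named in this thread):

import org.apache.carbondata.core.util.CarbonProperties
import org.apache.carbondata.processing.util.CarbonLoaderUtil.BlockAssignmentStrategy

// Sketch only: use the skew-optimization property to pick the strategy for data
// loading, and keep BLOCK_NUM_FIRST (the previous behaviour) for compaction.
// BLOCK_SIZE_FIRST and the enum's package are assumptions, not taken from this PR.
def chooseStrategy(isCompaction: Boolean): BlockAssignmentStrategy = {
  if (!isCompaction &&
      CarbonProperties.getInstance().isLoadSkewedDataOptimizationEnabled()) {
    BlockAssignmentStrategy.BLOCK_SIZE_FIRST
  } else {
    BlockAssignmentStrategy.BLOCK_NUM_FIRST
  }
}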

Contributor Author

here I just did the refactoring

@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1756/

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1546/

@CarbonDataQA

Build Success with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9806/

@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1758/

val activeNodes = DistributionUtil
  .ensureExecutorsAndGetNodeList(taskInfoList.asScala, sparkContext)

val nodeBlockMap = CarbonLoaderUtil.nodeBlockMapping(taskInfoList, -1, activeNodes.asJava)
Contributor

The code below is redundant, please remove it

Contributor Author

done

@CarbonDataQA

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1554/

@CarbonDataQA

Build Failed with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9813/

@CarbonDataQA

Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1765/

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1555/

@CarbonDataQA

Build Success with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9814/

@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1766/

@ravipesala
Contributor

LGTM

@asfgit closed this in eeeaf50 Nov 28, 2018
asfgit pushed a commit that referenced this pull request Nov 30, 2018
…when the actual block nodes and active nodes are different

This closes #2953