
[CARBONDATA-2817] Thread Leak in Update and in No Sort flow #2606

Closed
BJangir wants to merge 1 commit from the update_threadLeak branch.

Conversation

@BJangir (Contributor) commented Aug 2, 2018

Issue :- After the Update command finishes, the data loading threads are not stopped.

Root Cause :-

  1. In the Update flow, DataLoadExecutor's close() method is never called, so its executor services are not shut down.
  2. Exceptions are not handled properly in the AFDW class's closeExecutorService(), which causes a thread leak if the job is killed from the Spark UI.

Solution :-

  1. Add a task completion listener and call DataLoadExecutor's close() method from it (see the sketch after this list).
  2. Handle exceptions in closeExecutorService() so that all writer step threads can be closed.
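
A minimal sketch of point 1, assuming the load is driven by a per-task DataLoadExecutor and that Spark's TaskContext is available inside the task; the class name, helper method, and the DataLoadExecutor import path are illustrative, not the merged change:

```java
import org.apache.spark.TaskContext;
import org.apache.spark.util.TaskCompletionListener;

import org.apache.carbondata.processing.loading.DataLoadExecutor;  // package assumed

public final class UpdateTaskCleanupSketch {

  // Called at the start of the per-task load triggered by the Update command.
  static DataLoadExecutor createExecutorWithCleanup() {
    final DataLoadExecutor dataLoadExecutor = new DataLoadExecutor();
    // The listener runs when the task succeeds, fails, or is killed from the
    // Spark UI, so the loading threads started by the executor always stop.
    TaskContext.get().addTaskCompletionListener(new TaskCompletionListener() {
      @Override
      public void onTaskCompletion(TaskContext context) {
        dataLoadExecutor.close();
      }
    });
    return dataLoadExecutor;
  }
}
```

With a listener like this registered, the executor is closed even when the update job is aborted mid-load, which is exactly the leak described above.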

Be sure to do all of the following checklist to help us incorporate
your contribution quickly and easily:

  • Any interfaces changed?
    NO
  • Any backward compatibility impacted?
    NO
  • Document update required?
    NO
  • Testing done
Verified manually in a cluster.
  • For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
    NA

@CarbonDataQA

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7748/

@@ -80,11 +82,16 @@
  private Map<String, LocalDictionaryGenerator> localDictionaryGeneratorMap;
  private List<CarbonFactHandler> carbonFactHandlers;
  ExecutorService executorService = null;
Review comment (Contributor):

make it private
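
Relatedly, solution item 2 in the description targets closeExecutorService() in this writer step. A minimal sketch of that defensive shutdown, assuming the executorService and carbonFactHandlers fields shown in the diff above; the CarbonFactHandler import path, the timeout, and the overall shape are assumptions rather than the merged code:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.carbondata.processing.store.CarbonFactHandler;  // package assumed

abstract class WriterStepShutdownSketch {

  private ExecutorService executorService;
  private List<CarbonFactHandler> carbonFactHandlers;

  void closeExecutorService() {
    try {
      if (executorService != null) {
        executorService.shutdown();
        // If the job is killed from the Spark UI, this wait is interrupted;
        // catching it keeps the shutdown going instead of bailing out early.
        executorService.awaitTermination(1, TimeUnit.DAYS);
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    if (carbonFactHandlers != null) {
      for (CarbonFactHandler handler : carbonFactHandlers) {
        try {
          // Close every handler even if an earlier one failed, so no writer
          // thread is left running after the task ends.
          handler.closeHandler();
        } catch (Exception e) {
          // Log and continue; one failure must not leak the remaining threads.
        }
      }
    }
  }
}
```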

@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6473/

@BJangir force-pushed the update_threadLeak branch 2 times, most recently from 93e4e9d to fe7b343 on August 3, 2018 at 12:03
@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7772/

@CarbonDataQA

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6496/

@ravipesala (Contributor)

SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6137/

@ravipesala (Contributor)

SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6153/

@ravipesala (Contributor)

SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6155/

@brijoobopanna (Contributor)

retest this please

@CarbonDataQA

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7785/

@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6510/

  }

  @Override protected String getStepName() {
    return "Data Writer";
  }

  private void finish(CarbonFactHandler dataHandler, int iteratorIndex) {
    CarbonDataWriterException exception = null;
Review comment (Contributor):

Please also handle the closeHandler method, as it can throw an exception too.

@BJangir (Author) replied:

Fixed
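
To make the requested pattern concrete, here is a sketch of a finish() that tolerates a failure in either dataHandler.finish() or dataHandler.closeHandler() and rethrows only after both have been attempted; the exception constructor and the omitted iteratorIndex bookkeeping are assumptions, not the merged change:

```java
private void finish(CarbonFactHandler dataHandler, int iteratorIndex) {
  CarbonDataWriterException exception = null;
  try {
    dataHandler.finish();
  } catch (Exception e) {
    // Remember the first failure but keep going so the handler still closes.
    exception = new CarbonDataWriterException("Failed while finishing the data handler", e);
  }
  try {
    dataHandler.closeHandler();
  } catch (Exception e) {
    if (exception == null) {
      exception = new CarbonDataWriterException("Failed while closing the data handler", e);
    }
  }
  // iteratorIndex bookkeeping from the real step is omitted in this sketch.
  if (exception != null) {
    throw exception;
  }
}
```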

@kumarvishal09 (Contributor)

@BJangir Please also handle the thread-leak scenario for the batch sort writer (DataWriterBatchProcessorStepImpl.java) in case of any exception.

@ravipesala (Contributor)

SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6193/

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7817/

@CarbonDataQA

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6541/

@brijoobopanna (Contributor)

retest this please

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7830/

@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6555/

        if (!rowsNotExist) {
          finish(dataHandler, iteratorIndex);
        }
      } catch (CarbonDataWriterException e) {
Review comment (Contributor):

remove this catch block

@ravipesala (Contributor)

SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6209/

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7835/

@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6560/

@kumarvishal09 (Contributor)

LGTM

asfgit closed this in 7158d52 on Aug 8, 2018
asfgit pushed a commit that referenced this pull request on Aug 9, 2018; the commit message repeats the PR description above and ends with:

This closes #2606
5 participants