
[CARBONDATA-3360] Fix NullPointerException in delete and clean files operation #3191

Closed
wants to merge 2 commits

Conversation

akashrn5
Contributor

Problem:

When a delete operation fails because the HDFS quota is exceeded or the disk is full, a tableUpdateStatus.write file is left behind in the store. If a clean files operation runs after that, we try to assign null to a primitive long, which throws a runtime exception, and the .write file is never deleted because we treat it as an invalid file.

Solution:

If a .write file is present, clean files does not fail. Instead, we check whether the tableUpdateStatus.write file is older than the maximum query timeout and, if so, delete such .write files in any subsequent clean files operation.
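
For illustration, here is a minimal, self-contained sketch of the underlying bug; the class and map names are assumptions for the example, not the actual CarbonData code. Auto-unboxing a null Long into a primitive long is what throws the NullPointerException during clean files:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the reported bug; identifiers are illustrative, not CarbonData's.
    public class NullUnboxingSketch {
      public static void main(String[] args) {
        Map<String, Long> validFileTimestamps = new HashMap<>();

        // The leftover tableUpdateStatus.write file has no entry in the map,
        // so get() returns null.
        Long fileTimestamp = validFileTimestamps.get("tableUpdateStatus.write");

        if (fileTimestamp == null) {
          // Before the fix the code effectively did `long ts = fileTimestamp;`,
          // which auto-unboxes null and throws a NullPointerException.
          // The fix routes such .write files into a timeout-based cleanup instead.
          System.out.println("null timestamp for .write file; handle separately");
        }
      }
    }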

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

  • Any interfaces changed?
    NA

  • Any backward compatibility impacted?
    NA

  • Document update required?
    NA

  • Testing done
    Tested in a three-node cluster and verified this scenario.

  • For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2988/

@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3222/

@CarbonDataQA

Build Success with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/11252/

@akashrn5
Contributor Author

@ravipesala please review

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2992/

@CarbonDataQA

Build Success with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/11256/

@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3226/

@QiangCai
Contributor

Better to avoid creating this invalid file; find the root cause and fix it.

@akashrn5
Contributor Author

@QiangCai the root cause is that the failure happens when there is no space left in HDFS to write the updatetablestatus file during the delete operation. Basically, we clear all invalid files either during clean files or, in the case of a load, before the load starts. So I think that instead of deleting the file at the moment the operation writing it fails, we can do it in the clean files operation. What is your opinion on this?

if (fileTimestamp == null) {
  String tableUpdateStatusFilename = invalidFile.getName();
  if (tableUpdateStatusFilename.endsWith(".write")) {
    long tableUpdateStatusFileTimeStamp = Long.parseLong(tableUpdateStatusFilename
Contributor

Please use org.apache.carbondata.core.util.path.CarbonTablePath.DataFileUtil#getTimeStampFromFileName to get the timestamp from the file name.

Contributor Author

done
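
For reference, a self-contained sketch of what a timestamp-from-file-name utility like the one suggested above does; the exact file-name layout (timestamp between the last '-' and the extension) is an assumption made for the example, not the verified DataFileUtil behavior:

    // Illustration only; not the actual DataFileUtil implementation.
    public class TimestampFromNameSketch {
      // Assumes names like "tableUpdateStatus-1556696000000.write".
      static long getTimeStampFromFileName(String fileName) {
        int start = fileName.lastIndexOf('-') + 1;
        int end = fileName.indexOf('.', start);
        return Long.parseLong(fileName.substring(start, end));
      }

      public static void main(String[] args) {
        // Prints 1556696000000.
        System.out.println(getTimeStampFromFileName("tableUpdateStatus-1556696000000.write"));
      }
    }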

        invalidFile.getName().indexOf(".")));
    if (isMaxQueryTimeoutExceeded(tableUpdateStatusFileTimeStamp)) {
      try {
        LOGGER.info("deleting the invalid .write file : " + invalidFile.getName());
Contributor

Please check whether the method below can be used, because the code is the same: org.apache.carbondata.core.mutate.CarbonUpdateUtil#compareTimestampsAndDelete

Contributor Author

done
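
Similarly, a minimal sketch of the compare-timestamps-and-delete pattern the reviewer points at; the real CarbonUpdateUtil method operates on CarbonFile and reads the timeout from CarbonData's configuration, so the java.io.File type and the one-hour timeout below are assumptions for the example:

    import java.io.File;

    // Illustration of the pattern only; not CarbonUpdateUtil's actual code.
    public class CompareAndDeleteSketch {
      // Assumed timeout; the real value comes from CarbonData's configuration.
      static final long MAX_QUERY_TIMEOUT_MILLIS = 60L * 60L * 1000L;

      // Delete the invalid file only once it is older than the query timeout,
      // so queries that may still be reading it are not broken.
      static void compareTimestampAndDelete(File invalidFile, long fileTimestamp) {
        if (System.currentTimeMillis() - fileTimestamp > MAX_QUERY_TIMEOUT_MILLIS) {
          if (invalidFile.delete()) {
            System.out.println("deleted invalid file: " + invalidFile.getName());
          }
        }
      }

      public static void main(String[] args) {
        long staleTimestamp = System.currentTimeMillis() - 2 * MAX_QUERY_TIMEOUT_MILLIS;
        compareTimestampAndDelete(new File("tableUpdateStatus.write"), staleTimestamp);
      }
    }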

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3004/

@CarbonDataQA

Build Success with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/11268/

@akashrn5
Contributor Author

akashrn5 commented May 1, 2019

@QiangCai please review and merge

@akashrn5
Contributor Author

akashrn5 commented May 1, 2019

retest this please

@CarbonDataQA

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3048/

@CarbonDataQA

Build Failed with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/11312/

@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3246/

@akashrn5
Contributor Author

akashrn5 commented May 2, 2019

retest this please

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3053/

@CarbonDataQA

Build Success with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/11317/

@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3252/

@ravipesala
Contributor

LGTM

@akashrn5
Contributor Author

akashrn5 commented May 7, 2019

@ravipesala can you please check and merge

@CarbonDataQA

Build Failed with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/11391/

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3126/

@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3325/

@asfgit closed this in 3268a45 on May 8, 2019
asfgit pushed a commit that referenced this pull request on May 16, 2019

[CARBONDATA-3360] Fix NullPointerException in delete and clean files operation

This closes #3191
qiuchenjian pushed a commit to qiuchenjian/carbondata that referenced this pull request on Jun 14, 2019

[CARBONDATA-3360] Fix NullPointerException in delete and clean files operation

This closes apache#3191