[CARBONDATA-3360] fix NullPointerException in delete and clean files operation #3191
Conversation
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2988/
Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3222/
Build Success with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/11252/
@ravipesala please review
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/2992/
Build Success with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/11256/
Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3226/
Better to avoid creating this invalid file in the first place; find the root cause and fix it.
@QiangCai, the root cause is that this failure happens when there is no space left in HDFS to write the updatetablestatus file during the delete operation. Basically, we clear all the invalid files either during the clean files operation or, in the case of a load, before starting the load. So instead of deleting the file at the moment the write fails, I think we can do it in the clean files operation. What is your opinion on this?
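For context on the NullPointerException itself: in Java, assigning a null Long to a primitive long auto-unboxes and throws at runtime. A minimal, self-contained illustration (the map and file name are hypothetical stand-ins for the lookup in the clean files flow, not code from this PR):

    import java.util.HashMap;
    import java.util.Map;

    public class UnboxingNpeDemo {
      public static void main(String[] args) {
        // Hypothetical stand-in for the timestamp lookup during clean files:
        // a leftover "tableUpdateStatus.write" file has no valid entry, so get() returns null.
        Map<String, Long> validTimestamps = new HashMap<>();
        Long fileTimestamp = validTimestamps.get("tableUpdateStatus.write");

        // Auto-unboxing null into a primitive long throws NullPointerException here,
        // which is why the fix below adds an explicit null check before parsing.
        long timestamp = fileTimestamp;
        System.out.println(timestamp);
      }
    }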
    if (fileTimestamp == null) {
      String tableUpdateStatusFilename = invalidFile.getName();
      if (tableUpdateStatusFilename.endsWith(".write")) {
        long tableUpdateStatusFileTimeStamp = Long.parseLong(tableUpdateStatusFilename
Please use org.apache.carbondata.core.util.path.CarbonTablePath.DataFileUtil#getTimeStampFromFileName to get the timestamp from the file name.
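For illustration, a sketch of what the suggested refactor of the snippet above might look like; it assumes getTimeStampFromFileName returns the timestamp portion of the name as a String (so the caller still parses it to long), which is an assumption rather than something confirmed in this thread:

    // Sketch only: replaces the manual substring/indexOf parsing shown above.
    String tableUpdateStatusFilename = invalidFile.getName();
    if (tableUpdateStatusFilename.endsWith(".write")) {
      // Assumption: the utility extracts the trailing timestamp token from the file name.
      long tableUpdateStatusFileTimeStamp = Long.parseLong(
          CarbonTablePath.DataFileUtil.getTimeStampFromFileName(tableUpdateStatusFilename));
    }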
done
            invalidFile.getName().indexOf(".")));
    if (isMaxQueryTimeoutExceeded(tableUpdateStatusFileTimeStamp)) {
      try {
        LOGGER.info("deleting the invalid .write file : " + invalidFile.getName());
Please check if the method below can be used, because the code is the same:
org.apache.carbondata.core.mutate.CarbonUpdateUtil#compareTimestampsAndDelete
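As a rough sketch of that reuse (the parameter list shown here is an assumption for illustration only, not taken from this PR or verified against the CarbonUpdateUtil source):

    // Sketch only: delegate the compare-timestamp-and-delete logic instead of duplicating it.
    // Assumed parameters: the invalid file, whether to force deletion, and whether the
    // file being handled is an update status file.
    CarbonUpdateUtil.compareTimestampsAndDelete(invalidFile, forceDelete, true);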
done
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3004/
Build Success with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/11268/
@QiangCai please review and merge
retest this please
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3048/
Build Failed with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/11312/
Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3246/
retest this please
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3053/
Build Success with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/11317/
Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3252/
LGTM
@ravipesala can you please check and merge
Build Failed with Spark 2.3.2, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/11391/
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/3126/
Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/3325/
Problem:
When a delete operation fails because the HDFS quota is exceeded or the disk is full, a tableUpdateStatus.write file is left behind in the store. If a clean files operation runs after that, a null value is assigned to a primitive long, which throws a runtime exception, and the .write file is never deleted because it is considered an invalid file.
Solution:
If a .write file is present, do not fail the clean files operation. Instead, check the max query timeout against the tableUpdateStatus.write file's timestamp and delete such .write files in any subsequent clean files operation.
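A minimal sketch of this solution, assembled from the diff excerpts in the review thread above; isMaxQueryTimeoutExceeded, LOGGER, and the invalidFile handle are taken from those excerpts, while the delete() call and the use of getTimeStampFromFileName are assumptions for illustration:

    // Sketch: during clean files, tolerate a leftover tableUpdateStatus.write file
    // instead of unboxing a null timestamp, and delete it once it is old enough.
    if (fileTimestamp == null) {
      String tableUpdateStatusFilename = invalidFile.getName();
      if (tableUpdateStatusFilename.endsWith(".write")) {
        // Timestamp embedded in the file name (the review thread suggests
        // CarbonTablePath.DataFileUtil#getTimeStampFromFileName for this).
        long tableUpdateStatusFileTimeStamp = Long.parseLong(
            CarbonTablePath.DataFileUtil.getTimeStampFromFileName(tableUpdateStatusFilename));
        if (isMaxQueryTimeoutExceeded(tableUpdateStatusFileTimeStamp)) {
          LOGGER.info("deleting the invalid .write file : " + tableUpdateStatusFilename);
          invalidFile.delete();  // assumed deletion call; the stale .write file is safe to remove
        }
        // Either way, the clean files operation proceeds instead of failing.
      }
    }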
Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:
Any interfaces changed?
NA
Any backward compatibility impacted?
NA
Document update required?
NA
Testing done
Tested in a three-node cluster and verified this scenario.
Please provide details on
- Whether new unit test cases have been added or why no new tests are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance test report.
- Any additional information to help reviewers in testing this change.
For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.