
[CARBONDATA-2016] Exception displays while executing compaction with alter query #1839

Closed
wants to merge 1 commit

Conversation

anubhav100
Contributor

Root Cause
When the alter table command is used to add a column with a default value, the value is stored as a Long object. This is wrongly written in RestructureUtil: we should get the value as the same type as the data type of the ColumnSchema. Earlier, in the master branch, RestructureUtil always returned a Long object when the data type was long, short, or int, which is wrong. For the same reason, compaction failed when it was applied after an alter table add columns command with a default value, because SortDataRows then saw a mismatch between the data type and its corresponding value.
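
For context, a minimal reproduction sketch of the failing scenario, with illustrative table and column names (minor compaction needs several segments, hence the repeated loads; sql comes from the QueryTest base class):

// Illustrative reproduction, not the test from this PR: add an int column
// with a default value, then trigger minor compaction.
sql("create table repro (id int, name string) stored by 'carbondata'")
(1 to 4).foreach(_ => sql("insert into repro select 1, 'a'"))
sql("alter table repro add columns (c1 int) TBLPROPERTIES ('DEFAULT.VALUE.c1'='5')")
// Before this fix, the restructured default value was boxed as java.lang.Long,
// so SortDataRows saw a mismatch between the int data type and its value.
sql("alter table repro compact 'minor'")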

Testing:
1. mvn clean install is passing.
2. Added a new test case for the same.

@anubhav100
Contributor Author

@jackylk I accidentally deleted my old branch and created this new branch for the same issue; please merge this one.

@CarbonDataQA

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3000/

@anubhav100
Contributor Author

retest this please

@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1770/

@anubhav100
Contributor Author

retest this please

@CarbonDataQA

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1774/

@anubhav100
Contributor Author

retest this please

@CarbonDataQA

Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1775/

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3005/

@anubhav100
Contributor Author

@jackylk please review

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3121/

@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1889/

@ravipesala
Contributor

SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3100/

"alter table customer1 add columns (longfield bigint) TBLPROPERTIES ('DEFAULT.VALUE.longfield'='10')")

sql("alter table customer1 compact 'minor' ").show()
assert(true)
Contributor

  1. Remove the try/catch block; it is not required, since the test case will fail anyway on any exception.
  2. In the add column DDLs, give the default value as the MAX value of the corresponding data type, so that the same value is validated in the select query after compaction.
  3. After compaction, match the newly added column values with a select query using checkAnswer (see the sketch below).
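
A hedged sketch of the test shape these suggestions describe (sql and checkAnswer come from the QueryTest base class, Row from Spark; the exact queries are assumptions, not the final test):

import org.apache.spark.sql.Row

// No try/catch: any exception fails the test on its own. Use the MAX value
// of the data type as the default, then validate it after compaction.
sql("alter table customer1 add columns (longfield bigint) " +
  s"TBLPROPERTIES ('DEFAULT.VALUE.longfield'='${Long.MaxValue}')")
sql("alter table customer1 compact 'minor'")
checkAnswer(
  sql("select distinct longfield from customer1"),
  Seq(Row(Long.MaxValue)))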

Contributor Author

@@ -360,6 +393,8 @@ class HorizontalCompactionTestCase extends QueryTest with BeforeAndAfterAll {
CarbonProperties.getInstance()
.addProperty(CarbonCommonConstants.isHorizontalCompactionEnabled , "true")
sql("""drop table if exists t_carbn01""")
sql("""drop table if exists customer1""")

Contributor

Remove extra line

Contributor Author

@manishgupta88 Done, you can review.

Reason:
When the alter table command is used to add a column with a default value, the value is always stored as a Long object for all measures. This is wrongly written in RestructureUtil: we should return the value with the same type as the measure. It was causing compaction to fail with a ClassCastException, because the data type and its corresponding value did not match.

Solution: Correct the wrong logic in RestructureUtil; the type of the returned value object should be the same as that of the measure.
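
A minimal, self-contained sketch of the corrected conversion; the names here are illustrative, not the actual RestructureUtil signature:

// Box the parsed default value as the measure's own type instead of always
// returning java.lang.Long (the old behaviour that broke SortDataRows).
sealed trait MeasureType
case object ShortType extends MeasureType
case object IntType extends MeasureType
case object LongType extends MeasureType

def defaultValueFor(raw: String, dt: MeasureType): AnyRef = dt match {
  case ShortType => java.lang.Short.valueOf(raw.toShort)
  case IntType => java.lang.Integer.valueOf(raw.toInt)
  case LongType => java.lang.Long.valueOf(raw.toLong)
}
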
@CarbonDataQA

Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1979/

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3213/

@manishgupta88
Contributor

LGTM

asfgit closed this in a597c2f Jan 29, 2018
@ravipesala
Contributor

SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3174/

anubhav100 added a commit to anubhav100/incubator-carbondata that referenced this pull request Jun 22, 2018
…alter query

Reason:
When the alter table command is used to add a column with a default value, the value is always stored as a Long object for all measures. This is wrongly written in RestructureUtil: we should return the value with the same type as the measure. It was causing compaction to fail with a ClassCastException, because the data type and its corresponding value did not match.

Solution: Correct the wrong logic in RestructureUtil; the type of the returned value object should be the same as that of the measure.

This closes apache#1839