
Fix the bug when sql workloads don't run on Spark 2.0 #345

Merged
merged 2 commits into from Nov 21, 2016

Conversation

gcz2022
Contributor

@gcz2022 gcz2022 commented Nov 1, 2016

1. Change the 'text' args back to 'sequence'.
2. Use `SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'` in place of the former `DELIMITED FIELDS TERMINATED BY ','`, which is functionally equivalent, to avoid the SQL error in Spark 2.0 (caused by PR apache/spark#13068).
3. Upgrade the Hive version to 0.14 to gain OpenCSVSerde support (available since Hive 0.14).
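As an illustrative sketch only (the table and column names below are hypothetical, not taken from this PR), the DDL change described in point 2 amounts to swapping the delimited row format for the OpenCSVSerde:

```sql
-- Before: comma-delimited row format; hits the SQL error on Spark 2.0
-- introduced by apache/spark#13068
CREATE TABLE example_csv (col1 STRING, col2 STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

-- After: functionally equivalent CSV parsing via OpenCSVSerde,
-- which requires Hive 0.14 or later
CREATE TABLE example_csv (col1 STRING, col2 STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde';
```

One caveat worth noting: OpenCSVSerde exposes every column as STRING, so non-string columns may need an explicit cast when queried.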

@gcz2022 gcz2022 changed the base branch from 6.0 to master November 9, 2016 05:47
@carsonwang carsonwang merged commit 4890747 into Intel-bigdata:master Nov 21, 2016