spark_read_jdbc returns columns with quotes instead of backticks #3196
Comments
Hello. How were you able to implement this connectivity between sparklyr and BigQuery? I keep getting the following error:
Hello again. I found that the Guava jar versions used by Spark and the Simba BigQuery connector are different (14.0.1 and 31.1, respectively), so I had to replace the one shipped with Spark. I've also since had to replace or add a few other jars in the Spark jars folder. Now I get the following error message:
I've read a few blogs that suggest "implementing a custom JDBC dialect" to solve this. Unfortunately, I have no idea how that is done, and my coding ability is limited to R, some Python, and SQL. Do you have a more straightforward workaround, or guidance on how to implement the custom JDBC dialect?
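For reference, a custom JDBC dialect in Spark is a small piece of Scala that overrides how identifiers are quoted. Below is a minimal sketch, assuming the Simba BigQuery driver uses a JDBC URL beginning with `jdbc:bigquery` (check your driver's documentation for the exact prefix). It is not an official fix from this project, just an illustration of the technique the blogs describe:

```scala
// Sketch of a custom JDBC dialect that quotes identifiers with backticks.
// The URL prefix "jdbc:bigquery" is an assumption about the Simba driver.
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}

object BigQueryDialect extends JdbcDialect {
  // Claim connections whose JDBC URL looks like a BigQuery URL
  override def canHandle(url: String): Boolean =
    url.toLowerCase.startsWith("jdbc:bigquery")

  // Quote column names with backticks instead of the default double quotes
  override def quoteIdentifier(colName: String): String =
    s"`$colName`"
}

// Register the dialect so Spark uses it for matching JDBC URLs
JdbcDialects.registerDialect(BigQueryDialect)
```

To use this from R, one option is to compile it into a jar and add that jar to the Spark session (for example via the `sparklyr.jars.default` entry of `spark_config()`), then trigger the registration before calling `spark_read_jdbc`. The exact packaging steps depend on your Spark and driver versions.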
spark_read_jdbc quotes column names with double quotes instead of backticks in the generated query. BigQuery treats the double-quoted names as string literals rather than column references, so the results come back as literals instead of the data. As an example, it generates:
SELECT "COLUMN1", "COLUMN2", "COLUMN3" FROM project.dataset.tbl_nm where "COLUMN1" >= 1;
This results in the error: "No matching signature for operator >= for argument types: STRING, INT64".
This should instead be:
SELECT `COLUMN1`, `COLUMN2`, `COLUMN3` FROM project.dataset.tbl_nm where `COLUMN1` >= 1;