Conversation
On `private static final String DEFAULT_BEFORE_EACH_BATCH_PREFIX = " ";`:

please delete this line
On:

    return result.toString();
    }

    private String handeObjectInMap(Object value) {

Suggested change:

    - private String handeObjectInMap(Object value) {
    + private String handleObjectInMap(Object value) {

seems like a misprint
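To make the renamed helper concrete, here is a minimal, hypothetical sketch of what a `handleObjectInMap`-style method could look like: rendering a single map value as a SQL literal fragment. The class name, quoting, and escaping rules below are assumptions for illustration, not the project's actual code.

```java
// Hypothetical sketch; names and quoting rules are assumptions,
// not the project's actual implementation.
class MapValueFormatter {

    static String handleObjectInMap(Object value) {
        if (value == null) {
            return "NULL";
        }
        // Numbers and booleans can be emitted verbatim.
        if (value instanceof Number || value instanceof Boolean) {
            return value.toString();
        }
        // Everything else becomes a quoted string literal,
        // with embedded single quotes doubled.
        return "'" + value.toString().replace("'", "''") + "'";
    }
}
```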
On:

    TERADATA("\""),
    - VERTICA("\"", Casing.UNCHANGED);
    + VERTICA("\"", Casing.UNCHANGED),
    + SPARKSQL("`",

It's better to keep records here sorted alphabetically
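For illustration, a minimal sketch of the dialect enum with `SPARKSQL` placed in alphabetical order (before `TERADATA`) rather than appended at the end. The constructor shape and the `Casing` type below are assumptions inferred from the quoted diff, not the project's full definition.

```java
// Sketch only: constructor shape and Casing are assumptions
// based on the quoted diff, not the project's actual enum.
enum Casing { UNCHANGED }

enum Dialect {
    // ... earlier dialects in alphabetical order ...
    SPARKSQL("`", Casing.UNCHANGED),   // backtick-quoted identifiers
    TERADATA("\"", Casing.UNCHANGED),
    VERTICA("\"", Casing.UNCHANGED);

    private final String quote;
    private final Casing casing;

    Dialect(String quote, Casing casing) {
        this.quote = quote;
        this.casing = casing;
    }

    String quote() {
        return quote;
    }
}
```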
snuyanzin left a comment:
Thanks for the contribution
in general lgtm, I left some comments
Thanks @snuyanzin for the review 🙏 I also have some questions.
Codecov Report
Attention: Patch coverage is

    @@ Coverage Diff @@
    ##               main    #1261      +/-   ##
    ============================================
    - Coverage     92.35%   91.91%    -0.44%
    - Complexity     2821     3079      +258
    ============================================
      Files           292      310       +18
      Lines          5609     6026      +417
      Branches        599      631       +32
    ============================================
    + Hits           5180     5539      +359
    - Misses          275      325       +50
    - Partials        154      162        +8
Thanks for addressing the feedback and for the valuable contribution
yep, need a backport PR for that
depending on how large a documentation update you want to make, will it be ok to add it as a subsection of or
Add Spark SQL support.
See the "INSERT INTO" spec: https://spark.apache.org/docs/3.2.1/sql-ref-syntax-dml-insert-into.html
There are some issues with the existing design when it comes to supporting all Spark types.
Spark SQL has 3 complex data types: ARRAY, MAP, and STRUCT.
Insertions look like this:
So the notable design changes are:
- represent java.util.Map with the Spark SQL MAP type
I tested this generated SQL on the latest Databricks runtime and it works well 👍
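As an illustration of the `java.util.Map` → Spark SQL `MAP` mapping described above, here is a small sketch that renders a Java map as a `MAP(...)` literal inside an `INSERT INTO` statement, following the Spark SQL syntax linked above. The class and method names are hypothetical, not the PR's actual generator code.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch (not the PR's actual generator): render a
// java.util.Map as a Spark SQL MAP literal inside an INSERT INTO.
class SparkInsertSketch {

    // MAP('k1', 'v1', 'k2', 'v2') -- keys and values interleaved,
    // as Spark SQL's map() constructor expects.
    static String mapLiteral(Map<String, String> map) {
        return map.entrySet().stream()
                .map(e -> "'" + e.getKey() + "', '" + e.getValue() + "'")
                .collect(Collectors.joining(", ", "MAP(", ")"));
    }

    static String insertInto(String table, String valueSql) {
        return "INSERT INTO " + table + " VALUES (" + valueSql + ")";
    }

    public static void main(String[] args) {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("k1", "v1");
        m.put("k2", "v2");
        System.out.println(insertInto("t", mapLiteral(m)));
        // INSERT INTO t VALUES (MAP('k1', 'v1', 'k2', 'v2'))
    }
}
```

A `LinkedHashMap` is used so the entry order in the generated literal is deterministic.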