
Update spark sql and jackson databind #152

Open · wants to merge 2 commits into base: main
Conversation

lovromazgon
Contributor

Seems like these two updates depend on each other; let's see if this helps.

This PR combines these two PRs:

dependabot bot added 2 commits June 25, 2024 11:40
Bumps [com.fasterxml.jackson.core:jackson-databind](https://github.com/FasterXML/jackson) from 2.15.3 to 2.17.1.
- [Commits](https://github.com/FasterXML/jackson/commits)

---
updated-dependencies:
- dependency-name: com.fasterxml.jackson.core:jackson-databind
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Bumps org.apache.spark:spark-sql_2.13 from 3.5.1 to 4.0.0-preview1.

---
updated-dependencies:
- dependency-name: org.apache.spark:spark-sql_2.13
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
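Taken together, the two commits amount to the following dependency changes. A minimal sketch, assuming the project uses Maven (the file name `pom.xml` and the surrounding layout are assumptions; the coordinates and versions are taken from the commit messages above):

```xml
<!-- pom.xml (hypothetical fragment): the two bumps from this PR -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>2.17.1</version> <!-- was 2.15.3 -->
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.13</artifactId>
  <version>4.0.0-preview1</version> <!-- was 3.5.1; semver-major -->
</dependency>
```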
@lovromazgon
Contributor Author

There are three failing tests now:

    io.grpc.StatusException: INTERNAL: couldn't write record: [CAST_INVALID_INPUT] The value '123.0' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. Correct the value as per the syntax, or change its target type. Use `try_cast` to tolerate malformed input and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error. SQLSTATE: 22018
== SQL (line 1, position 59) ==
...efaultDestinationStreamIT where integer_field = '123.0'
                                   ^^^^^^^^^^^^^^^^^^^^^^^

);
-> at io.conduit.SparkDestinationStream.onNext(SparkDestinationStream.java:78)
    io.grpc.StatusException: INTERNAL: couldn't write record: [CAST_INVALID_INPUT] The value '105 OR 1=1' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. Correct the value as per the syntax, or change its target type. Use `try_cast` to tolerate malformed input and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error. SQLSTATE: 22018
== SQL (line 1, position 59) ==
...efaultDestinationStreamIT where integer_field = '105 OR 1=1'
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

);
-> at io.conduit.SparkDestinationStream.onNext(SparkDestinationStream.java:78)
    io.grpc.StatusException: INTERNAL: couldn't write record: [CAST_INVALID_INPUT] The value '12.0' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. Correct the value as per the syntax, or change its target type. Use `try_cast` to tolerate malformed input and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error. SQLSTATE: 22018
== SQL (line 1, position 59) ==
...efaultDestinationStreamIT where integer_field = '12.0'
                                   ^^^^^^^^^^^^^^^^^^^^^^

);
-> at io.conduit.SparkDestinationStream.onNext(SparkDestinationStream.java:78)
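All three failures are the same `CAST_INVALID_INPUT` error: a string literal like `'123.0'` being cast to `BIGINT` in a `where` clause, which Spark 4.0 rejects because ANSI SQL mode is enabled by default there. A sketch of the two workarounds the error message itself suggests (the table name `t` is a placeholder; the real table name is truncated in the output above):

```sql
-- Option 1 (session-wide): restore the pre-ANSI, lenient casting behavior.
SET spark.sql.ansi.enabled = false;

-- Option 2 (per query): try_cast returns NULL on malformed input instead of
-- failing, so the filter simply matches no rows for a value like '123.0'.
select * from t
where integer_field = try_cast('123.0' as bigint);
```

Note that under Option 2 the semantics change: `'123.0'` is not a valid `BIGINT`, so the comparison becomes `NULL` and filters out every row, whereas the old lenient cast may have truncated it to `123`. If the tests rely on the lenient behavior, Option 1 (or fixing the test values) is probably the right call.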
