
Changes for supporting unsigned tinyint values #2044

Closed
wants to merge 2 commits into from

Conversation

vamossagar12
Collaborator

Based on the type mapping described in the official JDBC documentation (https://docs.oracle.com/javase/6/docs/technotes/guides/jdbc/getstart/mapping.html), I have added a condition to accept both Short and Byte as INT8 types in ConnectSchema.
Here's the relevant section of the doc:

8.3.5 SMALLINT

The JDBC type SMALLINT represents a 16-bit signed integer value between -32768 and 32767.

The corresponding SQL type, SMALLINT, is defined in SQL-92 and is supported by all the major databases. The SQL-92 standard leaves the precision of SMALLINT up to the implementation, but in practice, all the major databases support at least 16 bits.

The recommended Java mapping for the JDBC SMALLINT type is as a Java short.

This would allow kafka-connect-jdbc to handle both signed and unsigned values. Currently it fails for unsigned values (i.e. values > 127). This is specifically needed when fetching unsigned tinyint values from databases using kafka-connect-jdbc. I have sent PRs to kafka-connect-jdbc and to schema-registry as well to address the issue.
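To illustrate the failure mode (a minimal standalone sketch, not connector code): an unsigned TINYINT value above 127 cannot be represented in a signed Java byte, while a Java short holds it without loss, which is why widening the accepted type matters.

```java
// Minimal sketch (not actual connector code): why unsigned TINYINT values
// above 127 cannot round-trip through a signed Java byte.
public class UnsignedTinyintOverflow {
    public static void main(String[] args) {
        int raw = 200;                // unsigned TINYINT value from the database
        byte asByte = (byte) raw;     // narrows to -56: the sign bit is set
        short asShort = (short) raw;  // 200 fits easily in a signed 16-bit short
        System.out.println(asByte);   // prints -56
        System.out.println(asShort);  // prints 200
    }
}
```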

@vamossagar12
Collaborator Author

Can I know the next steps here? The Jenkins build has failed, but judging from the Jenkins logs, the test case that failed does not seem related to my change. It would be helpful if someone could confirm this.

@kkonstantine
Contributor

The case for this quite old PR doesn't seem very strong at the moment. The issue with the data type width has been resolved in the respective connector by inspecting the metadata of columns that are using this type. This approach seems flexible enough and does not require changes in Connect's API.

I'll go ahead and close this PR without merging due to inactivity and because a practical workaround is available. Feel free to return to the subject if we need to, by also creating an issue in https://issues.apache.org/jira/projects/KAFKA/issues
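The metadata-based workaround described above can be sketched roughly as follows (class and method names are hypothetical, not the connector's actual implementation): the driver's column metadata reveals whether a TINYINT column is unsigned, so the value can be widened at read time without any change to Connect's API.

```java
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Types;

// Rough sketch of the metadata-based approach (hypothetical names, not the
// connector's actual code): inspect column metadata to detect an unsigned
// TINYINT and widen the value, instead of changing Connect's API.
public class TinyintMetadataCheck {

    // True when the driver reports the column as an unsigned TINYINT,
    // e.g. a MySQL "TINYINT UNSIGNED" column.
    static boolean isUnsignedTinyint(ResultSetMetaData md, int col) throws SQLException {
        return md.getColumnType(col) == Types.TINYINT && !md.isSigned(col);
    }

    // Widen the raw byte to its unsigned value when the column is unsigned.
    static int widen(byte raw, boolean unsigned) {
        return unsigned ? Byte.toUnsignedInt(raw) : raw;
    }
}
```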
