CC-311: support for Decimal logical type as incrementing column #129
Conversation
@confluentinc/connect for review
Some minor questions. This looks good as is, unless we want to make the decimal values that are accepted more liberal (though I doubt anyone actually needs more than Long.MAX_VALUE).
if (decimal.compareTo(LONG_MAX_VALUE_AS_BIGDEC) > 0) {
  throw new ConnectException("Decimal value for incrementing column exceeded Long.MAX_VALUE");
}
if (decimal.scale() != 0) {
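The two checks above can be sketched end-to-end as a standalone method. This is an illustrative version, not the PR's actual code: `LONG_MAX_VALUE_AS_BIGDEC` is assumed to be `BigDecimal.valueOf(Long.MAX_VALUE)`, and `IllegalArgumentException` stands in for Kafka Connect's `ConnectException` to keep the sketch self-contained:

```java
import java.math.BigDecimal;

public class DecimalOffsetCheck {
    // Assumed definition of the constant referenced in the diff.
    private static final BigDecimal LONG_MAX_VALUE_AS_BIGDEC =
            BigDecimal.valueOf(Long.MAX_VALUE);

    // Validate a Decimal incrementing value and narrow it to a long,
    // mirroring the restrictions discussed in the review.
    public static long toLongOffset(BigDecimal decimal) {
        if (decimal.compareTo(LONG_MAX_VALUE_AS_BIGDEC) > 0) {
            // Stand-in for ConnectException in the real connector.
            throw new IllegalArgumentException(
                "Decimal value for incrementing column exceeded Long.MAX_VALUE");
        }
        if (decimal.scale() != 0) {
            throw new IllegalArgumentException(
                "Scale of Decimal value for incrementing column must be 0");
        }
        // Safe after the checks above for non-negative values.
        return decimal.longValueExact();
    }
}
```

For example, `toLongOffset(new BigDecimal("42"))` returns `42L`, while a value with a fractional part is rejected.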
An alternative to these restrictions would be to just make TimestampIncrementingOffset use a BigDecimal and convert the other types (or even make it a composite where you can choose the long or the BigDecimal). Any reason not to be a bit more liberal by doing that? Performance is the only reason I can think of.
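The composite suggested here might look like this hypothetical sketch (not the PR's code): it holds either a long or a BigDecimal, normalizing integral decimals that fit in a long so the currently supported int types keep their existing representation:

```java
import java.math.BigDecimal;

// Hypothetical composite incrementing offset: keeps the long
// representation whenever the value fits, else falls back to BigDecimal.
public class IncrementingValue {
    private final Long longValue;           // non-null when the value fits in a long
    private final BigDecimal decimalValue;  // non-null otherwise

    private IncrementingValue(Long l, BigDecimal d) {
        this.longValue = l;
        this.decimalValue = d;
    }

    public static IncrementingValue of(BigDecimal d) {
        try {
            // longValueExact() throws ArithmeticException if d has a
            // fractional part or does not fit in a long.
            return new IncrementingValue(d.longValueExact(), null);
        } catch (ArithmeticException e) {
            return new IncrementingValue(null, d);
        }
    }

    public boolean isLong() {
        return longValue != null;
    }

    public BigDecimal asBigDecimal() {
        return isLong() ? BigDecimal.valueOf(longValue) : decimalValue;
    }
}
```

Under this design, `of(new BigDecimal("42"))` stores a long, while values with a fractional part or beyond Long.MAX_VALUE stay as BigDecimal.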
I was hesitant to store BigDecimal when it's some other int type because I wasn't sure if JDBC drivers would handle statement.setBigDecimal() well for such columns.

BTW, I tried to be more general in a local iteration by storing Comparable, in fact even including support for String types like in #97, but abandoned it after complexity around default values. I have an idea now of how to handle that though, so I'll give this another go.
Backwards compatibility is another concern to take care of... so we should probably always map to Long for any of the currently supported int types.
Wait, this isn't going to get stored in the database, right? This is in the source so we are extracting it. I think the major difference would be that the offsets would change from their integer type to Decimal.
> so I'll give this another go.

Er, re-abandoned string support. The issue is that if there is no existing stored offset, you have a null but no type. While a sensible default value can be determined ("" for strings, -1 for numbers, ...), you need type info at bind time.
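The bind-time problem described above can be illustrated with a hypothetical helper: with no stored offset you only have null, so the initial value has to be derived from the column's JDBC type rather than from the offset itself (names and defaults here are illustrative, not the connector's code):

```java
import java.math.BigDecimal;
import java.sql.Types;

public class DefaultOffset {
    // Pick a sensible initial offset from the column's JDBC type,
    // since a null stored offset carries no type information.
    public static Object initialValue(int jdbcType) {
        switch (jdbcType) {
            case Types.VARCHAR:
            case Types.CHAR:
                return "";                   // strings: start from the empty string
            case Types.DECIMAL:
            case Types.NUMERIC:
                return new BigDecimal(-1);   // decimals: start just below 0
            default:
                return -1L;                  // integral columns
        }
    }
}
```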
I think I'm happy with the solution for now. We could expand support to all Number types rather than just up to Long.MAX_VALUE if there is a request for it.
> Wait, this isn't going to get stored in the database, right? This is in the source so we are extracting it.

It gets stored in the offset storage, and is used for range queries.
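For context, the stored offset feeds a range query of roughly this shape; the SQL here is an illustrative sketch of the pattern, not the connector's exact query builder:

```java
public class QuerySketch {
    // Build a range query that resumes from the last stored offset,
    // which is bound to the "?" placeholder at execution time.
    public static String incrementingQuery(String table, String incrementingColumn) {
        return "SELECT * FROM " + table
                + " WHERE " + incrementingColumn + " > ?"
                + " ORDER BY " + incrementingColumn + " ASC";
    }
}
```

The stored long (or Decimal) offset is what gets bound to the placeholder, which is why its type at bind time matters.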
Ok, makes sense.
}

private boolean isIntegralPrimitiveType(Object incrementingColumnValue) {
  return incrementingColumnValue instanceof Long
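The value-based check in the diff can be sketched as a standalone method (the diff is truncated, so the additional instanceof branches below are an assumption; a schema-based equivalent would inspect the Connect Schema type instead of the runtime class):

```java
public class IntegralCheck {
    // Value-based check: inspects the runtime class of the extracted
    // value rather than the declared schema of the column.
    public static boolean isIntegralPrimitiveType(Object incrementingColumnValue) {
        return incrementingColumnValue instanceof Long
                || incrementingColumnValue instanceof Integer
                || incrementingColumnValue instanceof Short
                || incrementingColumnValue instanceof Byte;
    }
}
```

For example, a boxed `42L` or `42` passes, while a `BigDecimal` does not.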
Probably doesn't matter much in practice, but any reason we switched to checking the value rather than the schema?
No good reason :-) Can't recall how I ended up switching to instanceof... shouldn't matter, as you said.
Only supported when the scale is 0 (i.e., it is an integer) and the value is <= Long.MAX_VALUE, so we can continue storing it as a long for implementation simplicity.

Fixes #31 and #125