
CC-311: support for Decimal logical type as incrementing column #129

Merged (1 commit) on Sep 14, 2016

Conversation

shikhar (Contributor) commented Sep 13, 2016:

Only supported when the scale is 0 (i.e. it is an integer) and the value is <= Long.MAX_VALUE, so we can continue storing it as a long for implementation simplicity.

Fixes #31 and #125
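
A minimal sketch of the conversion this describes, under the assumption of an illustrative helper (the class and method names here are made up; the constant name and exception message mirror the review excerpt further down):

import java.math.BigDecimal;
import org.apache.kafka.connect.errors.ConnectException;

public class DecimalIncrementingSketch {
  private static final BigDecimal LONG_MAX_VALUE_AS_BIGDEC = BigDecimal.valueOf(Long.MAX_VALUE);

  // Validate a DECIMAL incrementing value and keep the existing long-based
  // offset representation: scale must be 0 and the value must fit in a long.
  static long toIncrementingLong(BigDecimal decimal) {
    if (decimal.scale() != 0) {
      throw new ConnectException("Scale of Decimal value for incrementing column must be 0: " + decimal);
    }
    if (decimal.compareTo(LONG_MAX_VALUE_AS_BIGDEC) > 0) {
      throw new ConnectException("Decimal value for incrementing column exceeded Long.MAX_VALUE");
    }
    return decimal.longValue();
  }
}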

shikhar (Contributor, Author) commented Sep 13, 2016:

@confluentinc/connect for review

ewencp (Contributor) left a comment:

Some minor questions. This looks good as-is unless we want to be more liberal about which decimal values are accepted (though I doubt anyone actually needs more than Long.MAX_VALUE).

if (decimal.compareTo(LONG_MAX_VALUE_AS_BIGDEC) > 0) {
  throw new ConnectException("Decimal value for incrementing column exceeded Long.MAX_VALUE");
}
if (decimal.scale() != 0) {
ewencp (Contributor) commented:

An alternative to these restrictions would be to just make TimestampIncrementingOffset use a BigDecimal and convert the other types (or even make it a composite where you can choose the long or the BigDecimal). Any reason not to be a bit more liberal by doing that? Performance is the only reason I can think of.
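
For illustration only, the "composite" alternative mentioned above could look roughly like the following hypothetical class (not part of this PR):

import java.math.BigDecimal;

public class IncrementingOffsetSketch {
  // Holds either a long (existing integer types) or a BigDecimal (DECIMAL
  // columns); exactly one of the two fields is non-null.
  private final Long longOffset;
  private final BigDecimal decimalOffset;

  private IncrementingOffsetSketch(Long longOffset, BigDecimal decimalOffset) {
    this.longOffset = longOffset;
    this.decimalOffset = decimalOffset;
  }

  public static IncrementingOffsetSketch ofLong(long value) {
    return new IncrementingOffsetSketch(value, null);
  }

  public static IncrementingOffsetSketch ofDecimal(BigDecimal value) {
    return new IncrementingOffsetSketch(null, value);
  }

  public boolean isDecimal() {
    return decimalOffset != null;
  }
}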

shikhar (Contributor, Author) commented:

I was hesitant to store a BigDecimal when it's some other int type because I wasn't sure whether JDBC drivers would handle statement.setBigDecimal() well for such columns.

BTW, I tried to be more general in a local iteration by storing a Comparable, even including support for String types as in #97, but abandoned it after running into complexity around default values. I have an idea now of how to handle that, though, so I'll give this another go.

shikhar (Contributor, Author) commented:

Backwards compatibility is another concern to take care of... so we should probably always map to Long for any of the currently supported int types.

ewencp (Contributor) commented:

Wait, this isn't going to get stored in the database, right? This is in the source connector, so we are extracting it. I think the major difference would be that the offsets would change from their integer type to Decimal.

shikhar (Contributor, Author) commented Sep 14, 2016:

> so I'll give this another go.

Er, I re-abandoned string support. The issue is that if there is no existing stored offset, you have a null but no type. While a sensible default value can be determined ("" for strings, -1 for numbers, ...), you need type info at bind time.

I think I'm happy with the solution for now. We could expand support to all Number types rather than just up to Long.MAX_VALUE if there is a request for it.
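
To make the bind-time issue concrete, here is a hypothetical helper (not code from this PR) showing why type information is needed when the stored offset is null:

import java.math.BigDecimal;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OffsetBindingSketch {
  // Bind the incrementing offset as the lower bound of the range query.
  // When offset is null (no stored offset yet), the default depends on the
  // column type, which is why type info is needed at bind time.
  static void bindIncrementingOffset(PreparedStatement stmt, int index, Object offset,
                                     boolean isStringColumn) throws SQLException {
    if (offset == null) {
      if (isStringColumn) {
        stmt.setString(index, "");
      } else {
        stmt.setLong(index, -1L);
      }
    } else if (offset instanceof BigDecimal) {
      stmt.setBigDecimal(index, (BigDecimal) offset);
    } else if (offset instanceof Number) {
      stmt.setLong(index, ((Number) offset).longValue());
    } else {
      stmt.setString(index, offset.toString());
    }
  }
}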

shikhar (Contributor, Author) commented:

> Wait, this isn't going to get stored in the database, right? This is in the source connector, so we are extracting it.

It gets stored in the offset storage and used for range queries.
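
For context, a sketch of what such a range query could look like (the table and column names are made up; this is not the exact SQL the connector generates):

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class RangeQuerySketch {
  // Illustrative only: the stored incrementing offset becomes the lower bound
  // of the next poll's query, which is why it must bind cleanly as a JDBC parameter.
  static ResultSet pollNewRows(Connection conn, BigDecimal lastOffset) throws SQLException {
    PreparedStatement stmt = conn.prepareStatement(
        "SELECT * FROM some_table WHERE incrementing_col > ? ORDER BY incrementing_col ASC");
    stmt.setBigDecimal(1, lastOffset);
    return stmt.executeQuery();
  }
}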

ewencp (Contributor) commented:

Ok, makes sense.

}

private boolean isIntegralPrimitiveType(Object incrementingColumnValue) {
  return incrementingColumnValue instanceof Long || incrementingColumnValue instanceof Integer
      || incrementingColumnValue instanceof Short || incrementingColumnValue instanceof Byte;
}
ewencp (Contributor) commented:

Probably doesn't matter much in practice, but any reason we switched to checking the value rather than the schema?

shikhar (Contributor, Author) commented:

No good reason :-) I can't recall how I ended up switching to instanceof... it shouldn't matter, as you said.
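
For comparison, a schema-based version of the check might look something like this sketch against the Connect Schema API (not code from this PR):

import org.apache.kafka.connect.data.Schema;

public class IntegralSchemaCheckSketch {
  // Schema-based variant of the integral-type check: inspect the Connect
  // schema type instead of the runtime value's class.
  static boolean isIntegralPrimitiveSchema(Schema schema) {
    switch (schema.type()) {
      case INT8:
      case INT16:
      case INT32:
      case INT64:
        return true;
      default:
        return false;
    }
  }
}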
