Added decimal variable precision / scale support #2353

Merged
merged 2 commits into tgriesser:master on Nov 30, 2017

Conversation

@ivome
Contributor

ivome commented Nov 25, 2017

The decimal type in Postgres has the property that, when no precision and scale are defined, you can store values of any precision and scale in the column. (DECIMAL and NUMERIC are identical.)

> Specifying:
>
> NUMERIC
>
> without any precision or scale creates a column in which numeric values of any precision and scale can be stored, up to the implementation limit on precision. A column of this kind will not coerce input values to any particular scale, whereas numeric columns with a declared scale will coerce input values to that scale. (The SQL standard requires a default scale of 0, i.e., coercion to integer precision. We find this a bit useless. If you're concerned about portability, always specify the precision and scale explicitly.)
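For concreteness, here is a hedged sketch of the coercion behavior the docs describe, using knex against Postgres (the table name `t` and the setup are hypothetical; note that the pg driver returns numeric values as strings):

```js
// Hypothetical demo of the coercion described above. Assumes a
// configured Postgres knex instance passed in as `knex`.
async function demo(knex) {
  await knex.raw('create table t (a numeric, b numeric(8, 2))');
  await knex('t').insert({ a: 1.23456, b: 1.23456 });
  const [row] = await knex('t').select('*');
  // The pg driver returns numerics as strings:
  // row.a === '1.23456'  (plain numeric keeps the input's full scale)
  // row.b === '1.23'     (numeric(8, 2) rounds input to scale 2)
}
```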

Right now knex passes the default values of 8 and 2 to the field. This pull request exposes the variable precision / scale functionality for Postgres when no precision / scale is provided for the column.
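A hedged sketch of what a migration would look like under the behavior as originally proposed here (the table and column names are hypothetical; the merged API ended up taking an explicit null instead, see below):

```js
// Hypothetical migration using the originally proposed behavior:
// omitting precision/scale on Postgres would emit plain "decimal".
exports.up = (knex) =>
  knex.schema.createTable('prices', (table) => {
    table.increments('id');
    table.decimal('amount');       // proposed: plain DECIMAL, variable precision
    table.decimal('total', 12, 3); // explicit: DECIMAL(12, 3), unchanged
  });

exports.down = (knex) => knex.schema.dropTable('prices');
```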

Let me know if I need to change anything or if there is a better way to add support for that.

@elhigu

Thanks for the PR, this looks like a reasonable thing to support.

To keep backwards compatibility, it would be better to implement this by passing an options object as a parameter, like .decimal('decCol', { noPrecision: true }). That option should be implemented for all dialects, and it needs documentation.

@ivome

Contributor

ivome commented Nov 27, 2017

UPDATE: The PR has been updated; see the code + documentation for the implementation.

Thank you for the feedback @elhigu !

Just for clarification: the decimal function would then allow different types as the second argument.

Without precision and scale:

.decimal('colname'); // Equivalent to .decimal(8, 2) via defaults

With precision + scale:

.decimal('colname', 12, 3);

What about precision only?

.decimal('colname', 12); 

As per SQL specification, DECIMAL(12) should result in a scale of 0. Knex currently sets a scale of 2.

With options:

const options = {
  noPrecision: true,
  
  // What about precision and scale via options?
  precision: 10,
  scale: 4
};
.decimal('colname', options)

This does not seem so clean, having one argument with different input types and different behavior. Is that implemented like that for any other field type? I couldn't find any by quickly looking through the documentation. Or should we switch the configuration to a single options argument only for the decimal field? This would definitely break backwards compatibility, plus we would have to add type checks, etc.

I think the cleanest solution would be as follows:

.decimal(); // No precision, equivalent to DECIMAL 

The dialects that don't support that could fall back to the current behavior, decimal(8,2); a sketch of such a fallback follows after this list. Dialects that do support the functionality should still be able to store and read the same data from the column even if the column's data type changes, so I don't think breaking backwards compatibility would be that big of a deal in this particular case.

.decimal(precision); // Precision, scale of 0. Equivalent to DECIMAL(<precision>)

Per the SQL specification, this should result in a scale of 0; currently the scale would be 2 with knex. We could optionally clean this up if we change the functionality.

.decimal(precision, scale); // Behavior as currently implemented in knex DECIMAL(precision, scale)
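A minimal sketch of the per-dialect fallback mentioned above (illustrative only, not knex's actual column compiler):

```js
// Illustrative only: how a dialect's column-type helper might fall
// back when no precision is given. Not the actual knex source.
function decimalType(precision, scale, supportsVariablePrecision) {
  if (precision == null) {
    return supportsVariablePrecision
      ? 'decimal'        // plain DECIMAL / NUMERIC (e.g. Postgres)
      : 'decimal(8, 2)'; // fall back to knex's current default
  }
  return `decimal(${precision}, ${scale == null ? 2 : scale})`;
}
```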

Let me know how we should proceed here. I am slightly in favor of the approach as implemented in this PR: it would not really break any code besides changing the column type, it exposes the additional functionality, it is closer to the SQL standard, and it does not introduce additional complexity (different argument-type cases).

@ivome

Contributor

ivome commented Nov 30, 2017

@elhigu I changed the implementation to be fully backwards compatible. For dialects that don't support the functionality, an exception is raised. Unit tests were added for all dialects, and the documentation is updated.
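Based on the final implementation discussed below (passing null as the precision), usage would look roughly like this (table and column names are hypothetical):

```js
// Hedged usage sketch of the merged API: a null precision requests a
// variable-precision column. Returns a promise; await or return it
// inside a migration.
knex.schema.createTable('measurements', (table) => {
  table.decimal('reading', null); // Postgres: plain "decimal" / NUMERIC
  table.decimal('price');         // unchanged default: decimal(8, 2)
});
// On dialects without variable-precision support, the null precision
// raises an exception instead of silently changing the column type.
```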

Let me know if I need to change anything.

@ivome ivome changed the title from "Added postgres decimal variable precision / scale support" to "Added decimal variable precision / scale support" Nov 30, 2017

@elhigu

Collaborator

elhigu commented Nov 30, 2017

Still hacky, but it's fine with passing the null :) Sorry for not answering this earlier.

I've been really concerned about backwards compatibility, especially with schema methods, because changing them would mean that people who have already run migrations against a DB would need to remember to fix all their old migration scripts to be compatible with the new format, for example so that test code behaves the same way as an old installed system where the migrations have already been applied.

I know that other backwards-incompatible changes also require changing code, but these kinds of changes are usually even harder to spot and can cause really subtle failures, especially because we couldn't have added any warnings / deprecation messages to the old plain .decimal() API.

> This does not seem so clean, having one argument with different input types and different behavior. Is that implemented like that for any other field type? I couldn't find any by quickly looking through the documentation. Or should we switch the configuration to a single options argument only for the decimal field? This would definitely break backwards compatibility, plus we would have to add type checks, etc.

We've been trying to get rid of a bunch of magic parameters lately; this is not the first API where an options object has been suggested instead of a list of vague arguments. The Schema.timestamps method is a good example of that (though it isn't fixed yet either, IIRC).

@elhigu

This is good like this 👍

@elhigu

elhigu approved these changes Nov 30, 2017

@elhigu elhigu merged commit fbf371f into tgriesser:master Nov 30, 2017

1 check passed

continuous-integration/travis-ci/pr: The Travis CI build passed

@ivome

Contributor

ivome commented Nov 30, 2017

Thanks for the quick merge! 👍

> Still hacky, but it's fine with passing the null :) Sorry for not answering this earlier.

I know, not the cleanest, but that was the best I could come up with without breaking backwards compatibility. 😄 Maybe a project for the next major release...

> I've been really concerned about backwards compatibility, especially with schema methods, because changing them would mean that people who have already run migrations against a DB would need to remember to fix all their old migration scripts to be compatible with the new format, for example so that test code behaves the same way as an old installed system where the migrations have already been applied.

Thanks for keeping the quality up! This library here has helped me a lot!
