Postgresql integer migrations are inconsistent #6272
That is a really, really big number you're requesting space for there. You shouldn't be using integers for things like phone numbers and zip codes; use a string. You're specifying integer bytes, not the length of a string. As a general rule, only use integers or decimals for values you need to perform mathematical operations or numerical comparisons on. Since you'll likely never compare whether one zip code is greater or less than another, or multiply zip codes, use a string.

P.S. http://www.postgresql.org/docs/8.1/static/datatype.html shows Postgres's support for integers. The maximum integer size is 8 bytes.
To give you an example, a limit of 8 means 2 to the power of 8*8 (8 bits in a byte) possible values. Divide that in half, and subtract one for the sign bit, to get your max value for signed integers: 9,223,372,036,854,775,807. A limit of 10 gives a max value just under 604,462,909,807,314,587,353,088 (2^79).
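That arithmetic can be sketched in Ruby (a quick illustration, not part of the original thread):

```ruby
# Maximum value a signed integer of `bytes` bytes can hold:
# 2^(bits - 1) - 1, since one bit is reserved for the sign.
def max_signed_value(bytes)
  2**(bytes * 8 - 1) - 1
end

max_signed_value(8)  # => 9223372036854775807 (PostgreSQL bigint)
max_signed_value(4)  # => 2147483647          (PostgreSQL integer)
```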
That makes sense... apparently I don't understand anything about databases! Still, I think the behavior is inconsistent.
If the Postgres integer size is at most 8 bytes, then the migration produced a wrong result, not just an inconsistent one. The docs say that limit indeed specifies the maximum number of bytes, so phone should've been bigint and zip just integer.

EDIT: Looking at the Postgres documentation (http://www.postgresql.org/docs/8.1/static/datatype.html#DATATYPE-INT), phone should've actually been of type numeric and zip of type integer. Also, I do agree with @erichmenge, but the choice of datatype is another issue.
@Sheeo zip was specified as a … It would probably be best if the migration raised an exception if limit was out of range.
```ruby
# Maps logical Rails types to PostgreSQL-specific data types.
def type_to_sql(type, limit = nil, precision = nil, scale = nil)
  return super unless type.to_s == 'integer'
  return 'integer' unless limit

  case limit
  when 1, 2; 'smallint'
  when 3, 4; 'integer'
  when 5..8; 'bigint'
  else raise(ActiveRecordError, "No integer type has byte size #{limit}. Use a numeric with precision 0 instead.")
  end
end
```

It looks like it should have raised.
```ruby
def sql_type
  base.type_to_sql(type.to_sym, limit, precision, scale) rescue type
end
```

Why is this being rescued?
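That inline `rescue` with no exception class swallows any StandardError, which would explain why the out-of-range limit never surfaced. A minimal self-contained sketch of the effect (hypothetical method names, not the actual adapter code):

```ruby
# An inline `rescue` modifier with no class catches any StandardError,
# so an error raised during the type lookup is silently discarded and
# the method falls back to the logical type instead.
def lookup_type(limit)
  raise ArgumentError, "No integer type has byte size #{limit}" if limit > 8
  'bigint'
end

def sql_type(limit)
  lookup_type(limit) rescue 'integer' # error vanishes here
end

sql_type(8)   # => "bigint"
sql_type(11)  # => "integer" (the ArgumentError never propagates)
```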
cc/ @jeremy |
Integer limit out of range should be allowed to raise. Closes rails#6272
However, set force: true and didn't set indices. Fixes test failures for postgresql: "No integer type has byte size 11. Use a numeric with precision 0 instead." https://travis-ci.org/mbleigh/acts-as-taggable-on/jobs/15196579 (see rails/rails#6272)
Migrations with integer columns using postgresql are inconsistent.

Take the following migration:

After migrating, examining the output of `rake db:structure:dump`, we get the following sql:

Noticing that, in the migration above, the column `phone` has a larger limit than the column `zip`, I would expect to see something like the following, where `phone` is also created as a `bigint` column:

I don't think I am expecting the wrong behavior, but maybe I am missing something with postgres...
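The original migration and SQL snippets did not survive the scrape. Based on the thread, a limit in the 5..8 range (e.g. zip) mapped to bigint while an out-of-range limit (e.g. phone, byte size 11) silently fell back to plain integer. A self-contained sketch of that pre-fix mapping (reconstructed; the exact limits in the issue's migration are assumptions):

```ruby
# Sketch of the pre-fix limit-to-type mapping (reconstructed; the
# actual limits used in the issue's migration are assumptions).
def old_integer_type_for(limit)
  return 'integer' unless limit
  case limit
  when 1, 2 then 'smallint'
  when 3, 4 then 'integer'
  when 5..8 then 'bigint'
  else 'integer' # out-of-range limits silently fell back
  end
end

old_integer_type_for(5)   # => "bigint"  (zip)
old_integer_type_for(11)  # => "integer" (phone: the reported inconsistency)
```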