COMPAT: infer larger than uint64 to object dtype #18584
Comments
@qiaobz : Thanks for reporting this! Unfortunately, this is a little hard to reproduce. Could you provide a reproducible example by creating a small table in memory (using Python's sqlite3 module, for example)?
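A minimal sketch of the kind of in-memory check being suggested (the table and column names here are made up; note that sqlite3 itself only stores signed 64-bit integers, so this round-trips a large but in-range value):

```python
import sqlite3
import pandas as pd

# Build a tiny in-memory table and read it back with pandas.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (big_value INTEGER)")
conn.execute("INSERT INTO t VALUES (?)", (2 ** 62,))  # large, but within int64
df = pd.read_sql("SELECT * FROM t", conn)
print(df.dtypes)  # big_value: int64 -- no error, matching the reply below
```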
Sorry for troubling you.
And the output shows no error.
@qiaobz : Interesting... Can you provide a stack trace for when you try to read from the Oracle DB? If so, can you add it to your original issue too? That will be useful for anyone in the future.
My fault, I've added it now.
Hmm, that's very strange. Here's what I suggest you do:
Add a line above your pandas call to print the parameters being passed in, so we can see exactly what the Oracle DB is returning.
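A hypothetical version of that debugging step (the helper and the sample row are made up for illustration):

```python
def show_params(row):
    # Print each value and its Python type before it reaches the DataFrame
    # constructor, to see exactly what the database driver returned.
    for value in row:
        print(repr(value), type(value))

# e.g. a fetched row whose last column exceeds the uint64 maximum:
show_params((1, "name", 2 ** 64))
```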
Thanks, I'll try creating a new conda environment. But the pandas in this issue is 0.21.0; you can see it in the INSTALLED VERSIONS (pd.show_versions() output) below.
Right, which is why I was asking you what value the Oracle DB is returning compared to the in-memory version. Were you able to figure that out by following my instructions?
Regarding printing the parameters: hmm, so I complicated the problem, and then it causes the error.
If you really need integers this large, then you have to explicitly set them with dtype=object.
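A minimal sketch of that workaround, assuming the value arrives as a plain Python int:

```python
import pandas as pd

# Explicitly requesting object dtype skips integer inference, so the value is
# stored as a plain Python int instead of overflowing a fixed-width dtype.
s = pd.Series([2 ** 64], dtype=object)
print(s.dtype)  # object
print(s[0])     # 18446744073709551616
```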
Shouldn't we infer this? NumPy does:
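For reference, a small check of the NumPy behavior being referred to (on the NumPy 1.13-era versions in this thread, out-of-range Python ints fall back to object dtype):

```python
import numpy as np

# 2**64 is one past the uint64 maximum, so NumPy cannot represent it in any
# fixed-width integer dtype and falls back to Python objects.
arr = np.array([2 ** 64])
print(arr.dtype)  # object
print(arr[0])     # 18446744073709551616
```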
I don't see why not. Is there any reason why historically we didn't do it?
Two little questions:
Yeah, this is a bit of an unhandled case. @gfyoung, would you have a look?
For integers larger than what uint64 can handle, we gracefully default to the object dtype instead of overflowing. For integers smaller than what int64 can handle, we gracefully default to the object dtype instead of overflowing. Closes gh-18584.
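A quick illustration of the behavior described in that commit message (values outside the combined int64/uint64 range fall back to object dtype; the exact output depends on the pandas version in use):

```python
import pandas as pd

print(pd.Series([2 ** 64]).dtype)       # above the uint64 maximum -> object
print(pd.Series([-2 ** 63 - 1]).dtype)  # below the int64 minimum  -> object
```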
Code Sample, a copy-pastable example if possible
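The original code sample was not preserved in this thread; the following is a hypothetical reproduction consistent with the reported error and the issue title:

```python
import pandas as pd

# Hypothetical reproduction: construct a DataFrame containing an integer
# larger than the uint64 maximum (2**64 - 1).  Per this report, on pandas
# 0.21.0 this raised "OverflowError: long too big to convert"; after the fix
# it is inferred as object dtype instead.
df = pd.DataFrame({"big": [2 ** 64]})
print(df.dtypes)
```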
Problem description
raise "OverflowError: long too big to convert", it may be the uint64 problem?
Expected Output
Output of pd.show_versions()
INSTALLED VERSIONS
commit: None
python: 2.7.13.final.0
python-bits: 64
OS: Windows
OS-release: 8.1
machine: AMD64
processor: Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.21.0
pytest: 3.0.7
pip: 9.0.1
setuptools: 27.2.0
Cython: 0.25.2
numpy: 1.13.3
scipy: 0.19.0
pyarrow: None
xarray: None
IPython: 5.3.0
sphinx: 1.5.6
patsy: 0.4.1
dateutil: 2.6.1
pytz: 2017.3
blosc: None
bottleneck: 1.2.1
tables: 3.2.2
numexpr: 2.6.2
feather: None
matplotlib: 2.0.2
openpyxl: 2.4.7
xlrd: 1.0.0
xlwt: 1.2.0
xlsxwriter: 0.9.6
lxml: 3.7.3
bs4: 4.6.0
html5lib: 0.999
sqlalchemy: 1.1.9
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None