Int64 with null value mangles large-ish integers #30268
Thanks for the report. Must be an unwanted float cast in the mix, as precision looks to get lost around the same range:

>>> x = 2 ** 53
>>> y = 2 ** 53 + 1
>>> z = 2 ** 53 + 2
>>> s2 = pd.Series([1, 2, 3, x, y, z, np.nan], dtype="Int64")
>>> s2
0                   1
1                   2
2                   3
3    9007199254740992
4    9007199254740992
5    9007199254740994
6                 NaN

Investigation and PRs certainly welcome
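For context, 2**53 is exactly where float64 stops being able to represent every integer, which matches the suspected float cast; a minimal standalone illustration, independent of pandas:

>>> x = 2 ** 53
>>> float(x) == float(x + 1)   # the +1 is lost in the float64 cast
True
>>> int(float(x + 1))
9007199254740992
>>> int(float(x + 2))          # even neighbours survive, odd ones don't
9007199254740994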
This is caused by converting the list to a numpy array and letting numpy do the type inference. Because there is a NaN in the list, numpy infers a float64 array, and that intermediate float cast loses precision. With a quick hack like:

--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -205,7 +205,10 @@ def coerce_to_array(values, dtype, mask=None, copy=False):
             mask = mask.copy()
         return values, mask

-    values = np.array(values, copy=copy)
+    if isinstance(values, list):
+        values = np.array(values, dtype=object)
+    else:
+        values = np.array(values, copy=copy)
     if is_object_dtype(values):
         inferred_type = lib.infer_dtype(values, skipna=True)
         if inferred_type == "empty":

I get the correct behaviour (by ensuring we convert the list to an object array, so we don't lose precision due to the intermediate float array, and we can do the conversion to an integer array ourselves). For a proper fix, the if/else check will need to be a bit more advanced. I think we should basically check if the input values already are an ndarray or have a dtype, and only do the object-dtype conversion otherwise. Contributions certainly welcome!
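A rough sketch of what that more advanced check could look like (hypothetical helper name, not the fix that actually landed):

import numpy as np

def _values_to_ndarray(values, copy=False):
    # Hypothetical helper sketching the suggested check: inputs that already
    # carry a dtype (ndarrays, extension arrays) keep the existing np.array
    # path, while plain Python sequences are routed through object dtype so
    # a NaN mixed with large ints cannot trigger a lossy float64 cast.
    if hasattr(values, "dtype"):
        return np.array(values, copy=copy)
    return np.array(values, dtype=object)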
take

is this behavior caused by the same bug?

#31108 is the reason!

Fixed in #50757
Code Sample, a copy-pastable example if possible
With interpreter output:
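A minimal reproduction in that spirit (a sketch built from the 2**53-range values discussed in the comments; the original snippet may have differed):

import numpy as np
import pandas as pd

big = 2 ** 53 + 1  # 9007199254740993: fits in int64, but has no exact float64 representation

s = pd.Series([1, 2, big, np.nan], dtype="Int64")
print(s[2])  # on pandas 0.25.3 this prints 9007199254740992 instead of 9007199254740993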
Problem description
It seems that the presence of np.nan values in a column typed as Int64 causes some non-null values to be mangled. This seems to happen with large-ish values (but still below the max int limit). Given that Int64 is the "Nullable integer" data type, null values should be allowed, and should certainly not silently change the values of other elements in the data frame.
Expected Output
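Presumably the large value should round-trip exactly; for the sketch above, something like:

>>> s[2]
9007199254740993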
Output of pd.show_versions()
INSTALLED VERSIONS
commit : None
python : 3.7.1.final.0
python-bits : 64
OS : Darwin
OS-release : 19.0.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_CA.UTF-8
LOCALE : en_CA.UTF-8
pandas : 0.25.3
numpy : 1.17.4
pytz : 2019.3
dateutil : 2.8.1
pip : 10.0.1
setuptools : 39.0.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None