Integer datatypes with missing values change to object in Python when importing data from an RData file with the pyreadr package #87
Comments
Yes, this is currently the correct, expected behavior. Why? Because older versions of pandas only used numpy arrays as backing storage: a numpy array of integer type cannot hold np.nan, which is a float, so mixing integers and NaNs required an object-typed series. More recent versions of pandas have introduced a nullable integer pandas array (IntegerArray). I can try to use that one, since using the integer numpy array is still impossible. Would that help?
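The constraint described above can be demonstrated with pandas alone; a minimal sketch (the values are illustrative):

```python
import numpy as np
import pandas as pd

# A numpy integer array cannot hold np.nan, which is a float value:
ints = np.array([1, 2, 3], dtype=np.int32)
try:
    ints[0] = np.nan
except ValueError:
    print("np.nan cannot be stored in an int32 numpy array")

# Older pandas therefore fell back to an object column to mix ints and NaN:
mixed = pd.Series([1, 2, np.nan], dtype=object)
print(mixed.dtype)  # object

# The nullable IntegerArray keeps integer values alongside missing ones:
nullable = pd.array([1, 2, None], dtype="Int32")
print(nullable.dtype)        # Int32
print(nullable[2] is pd.NA)  # True
```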
I think it would be great to have the nullable integer pandas array, because it would help maintain datatype integrity when sending data.frames back and forth between Python and R.
Could you please elaborate on why you need the type of the column to be nullable integer instead of object? Right now a Python object column with integer values should be correctly translated into an R integer column, so if the main use case is going back and forth between Python and R, there is currently no issue and no change needed. Changing the datatype to nullable integer is not trivial, and it could also break people's code (someone whose code expects an object column would suddenly get a nullable integer type), so I am reluctant to do it unless there is a strong reason.
Another thing to take into consideration, which the nullable type by itself will not solve (this is described in the README) but which you can already solve manually, is the following: the R integer type is a 32-bit integer. Python has 64-, 32-, 16- and 8-bit integers, plus unsigned versions. When you read an RData file with an integer column, it is translated to a numpy 32-bit integer. If you immediately write it back to R, it is translated to an integer column. However, integer columns you create yourself in pandas are 64-bit by default. These cannot be converted to a 32-bit integer because of the risk of overflow, so they have to be converted to an R numeric type (float64) instead. That means that if your data has integer columns, you have to make sure they are, or manually convert them to, 32-bit integers (you can use the nullable 32-bit version), and then they will be converted to an R integer column type.
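A minimal sketch of the manual conversion described above (the column name `x` is illustrative):

```python
import pandas as pd

# Pandas creates 64-bit integer columns by default:
df = pd.DataFrame({"x": [1, 2, 3]})
print(df["x"].dtype)  # int64

# Downcast explicitly to 32 bits; the nullable "Int32" dtype also tolerates
# missing values. A 32-bit column can map onto R's 32-bit integer type
# instead of being widened to numeric (float64):
df["x"] = df["x"].astype("Int32")
print(df["x"].dtype)  # Int32
```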
Closing since there has been no activity.
I want to execute some Python functions using data from an '.RData' file. I am using the 'pyreadr' Python package for this.
Steps to reproduce the behavior:
Here is an example of the R code:
The reason I am storing the data in different files is that 'test_data.RData' is loaded successfully in Python, whereas 'test_missing_data.RData' converts columns with NA values to the object datatype rather than integer.
Here is the Python code:
Setup Information:
How did you install pyreadr? - pip
Platform (windows, macOS, linux, 32 or 64 bit) - Windows 64bit
Python Version - 3
Python Distribution (System, plain python, Anaconda) - Reticulate Python
Stackoverflow Link - https://stackoverflow.com/questions/73217048/integer-datatypes-with-missing-values-changes-to-object-in-python-using-pyreadr
There is no error message; however, I need the datatype to remain integer even with missing or NA values.
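As a workaround with current pyreadr, the object column can be converted right after reading; a sketch, where the column name `count` is illustrative and the frame below merely mimics what pyreadr returns for an R integer column containing NA:

```python
import numpy as np
import pandas as pd

# Mimic pyreadr's output: python ints plus np.nan in an object column.
df = pd.DataFrame({"count": pd.Series([10, np.nan, 30], dtype=object)})
print(df["count"].dtype)  # object

# Restore integer semantics with the nullable 32-bit dtype;
# np.nan becomes pd.NA and the integer values are preserved:
df["count"] = df["count"].astype("Int32")
print(df["count"].dtype)  # Int32
```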
Thank you for your time and help.