
BUG: DataFrame.convert_dtypes fails on column that is already "string" dtype #31731

Closed
ghost opened this issue Feb 6, 2020 · 6 comments · Fixed by #31877

@ghost

ghost commented Feb 6, 2020

Code Sample, a copy-pastable example if possible

In [1]: df = pd.DataFrame({'A': ['a', 'b', 'c'], 'B': ['d', 'e', 'f']})
In [2]: df.dtypes
Out[2]:
A    object
B    object
dtype: object

In [3]: df1 = df.convert_dtypes()

In [4]: df1.dtypes
Out[4]:
A    string                 # <------  expected
B    string                 # <------  expected
dtype: object

In [5]: df2 = df1.convert_dtypes()

In [6]: df2.dtypes
Out[6]:
A    object                 # <------ NOT expected
B    object                 # <------ NOT expected
dtype: object

In [7]: df3['A'] = 'test'

In [8]: df3
Out[8]:
      A  B
0  test  d                 # <------  expected
1  test  e                 # <------  expected
2  test  f                 # <------  expected

In [9]: df3.dtypes
Out[9]:
A    object                 # <------ NOT expected (but understandable)
B    string                 # <------  expected
dtype: object

In [10]: df4 = df3.convert_dtypes()

In [11]: df4
Out[11]:
      A     B
0  test  b'd'                 # <------ NOT expected (Why are they bytes now???)
1  test  b'e'                 # <------ NOT expected
2  test  b'f'                 # <------ NOT expected

In [12]: df4.dtypes
Out[12]:
A    string                 # <------  expected
B    object                 # <------ NOT expected
dtype: object

Problem description

The documentation for DataFrame.convert_dtypes() claims that it will 'Convert columns to best possible dtypes using dtypes supporting pd.NA'. However, this does not appear to be the case. As you can see in Out[6] above, if the dtypes are already optimal, they will be converted back to objects.

Even worse, if some of the dtypes are optimal, but other dtypes are objects, they will be switched -- the optimal dtypes will become objects, and the objects will become optimal.

Some mysterious additional details:

  • Assigning a string to an 'optimal' column changes the dtype to object. (Out[9])
  • If one column is "string" and the other object, df.convert_dtypes() will not only cause the string column to become object and the object column to become string; it will also change the strings in the newly created object column into bytes (Out[11])

Expected Output

Anything which can be converted to one of the new nullable datatypes would be converted to one of the new nullable datatypes. Anything which is already a nullable datatype would remain as it is.
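That expectation can be stated as an idempotence check. The following sketch assumes a pandas release that includes the fix (the behaviour described above was present in 1.0.0):

```python
import pandas as pd

df = pd.DataFrame({'A': ['a', 'b', 'c'], 'B': ['d', 'e', 'f']})
df1 = df.convert_dtypes()   # object -> string
df2 = df1.convert_dtypes()  # should be a no-op

# On a fixed pandas, convert_dtypes is idempotent:
assert (df1.dtypes == df2.dtypes).all()
assert str(df1['A'].dtype) == 'string'
```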

Output of pd.show_versions()

INSTALLED VERSIONS

commit : None
python : 3.7.5.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 78 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.None

pandas : 1.0.0
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 45.1.0.post20200127
Cython : 0.29.14
pytest : 5.3.4
hypothesis : 4.54.2
sphinx : 2.3.1
blosc : None
feather : None
xlsxwriter : 1.2.7
lxml.etree : 4.4.2
html5lib : 1.0.1
pymysql : None
psycopg2 : 2.8.4 (dt dec pq3 ext lo64)
jinja2 : 2.10.3
IPython : 7.11.1
pandas_datareader: None
bs4 : 4.8.2
bottleneck : 1.3.1
fastparquet : None
gcsfs : None
lxml.etree : 4.4.2
matplotlib : 3.1.2
numexpr : 2.7.0
odfpy : None
openpyxl : 3.0.3
pandas_gbq : None
pyarrow : None
pytables : None
pytest : 5.3.4
pyxlsb : None
s3fs : None
scipy : 1.3.2
sqlalchemy : 1.3.13
tables : 3.6.1
tabulate : None
xarray : 0.14.1
xlrd : 1.2.0
xlwt : 1.3.0
xlsxwriter : 1.2.7
numba : 0.48.0

@jorisvandenbossche
Member

@DAVIDWALES thanks for the report. I would appreciate it if you didn't call something "completely broken", but you have certainly discovered a bug.

But there are several issues here:

First, calling convert_dtypes on a column that is already "string" is indeed clearly broken:

In [32]: s = pd.Series(['a', 'b'], dtype="string")

In [33]: s.convert_dtypes()   
Out[33]: 
0    b'a'
1    b'b'
dtype: bytes8

At first it's bizarre where those bytes come from, but this appears to be a bug in the infer_dtype function:

In [3]: pd.api.types.infer_dtype(pd.array(["a", "b"], dtype="string"))  
Out[3]: 'bytes'

The fact that those values become bytes is also the reason you see "object" as the dtype after calling convert_dtypes a second time (bytes are Python objects stored in object dtype).

When calling convert_dtypes again on the DataFrame containing the bytes, it is expected that this stays object dtype and doesn't become "string", because the values are now bytes, not strings (the fact that they became bytes in the first place is still a bug, of course).

Secondly:

Assigning a string to an 'optimal' column changes the dtype to object. (Out[9])

Although surprising, this is the expected behaviour. In general, when you assign to df[col], you overwrite the full column and replace it with a new one, on which dtype inference happens afresh.

You will see the same with other dtypes as well:

In [24]: df = pd.DataFrame({'a': pd.array(['a', 'b'], dtype="string"), 'b': [1, 2]}) 

In [26]: df.dtypes 
Out[26]: 
a    string
b     int64
dtype: object

# in the default dtype inference, a string is stored in object dtype
In [27]: df["a"] = "other"    

In [28]: df.dtypes
Out[28]: 
a    object
b     int64
dtype: object

# assigning a timestamp will give datetime64 dtype
In [29]: df["b"] = pd.Timestamp("2012-01-01")      

In [31]: df.dtypes 
Out[31]: 
a            object
b    datetime64[ns]
dtype: object

So you will see this phenomenon with all dtypes. I agree it is "extra" surprising here, because with the string you are actually assigning a value that matches the dtype of the existing column, yet the default dtype inference still falls back to the old "object" dtype.
We could see if we can make a special case for this situation, but introducing special cases is also not ideal in general, so I'm not sure it is worth it here.
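One way to sidestep the re-inference when overwriting an existing column is to assign through .loc, which writes into the existing column rather than replacing it. This is a sketch of a possible workaround, not something discussed in the thread:

```python
import pandas as pd

df = pd.DataFrame({'a': pd.array(['a', 'b'], dtype='string'), 'b': [1, 2]})

# df['a'] = 'other' would replace the column and re-infer object dtype;
# .loc assigns into the existing column, so the "string" dtype is kept.
df.loc[:, 'a'] = 'other'

assert str(df['a'].dtype) == 'string'
```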

@jorisvandenbossche jorisvandenbossche changed the title DataFrame.convert_dtypes is completely broken. BUG: DataFrame.convert_dtypes fails on column that is already "string" dtype Feb 6, 2020
@jorisvandenbossche jorisvandenbossche added this to the 1.0.2 milestone Feb 6, 2020
@ghost
Author

ghost commented Feb 6, 2020

@jorisvandenbossche Sorry for the dramatic title! Thanks for fixing it. :)

Thanks for the explanation. Does this mean that the best way to add a new string column consisting of repeated values is something like this?

In [2]: df = pd.DataFrame({'A': ['a', 'b', 'c']}, dtype='string')

In [3]: df
Out[3]:
   A
0  a
1  b
2  c

In [4]: df.dtypes
Out[4]:
A    string
dtype: object

In [5]: df['A'] = pd.Series(['test'] * len(df), dtype='string')

In [6]: df
Out[6]:
      A
0  test
1  test
2  test

In [7]: df.dtypes
Out[7]:
A    string
dtype: object

Is there a more straightforward way to express this, in the spirit of df['A'] = 'test'?

@jorisvandenbossche
Member

Is there a more straightforward way to express this, in the spirit of df['A'] = 'test'?

I am afraid not. It's certainly not ideal (and your example above actually also needs to pass df.index to pd.Series(...) in case you have a non-default index), but it's a problem in general, only aggravated by the fact that "string" dtype is not yet the default dtype.
So until that is the case, I think what you show is the best workaround.

In the case where you are overwriting an existing column, we could special-case this, but for the general case of assigning a new column with a string, you would still have the same problem. If you want, feel free to open a new issue with the example in your last comment (so we can keep this issue for the string-to-bytes bug).
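For completeness, an index-safe version of the workaround might look like the following sketch, which broadcasts the scalar through a Series built on the frame's own index:

```python
import pandas as pd

df = pd.DataFrame({'A': ['a', 'b', 'c']}, dtype='string')

# A scalar passed to pd.Series with an explicit index is broadcast over it;
# giving dtype='string' and df.index avoids both dtype re-inference and
# index misalignment on non-default indexes.
df['A'] = pd.Series('test', index=df.index, dtype='string')

assert str(df['A'].dtype) == 'string'
```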

@ghost
Author

ghost commented Feb 6, 2020

I've added another issue to consider better ways to assign scalars to nullable columns: #31763

@sebastian-de

I don't know if this is a separate bug, but I get UnicodeEncodeError: 'ascii' codec can't encode character '\xe4' in position 0: ordinal not in range(128) when running DataFrame.convert_dtypes twice on a DataFrame containing special characters like German umlauts.

Code Sample

df = pd.DataFrame({'A': ['ä', 'ö', 'ü'], 'B': ['d', 'e', 'f']})
df1 = df.convert_dtypes()
df2 = df1.convert_dtypes()

Full traceback

Traceback (most recent call last):

  File "<ipython-input-4-a12af5992878>", line 1, in <module>
    runfile('/home/test/pd_test.py', wdir='/home/test')

  File "/usr/lib/python3.8/site-packages/spyder_kernels/customize/spydercustomize.py", line 827, in runfile
    execfile(filename, namespace)

  File "/usr/lib/python3.8/site-packages/spyder_kernels/customize/spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "/home/test/pd_test.py", line 13, in <module>
    df2 = df1.convert_dtypes()

  File "/usr/lib/python3.8/site-packages/pandas/core/generic.py", line 6049, in convert_dtypes
    results = [

  File "/usr/lib/python3.8/site-packages/pandas/core/generic.py", line 6050, in <listcomp>
    col._convert_dtypes(

  File "/usr/lib/python3.8/site-packages/pandas/core/series.py", line 4393, in _convert_dtypes
    result = input_series.astype(inferred_dtype)

  File "/usr/lib/python3.8/site-packages/pandas/core/generic.py", line 5698, in astype
    new_data = self._data.astype(dtype=dtype, copy=copy, errors=errors)

  File "/usr/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 582, in astype
    return self.apply("astype", dtype=dtype, copy=copy, errors=errors)

  File "/usr/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 442, in apply
    applied = getattr(b, f)(**kwargs)

  File "/usr/lib/python3.8/site-packages/pandas/core/internals/blocks.py", line 607, in astype
    values = self.values.astype(dtype)

  File "/usr/lib/python3.8/site-packages/pandas/core/arrays/string_.py", line 260, in astype
    return super().astype(dtype, copy)

  File "/usr/lib/python3.8/site-packages/pandas/core/arrays/base.py", line 443, in astype
    return np.array(self, dtype=dtype, copy=copy)

  File "/usr/lib/python3.8/site-packages/pandas/core/arrays/numpy_.py", line 184, in __array__
    return np.asarray(self._ndarray, dtype=dtype)

  File "/usr/lib/python3.8/site-packages/numpy/core/_asarray.py", line 85, in asarray
    return array(a, dtype, copy=False, order=order)

UnicodeEncodeError: 'ascii' codec can't encode character '\xe4' in position 0: ordinal not in range(128)
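Assuming this is indeed the same string-to-bytes bug, the non-ASCII round trip should succeed once the fix lands; a quick check on a fixed pandas:

```python
import pandas as pd

# On a pandas release containing the fix, calling convert_dtypes twice on
# non-ASCII data does not raise UnicodeEncodeError, and the columns keep
# their "string" dtype with the umlauts intact.
df = pd.DataFrame({'A': ['ä', 'ö', 'ü'], 'B': ['d', 'e', 'f']})
df2 = df.convert_dtypes().convert_dtypes()

assert all(str(t) == 'string' for t in df2.dtypes)
assert list(df2['A']) == ['ä', 'ö', 'ü']
```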

Output of pd.show_versions()

INSTALLED VERSIONS
commit : None
python : 3.8.1.final.0
python-bits : 64
OS : Linux
OS-release : 5.5.2-arch1-1
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : de_DE.UTF-8
LOCALE : de_DE.UTF-8

pandas : 1.0.1
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : None
setuptools : 44.0.0
Cython : 0.29.15
pytest : 5.3.5
hypothesis : None
sphinx : 2.4.0
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.4.2
html5lib : None
pymysql : None
psycopg2 : 2.8.4 (dt dec pq3 ext lo64)
jinja2 : 2.11.1
IPython : 7.12.0
pandas_datareader: 0.8.1
bs4 : None
bottleneck : 1.3.1
fastparquet : None
gcsfs : None
lxml.etree : 4.4.2
matplotlib : 3.1.3
numexpr : 2.7.1
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : 5.3.5
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : 1.3.13
tables : None
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : None
xlsxwriter : None
numba : None
None

@jorisvandenbossche
Member

Yes, that's probably the same issue. Thanks for reporting!
