
BUG: read_csv handling of bad lines differs depending on first line #47490

@vnlitvinov

Description

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd
import io

a1 = '''col1,col2,col3,col4,col5,col6
0,1,2,3,4,5,6,7
1960,2000-01-09,1960,04:00:00,ZZsumAqXtD,4322.0735711085945
'''

a2 = '''col1,col2,col3,col4,col5,col6
1960,2000-01-09,1960,04:00:00,ZZsumAqXtD,4322.0735711085945
0,1,2,3,4,5,6,7
'''

print('bad first line')
df1 = pd.read_csv(io.StringIO(a1), on_bad_lines='warn')
print(df1)
print(df1.columns)
print(df1.index)
print(df1.iloc[-1])

print('good first line')
df2 = pd.read_csv(io.StringIO(a2), on_bad_lines='warn')
print(df2)
print(df2.columns)
print(df2.index)
print(df2.iloc[-1])

Issue Description

When a line with more fields than the header comes first, pandas assumes the extra fields form an index, but when the same line appears later it is treated as a bad line.
I was not able to find a way to force pandas to treat such lines as bad from the start (a manual pre-filtering sketch follows the output below).

The output of the script above:

bad first line
                 col1      col2        col3         col4  col5  col6
0    1              2         3           4     5.000000   6.0   7.0
1960 2000-01-09  1960  04:00:00  ZZsumAqXtD  4322.073571   NaN   NaN
Index(['col1', 'col2', 'col3', 'col4', 'col5', 'col6'], dtype='object')
MultiIndex([(   0,          '1'),
            (1960, '2000-01-09')],
           )
col1           1960
col2       04:00:00
col3     ZZsumAqXtD
col4    4322.073571
col5            NaN
col6            NaN
Name: (1960, 2000-01-09), dtype: object
good first line
b'Skipping line 3: expected 6 fields, saw 8\n'
   col1        col2  col3      col4        col5         col6
0  1960  2000-01-09  1960  04:00:00  ZZsumAqXtD  4322.073571
Index(['col1', 'col2', 'col3', 'col4', 'col5', 'col6'], dtype='object')
RangeIndex(start=0, stop=1, step=1)
col1           1960
col2     2000-01-09
col3           1960
col4       04:00:00
col5     ZZsumAqXtD
col6    4322.073571
Name: 0, dtype: object
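
The only workaround I have so far is to drop over-long rows myself before pandas ever sees them. This is a minimal sketch using the standard csv module; filter_bad_rows is my own helper, not a pandas API, and it simply mimics what I expected on_bad_lines='warn' to do regardless of where the bad row sits:

import csv
import io

import pandas as pd


def filter_bad_rows(text, sep=','):
    # Keep only rows whose field count matches the header row; report and
    # drop everything else before handing the data to read_csv.
    rows = list(csv.reader(io.StringIO(text), delimiter=sep))
    width = len(rows[0])
    kept = []
    for i, row in enumerate(rows):
        if len(row) == width:
            kept.append(row)
        else:
            print(f"Skipping line {i + 1}: expected {width} fields, saw {len(row)}")
    out = io.StringIO()
    csv.writer(out, delimiter=sep, lineterminator='\n').writerows(kept)
    out.seek(0)
    return out


# a1 and a2 as defined in the reproducer above
df1 = pd.read_csv(filter_bad_rows(a1))
df2 = pd.read_csv(filter_bad_rows(a2))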

Expected Behavior

I would expect both the "good" and the "bad" CSV to be parsed the same way, possibly controlled by a flag or option.
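
Concretely, I would expect a check like the following to pass. This is only a sketch of the expectation, not an existing test; with pandas 1.4.2 it raises an AssertionError because df1 ends up with a MultiIndex and an extra row:

import pandas.testing as tm

# Expectation: parsing should not depend on where the over-long row appears,
# so the two frames from the reproducer should be identical.
tm.assert_frame_equal(
    pd.read_csv(io.StringIO(a1), on_bad_lines='warn'),
    pd.read_csv(io.StringIO(a2), on_bad_lines='warn'),
)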

Installed Versions

commit : 4bfe3d0
python : 3.8.13.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19044
machine : AMD64
processor : Intel64 Family 6 Model 165 Stepping 2, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : Russian_Russia.1252

pandas : 1.4.2
numpy : 1.22.4
pytz : 2022.1
dateutil : 2.8.2
pip : 22.1.2
setuptools : 62.3.2
Cython : None
pytest : 7.1.2
hypothesis : None
sphinx : 5.0.1
blosc : None
feather : 0.4.1
xlsxwriter : None
lxml.etree : 4.9.0
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.4.0
pandas_datareader: None
bs4 : 4.11.1
bottleneck : None
brotli :
fastparquet : None
fsspec : 2022.5.0
gcsfs : None
markupsafe : 2.1.1
matplotlib : 3.5.2
numba : None
numexpr : 2.7.3
odfpy : None
openpyxl : 3.0.9
pandas_gbq : None
pyarrow : 8.0.0
pyreadstat : None
pyxlsb : None
s3fs : 2022.5.0
scipy : 1.8.1
snappy : None
sqlalchemy : 1.4.37
tables : 3.7.0
tabulate : None
xarray : 2022.3.0
xlrd : None
xlwt : None
zstandard : None
