The separator sep is causing issues when skiprows is asked to skip rows that contain the separator. Using 16 (the correct number of rows to skip) causes an EmptyDataError.
Instead, the number of rows has to be worked out by trial and error; in this case the skiprows value needs to be 11.
Using the call below without sep does not cause issues (the correct number of rows is skipped): pd.read_csv(StringIO(text), skiprows=16)
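For reference, the number of rows to skip can be derived programmatically rather than by trial and error. A minimal sketch, using a hypothetical stand-in for test.TXT (the real file is not included in the report; the metadata lines and key names here are invented):

```python
from io import StringIO
import pandas as pd

# Hypothetical stand-in for test.TXT: 16 metadata lines that contain the
# ';' separator, followed by the header and one data row from the
# expected output above.
meta = "\n".join(f"key{i};value{i}" for i in range(16))
text = meta + "\nTimestamp;T;ID;P\n16T122109957;0;6;6\n"

# Locate the header row by inspecting the raw text, then pass that count
# as skiprows instead of hand-counting lines.
header_row = next(i for i, line in enumerate(text.splitlines())
                  if line.startswith("Timestamp"))
df = pd.read_csv(StringIO(text), skiprows=header_row, sep=";")
```

This only sidesteps the counting; whether skiprows=16 itself raises depends on the pandas version in use.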
Expected Output
pd.read_csv('test.TXT',skiprows=16,sep=';')
Timestamp T ID P
0 16T122109957 0 6 6
Output of pd.show_versions()
INSTALLED VERSIONS
------------------
commit : None
python : 3.8.1.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 142 Stepping 9, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United Kingdom.1252
Why do you think sep matters? When I replace your ; with , and use the default sep I see the same EmptyDataError.
EDIT: Perhaps I misread your post a bit. But just for clarity, the actual value of sep doesn't matter, just the fact that the separator is in one of the skipped rows.
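A sketch of that point with the default sep, on hypothetical stand-in data (not the reporter's file): skiprows also accepts a callable over the row index, which avoids hand-counting the preamble entirely.

```python
from io import StringIO
import pandas as pd

# Same layout with ',' in the metadata lines and the default sep,
# matching the observation that the separator value itself is irrelevant.
# The metadata content is invented for illustration.
text = "\n".join(f"meta{i},spam,eggs" for i in range(16))
text += "\nTimestamp,T,ID,P\n16T122109957,0,6,6\n"

# Skip every row before the header via a callable instead of a count.
df = pd.read_csv(StringIO(text), skiprows=lambda i: i < 16)
```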
I have checked that this issue has not already been reported.
This may be linked to issue #23067: read_csv() - "skiprows" should ignore "comment"
I have confirmed this bug exists on the latest version of pandas.
(optional) I have confirmed this bug exists on the master branch of pandas.
Output of pd.show_versions() (continued)
pandas : 1.0.4
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.1.1
setuptools : 41.2.0
Cython : None
pytest : 5.4.2
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.5.0
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.15.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.5.0
matplotlib : 3.2.1
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.3
pandas_gbq : None
pyarrow : None
pytables : None
pytest : 5.4.2
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : None
xlsxwriter : None
numba : 0.49.1