
Error reading parquet from s3 with s3fs >= 0.3.0 #27756

Closed
CJStadler opened this issue Aug 5, 2019 · 13 comments · Fixed by #27777

Comments

@CJStadler
Contributor

commented Aug 5, 2019

Code Sample, a copy-pastable example if possible

import pandas as pd

df = pd.read_parquet('s3://my-bucket/df.parquet')

Raises

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/.../pandas/io/parquet.py", line 294, in read_parquet
    return impl.read(path, columns=columns, **kwargs)
  File "/.../pandas/io/parquet.py", line 192, in read
    parquet_file = self.api.ParquetFile(path, open_with=s3.s3.open)
AttributeError: 'S3File' object has no attribute 's3'

Problem description

In version 0.3.0, s3fs removed the S3File.s3 attribute. It is replaced by S3File.fs (which is inherited from fsspec.AbstractBufferedFile.fs).

Should pandas check the s3fs version and call the right attribute based on that?
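One low-friction alternative to a version check (a hypothetical sketch, not pandas source; `get_filesystem` is an invented name) would be to duck-type the attribute, since only the name changed between s3fs releases:

```python
# Hypothetical compatibility shim: resolve the filesystem object by
# duck-typing rather than by inspecting the s3fs version, since
# s3fs >= 0.3.0 renamed S3File.s3 to S3File.fs.
def get_filesystem(s3file):
    """Return the underlying filesystem from an open S3File."""
    try:
        return s3file.fs  # s3fs >= 0.3.0 (inherited from fsspec)
    except AttributeError:
        return s3file.s3  # s3fs < 0.3.0
```

Either approach works; duck-typing just avoids parsing version strings.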

Output of pd.show_versions()

INSTALLED VERSIONS

commit : None
python : 3.7.3.final.0
python-bits : 64
OS : Darwin
OS-release : 18.6.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 0.25.0
numpy : 1.17.0
pytz : 2019.1
dateutil : 2.8.0
pip : 19.2.1
setuptools : 41.0.1
Cython : None
pytest : 4.4.1
hypothesis : None
sphinx : 2.1.2
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : 2.8.3 (dt dec pq3 ext lo64)
jinja2 : 2.10.1
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : 0.3.1
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : 0.3.1
scipy : 1.3.0
sqlalchemy : 1.3.5
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None

@TomAugspurger

Contributor

commented Aug 5, 2019

Should pandas check the s3fs version and call the right attribute based on that?

Sure.

cc @martindurant for the (possibly unintentional) API change.

@TomAugspurger

Contributor

commented Aug 5, 2019

So the open_with in

parquet_file = self.api.ParquetFile(path, open_with=s3.s3.open)
will need to depend on the version of s3fs.

@martindurant


commented Aug 5, 2019

Indeed, this is an API change. However, I am surprised that anyone opens a file and then uses the filesystem methods via an attribute of that file; at that point you presumably already have the filesystem available directly.

Rather than testing specifically for s3 URLs, I would strongly encourage pandas to use fsspec directly, so that you can then read from any of the implementations fsspec supports.

@CJStadler

Contributor Author

commented Aug 5, 2019

Perhaps there should be a function that returns both the file and the filesystem, which could be used here instead of get_filepath_or_buffer. That would avoid touching S3File.s3/S3File.fs.

If that sounds like a reasonable direction I will work on a PR.
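That proposal could be sketched roughly like this (hypothetical helper, names not from pandas; the `fs_factory` parameter exists here only to keep the example self-contained and testable without network access):

```python
def get_file_and_filesystem(path, mode="rb", fs_factory=None):
    """Hypothetical helper: open `path` and return (file, filesystem).

    Returning the filesystem alongside the file means callers never
    need to reach back through S3File.s3 / S3File.fs.
    """
    if fs_factory is None:
        import s3fs  # deferred so the sketch imports without s3fs installed
        fs_factory = s3fs.S3FileSystem
    fs = fs_factory()
    return fs.open(path, mode), fs
```

The parquet reader could then pass `fs.open` as `open_with` without touching any private attribute of the open file.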


@avicennax


commented Aug 5, 2019

Ran into this issue today; I just made a local, hacky in-vivo fix for the API break. Happy to help in any way to fix the issue properly.

Cheers.

@martindurant


commented Aug 5, 2019

For the sake of compatibility, I can make S3File.s3 -> S3File.fs alias, if that makes life easier.
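The alias could look something like this (a minimal stand-in sketch, not the actual s3fs source):

```python
class S3File:
    """Toy stand-in for s3fs.S3File showing the proposed alias."""

    def __init__(self, fs):
        self.fs = fs  # fsspec-style attribute (s3fs >= 0.3.0)

    @property
    def s3(self):
        # Backwards-compatible alias for code written against s3fs < 0.3.0.
        return self.fs
```

A read-only property keeps the two names pointing at the same object, so already-released pandas versions that access `.s3` keep working.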

@CJStadler CJStadler referenced this issue Aug 6, 2019
@WillAyd

Member

commented Aug 6, 2019

I would strongly encourage pandas to use fsspec directly

Is this compatible with the PEP 519 fspath protocol? We are dropping 3.6 soon, so it may be worth looking towards that instead.

@martindurant


commented Aug 6, 2019

I have considered adding __fspath__ to file types (at least those derived from fsspec.spec.AbstractBufferedFile), but my reading of the PEP is that it should return local paths. os.fspath does, however, work on files returned by the LocalFileSystem.

@martindurant


commented Aug 6, 2019

For the interested, the implementation is as simple as

--- a/fsspec/spec.py
+++ b/fsspec/spec.py
@@ -1154,6 +1154,9 @@ class AbstractBufferedFile(io.IOBase):
     def __str__(self):
         return "<File-like object %s, %s>" % (type(self.fs).__name__, self.path)

+    def __fspath__(self):
+        return self.fs.protocol + "://" + self.path
+
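With a patch along those lines, os.fspath would round-trip the remote URL. A toy stand-in (hypothetical, not fsspec itself) behaves like so, with the caveat above that PEP 519 arguably expects local paths:

```python
import io
import os

class RemoteFile(io.IOBase):
    """Toy stand-in for AbstractBufferedFile with the __fspath__ patch applied."""

    def __init__(self, protocol, path):
        self.protocol = protocol
        self.path = path

    def __fspath__(self):
        # Reassemble the URL from the protocol and the bucket-relative path.
        return self.protocol + "://" + self.path

f = RemoteFile("s3", "my-bucket/df.parquet")
print(os.fspath(f))  # s3://my-bucket/df.parquet
```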
@jorisvandenbossche

Member

commented Aug 8, 2019

For the sake of compatibility, I can make S3File.s3 -> S3File.fs alias, if that makes life easier.

@martindurant For compatibility with released pandas versions, that might be nice? (or at least for a while?)

@martindurant


commented Aug 8, 2019

@jorisvandenbossche

Member

commented Aug 8, 2019

Thanks a lot!
