
Add logic for remote or local files in NetCDFSource #93

Merged: 7 commits, Nov 24, 2020

Conversation

@scottyhq (Collaborator) commented Nov 23, 2020

This PR follows up #82 to do a few things:

  • adds logic to call either fsspec.open_local() or fsspec.open() for NetCDFSource
  • makes the chunks kwarg optional for RasterIOSource
  • adds many more network/remote loading tests for NetCDFSource and RasterIOSource, including a mock S3 server

Fixes:

cc @rabernat @martindurant

if self._can_be_local:
    url = fsspec.open_local(self.urlpath, **self.storage_options)
else:
    url = fsspec.open(self.urlpath, **self.storage_options)

@scottyhq (Collaborator, Author):

Note the difference here: in #82, for RasterIOSource, I just pass the URLs to xr.open_rasterio(), but I think we need to pass file-like objects to xr.open_dataset():

if self._can_be_local:
    files = fsspec.open_local(self.urlpath, **self.storage_options)
else:
    files = self.urlpath

if self._can_be_local:
    url = fsspec.open_local(self.urlpath, **self.storage_options)
else:
    # https://github.com/intake/filesystem_spec/issues/476#issuecomment-732372918
    url = fsspec.open(self.urlpath, **self.storage_options).open()

Comment:

I guess this is the key line. This is what I do to make http paths "openable" in my xarray / fsspec code. But @martindurant has suggested it's not a good practice to drag around open file objects without explicitly closing them.

I'm very curious to hear Martin's view of whether this is kosher.
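For reference, a minimal self-contained sketch of the pattern being discussed (the URL is a placeholder, not one from this PR): fsspec.open() returns an OpenFile, and calling .open() on it yields the file-like object that xr.open_dataset can read from.

    import fsspec
    import xarray as xr

    # Sketch only: turn an HTTP path into a file-like object and hand it to
    # xarray (which needs an engine that accepts file-like input, e.g.
    # h5netcdf or scipy). The file-like stays open for as long as the
    # dataset holds onto it, which is the practice being questioned here.
    url = "https://example.com/data.nc"   # placeholder URL
    f = fsspec.open(url).open()           # OpenFile -> random-access file-like
    ds = xr.open_dataset(f)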

@scottyhq (Collaborator, Author):

Yeah, it doesn't feel right, but it does make the tests pass! It seems like we could end up with lots of unclosed files. The only other idea that comes to mind is adding code to xarray so that if an OpenFile instance is passed, a context manager gets used... but I'm definitely out of my depth here.
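A rough sketch of that idea (a hypothetical helper, not something xarray provides): detect an fsspec OpenFile, use it as a context manager, and load the data eagerly before the underlying file-like is closed.

    import fsspec
    import xarray as xr

    def open_dataset_closing(obj, **kwargs):
        # Hypothetical helper: if an fsspec OpenFile is passed, enter its
        # context manager and load eagerly so the file-like can be closed
        # deterministically; otherwise defer to plain xr.open_dataset.
        if isinstance(obj, fsspec.core.OpenFile):
            with obj as f:
                return xr.open_dataset(f, **kwargs).load()
        return xr.open_dataset(obj, **kwargs)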

Member:

You will keep hold of the buffer until the file instance is cleaned up, but that's OK; the random-access version of the HTTP file-like does not keep a socket open and doesn't need to be explicitly closed. The overall filesystem instance does hold a session open, but that should clean up on interpreter shutdown without problems.

The only time I could see this being a problem is where the path is something like an SSH tunnel (sftp) - but the garbage collection of a file-like is always supposed to work the same as calling close() or exiting a context.

Short story: I think it's fine.

Comment:

This has introduced a bug when using a list of URLs.

Whilst I understand the motivation for moving towards file-like objects, I wonder if NetCDFSource should defer to the backend drivers provided by xarray to resolve local/remote/OPeNDAP URLs. Since the recent changes to fsspec for async, it seems that something is broken for OPeNDAP endpoints (see fsspec/filesystem_spec#525).

Regardless, perhaps the code should use fsspec.open_files instead, to allow for a list of URLs:

        if "*" in url or isinstance(url, list):
            _open_dataset = xr.open_mfdataset
            _is_list = True
            if self.pattern:
                kwargs.update(preprocess=self._add_path_to_ds)
            if self.combine is not None:
                if 'combine' in kwargs:
                    raise Exception("Setting 'combine' argument twice  in the catalog is invalid")
                kwargs.update(combine=self.combine)
            if self.concat_dim is not None:
                if 'concat_dim' in kwargs:
                    raise Exception("Setting 'concat_dim' argument twice  in the catalog is invalid")
                kwargs.update(concat_dim=self.concat_dim)
        else:
            _is_list = False
            _open_dataset = xr.open_dataset

        if self._can_be_local:
            url = fsspec.open_local(self.urlpath, **self.storage_options)
        else:
            # https://github.com/intake/filesystem_spec/issues/476#issuecomment-732372918            
            url = [f.open() for f in fsspec.open_files(self.urlpath, **self.storage_options)]
            if not _is_list:
                url = url[0]
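For context, a minimal sketch of the behaviour that snippet relies on (the URLs are placeholders): fsspec.open_files accepts a list of paths and returns one OpenFile per entry, whose file-likes can then be handed to xr.open_mfdataset.

    import fsspec
    import xarray as xr

    # Sketch only: open several remote files and combine them with xarray
    # (assuming an engine that accepts file-like objects, e.g. h5netcdf).
    urls = ["https://example.com/a.nc", "https://example.com/b.nc"]  # placeholders
    files = [of.open() for of in fsspec.open_files(urls)]
    ds = xr.open_mfdataset(files, combine="by_coords")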

Member:

xarray is not using fsspec for OPeNDAP endpoints, unless I am very confused.

Totally agree that we should support a list, regardless.

@scottyhq (Collaborator, Author):

Quick update: I added quite a few new tests under test_remote.py, and also added test_network.py, which includes simple examples from the linked issues reading over the network from HTTP, S3, and GCS (these are currently not run via CI, though).

I also added a py39 environment file and test, and we now install xarray master for the upstream CI test.

While putting a few more tests together, I realized that open_rasterio() required chunks as a positional argument, so I gave it a default of chunks=None, since it's sometimes nice to bypass dask. This is checked in test_remote::test_open_rasterio.
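Roughly what the chunks=None default allows, as an illustrative sketch (not the PR diff; the file path is a placeholder): omitting chunks gives a plain DataArray, while passing a chunks mapping still yields a dask-backed one.

    import xarray as xr

    # Illustrative only: chunks=None means no dask chunking is forced.
    da_eager = xr.open_rasterio("example.tif")                     # no dask
    da_lazy = xr.open_rasterio("example.tif", chunks={"x": 512})   # dask-backed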

@martindurant (Member):

Looks good!

For s3, you could use the same moto fixture that s3fs does, rather than hitting the real backend.
I would not try to implement the gcsfs fixture here.
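A rough sketch of what such a fixture could look like (the details here are assumptions, not the fixture s3fs actually ships): start a local moto S3 server, point s3fs at it via a custom endpoint, and run a round-trip test against the mocked bucket.

    import os
    import fsspec
    import pytest
    import xarray as xr

    ENDPOINT = "http://127.0.0.1:5555"

    @pytest.fixture(scope="module")
    def s3():
        # Assumes a moto version that provides ThreadedMotoServer; the s3fs
        # test suite achieves the same by launching a moto_server subprocess.
        from moto.server import ThreadedMotoServer

        os.environ.setdefault("AWS_ACCESS_KEY_ID", "testing")
        os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "testing")
        server = ThreadedMotoServer(port=5555)
        server.start()
        fs = fsspec.filesystem("s3", client_kwargs={"endpoint_url": ENDPOINT})
        fs.mkdir("test-bucket")
        yield fs
        server.stop()

    def test_netcdf_roundtrip(s3, tmp_path):
        # Write a tiny NetCDF file locally, push it to the mocked bucket,
        # then read it back through s3fs as a file-like object.
        local = tmp_path / "tiny.nc"
        xr.Dataset({"x": ("t", [1, 2, 3])}).to_netcdf(local, engine="scipy")
        s3.put(str(local), "test-bucket/tiny.nc")

        with s3.open("test-bucket/tiny.nc") as f:
            ds = xr.open_dataset(f, engine="scipy").load()
        assert int(ds.x.sum()) == 6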

@scottyhq (Collaborator, Author) commented Nov 24, 2020

> For s3, you could use the same moto fixture that s3fs does, rather than hitting the real backend.

Woohoo! That took some figuring out, but I've added basic tests for rasterio and netcdf with moto.

If it's okay with you @martindurant I'll go ahead and merge?
