
Does interp() work on curvilinear grids (2D coordinates)? #2281

Open
JiaweiZhuang opened this issue Jul 12, 2018 · 28 comments

JiaweiZhuang commented Jul 12, 2018

I am evaluating interp() against xESMF. Here's how xESMF would regrid the built-in 'rasm' data to a regular lat-lon grid.

Seems like interp() can convert rectilinear grids (1D coordinates) to curvilinear grids (2D coordinates), according to the last example. How about the reverse? Can it convert 2D coordinate to 1D, or to another 2D coordinate?

That's the test data:

dr = xr.tutorial.load_dataset('rasm')['Tair']
...
Coordinates:
  * time     (time) datetime64[ns] 1980-09-16T12:00:00 1980-10-17 ...
    xc       (y, x) float64 189.2 189.4 189.6 189.7 189.9 190.1 190.2 190.4 ...
    yc       (y, x) float64 16.53 16.78 17.02 17.27 17.51 17.76 18.0 18.25 ...

That's a simple destination grid:

lon = xr.DataArray(np.linspace(-180, 180, 120), dims=['lon'])
lat = xr.DataArray(np.linspace(-90, 90, 60), dims=['lat'])

I would expect a syntax like:

dr_out = dr.interp(xc=lon, yc=lat)

But I got ValueError: dimensions ['xc', 'yc'] do not exist, because interp() only accepts dimensions (which must be 1D), not coordinates.

dr.interp(x=lon, y=lat) runs but the result is not correct. This is expected because x does not mean longitude in the original data.

@crusaderky @fujiisoup


JiaweiZhuang commented Jul 12, 2018

One way I can think of to make interp() work on this example: Define a new coordinate system (i.e. two new coordinate variables) on the source curvilinear grid, and rewrite the destination coordinate using those new coordinate variables (not lat, lon anymore).

But this is far too convoluted...

Updated: see Gridded with Scipy for a similar idea.

@fujiisoup (Member)

Thanks, @JiaweiZhuang

Not yet.
interp() only works on N-dimensional regular grids.
Under the hood, we are just using scipy.interpolate.interp1d and interpn.

I am happy to see curvilinear interpolation in xarray if we could find a good general API for N-dimensional array.
Do you have any proposal?

For curvilinear interpolation, we may have some arbitrariness,
e.g.

dr_out = dr.interp(xc=lon)

the resultant dimensions are not well determined. Maybe we need some restrictions on the arguments.


shoyer commented Jul 12, 2018

I think we could make dr.interp(xc=lon, yc=lat) work for the N-D -> M-D case by wrapping scipy.interpolate.griddata
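A rough sketch of what a griddata-backed dr.interp(xc=lon, yc=lat) could do under the hood (interp_curvilinear is a hypothetical helper, not xarray API):

```python
import numpy as np
import xarray as xr
from scipy.interpolate import griddata

def interp_curvilinear(da, xc, yc, lon, lat, method="linear"):
    """Regrid a 2D DataArray with 2D coords (xc, yc) onto 1D lon/lat axes."""
    # Flatten the curvilinear source mesh into a point cloud for griddata.
    points = np.column_stack([da[xc].values.ravel(), da[yc].values.ravel()])
    lon2d, lat2d = np.meshgrid(lon, lat)  # target mesh, shape (nlat, nlon)
    out = griddata(points, da.values.ravel(), (lon2d, lat2d), method=method)
    return xr.DataArray(out, dims=("lat", "lon"),
                        coords={"lat": lat, "lon": lon})
```

For the 'rasm' example this would be called per 2D slice, e.g. interp_curvilinear(dr.isel(time=0), 'xc', 'yc', lon, lat); the flattening step is exactly the inefficiency raised later in the thread.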


JiaweiZhuang commented Jul 12, 2018

Do you have any proposal?

I guess it is not an API design problem yet... The algorithm simply isn't there, since interpn doesn't deal with curvilinear grids.

I think we could make dr.interp(xc=lon, yc=lat) work for the N-D -> M-D case by wrapping scipy.interpolate.griddata

My concern with scipy.interpolate.griddata is that the performance might be miserable... griddata takes an arbitrary stream of data points in a D-dimensional space. It doesn't know if those source data points have a gridded/mesh structure. A curvilinear grid mesh needs to be flattened into a stream of points before being passed to griddata(). That might not be too bad for nearest-neighbour search, but it is very inefficient for the linear/bilinear method, where knowing the mesh structure beforehand can save a lot of computation.

Utilizing scipy.interpolate.griddata would be a nice feature, but it should probably be used for streams of data points (more like a Pandas dataframe method?), not as a way to handle curvilinear grids.

PS: I have some broader concerns regarding interp vs xESMF: JiaweiZhuang/xESMF#24


shoyer commented Jul 13, 2018

I'd like to figure out interfaces that make it possible for external, grid aware libraries to extend indexing and interpolation features in xarray. In particular, it would be nice to be able to associate a "grid index" used for caching computation that gets passed on in all xarray operations.


fspaolo commented May 23, 2019

Great thread by @JiaweiZhuang! I just posted a question on stackoverflow about this exact problem. After hours navigating through the xarray documentation I finally found this open issue... (since I'm new to xarray, I thought the problem was my lack of understanding of how the package works)

It is surprising that a package targeting n-dimensional gridded datasets (particularly those from the geo/climate sciences) does not handle such a common task with spatial gridded data.

The problem at hand is this: I have two 3D arrays with different dimensions defined by 2D coordinates; all I want is to regrid the first cube onto the second. Is there a way to perform this operation with xarray?

This is what I've tried (which @JiaweiZhuang explained why it doesn't work):

da = xr.DataArray(cube, dims=['t', 'y', 'x'],
                  coords={'t': time,           
                          'xc': (['y', 'x'], X),  
                          'yc': (['y', 'x'], Y)})

da_interp = da.interp(x=x_new, y=y_new).interp(t=t_new)

Here x_new/y_new should be mapped onto xc/yc (physical coordinates), not onto x/y (logical coordinates).

@crusaderky (Contributor)

I am not aware of an ND mesh interpolation algorithm. However, my package xarray_extras [1] offers highly optimized 1D interpolation on an ND hypercube, on any numerical coord (not just time). You may try applying it 3 times, once per dimension in sequence, and see if you get what you want - although performance won't be optimal.

[1] https://xarray-extras.readthedocs.io/en/latest/

Alternatively, if you do find the exact algorithm you want, but it's for numpy, then applying it to xarray is simple - just get DataArray.values -> apply function -> create new DataArray from the output.


fspaolo commented May 24, 2019

@crusaderky I don't think we need a "proper" 3d interpolation in most cases (i.e. predicting each 3d grid node considering all dimensions simultaneously). If you see my example above, DataArray.interp(x=x_new, y=y_new).interp(t=t_new), I am first interpolating over the spatial coordinates and then over time. If instead I do DataArray.interp(x=x_new, y=y_new, t=t_new), the computing time is prohibitive for large ndarrays. In fact, I think DataArray.interp() should figure this out and do this decomposition internally when calling DataArray.interp(x=x_new, y=y_new, t=t_new).

The main limitation here, however, is being able to interpolate over the spatial coordinates when these are defined as 2d arrays. I'll check your package... thanks!


crusaderky commented May 25, 2019

@fspaolo I never tried using my algorithm to perform 2D interpolation, but this should work:

from xarray_extras.interpolate import splrep, splev

da = splev(x_new, splrep(da, 'x'))
da = splev(y_new, splrep(da, 'y'))
da = splev(t_new, splrep(da, 't'))

Add k=1 to downgrade from cubic to linear interpolation and get a speed boost.

You can play around with dask to increase performance by using all your CPUs (or more with dask distributed), although you have to remember that an original dim can't be broken into multiple chunks when you apply splrep to it:

from xarray_extras.interpolate import splrep, splev
da = da.chunk({'t': TCHUNK})
da = splev(x_new, splrep(da, 'x'))
da = splev(y_new, splrep(da, 'y'))
da = da.chunk({'x': SCHUNK, 'y': SCHUNK}).chunk({'t': -1})
da = splev(t_new, splrep(da, 't'))
da = da.compute()

where TCHUNK and SCHUNK are integers you'll have to play with. The rule of thumb is that you want your chunks to be 5-100 MB each.

If you end up finding out that chunking along an interpolation dimension is important for you, it is possible to implement it with dask ghosting techniques, just painfully complicated.


fspaolo commented May 29, 2019

@crusaderky It doesn't work. First, it tells me that if x_new/y_new are 2D, then I have to pass them as DataArrays... why do I have to do that? After doing that, it tells me there are conflicts between "overlapping" dimensions?! OK, I then pass x_new/y_new as 1D arrays... and the result is nonsense!

The only interpolation that works (both with DataArray.interp() and your splev()) is the one over the time dimension. This is because time is defined as a one-dimensional variable (t).

Why is it so hard to perform an interpolation with spatial coordinates defined with 2D variables?! I would think this is a pretty common operation on climate datasets...


fspaolo commented May 29, 2019

@JiaweiZhuang your approach doesn't work either. After installing your package and the dependencies... and following the documentation, I got

Fatal error in MPI_Init_thread: Other MPI error, error stack:
MPIR_Init_thread(474)..............:
MPID_Init(190).....................: channel initialization failed
MPIDI_CH3_Init(89).................:
MPID_nem_init(320).................:
MPID_nem_tcp_init(173).............:
MPID_nem_tcp_get_business_card(420):
MPID_nem_tcp_init(379).............: gethostbyname failed, LMC-061769 (errno 1)
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=3191311
:
system msg for write_line failure : Bad file descriptor

An MPI error?!

What bothers me are statements like this "For usability and simplicity"...


crusaderky commented May 29, 2019

@fspaolo 2d mesh interpolation and 1d interpolation with extra "free" dimensions are fundamentally different algorithms. Look up the scipy documentation on the various interpolation functions available.

I don't understand what you are trying to pass for x_new and y_new, and it definitely doesn't sound right. Right now you have a 3d DataArray with dimensions (x, y, t) and 3 coords, each of which is a 1d numpy array (e.g. da.coords.x.values). If you want to rescale, you need to pass a 1d numpy array or array-like for x_new, and another separate 1d array for y_new. You are not doing that: the error message you're receiving is saying that your x_new is a numpy array with 2 or more dimensions, which the algorithm doesn't know what to do with. It can accept multi-dimensional DataArrays with brand-new dimensions, but that does not sound like your case.


fspaolo commented May 29, 2019

@crusaderky what I'm trying to do is what the title of this thread says: "Does interp() work on curvilinear grids (2D coordinates)?" Working with (x, y, t) all 1D numpy arrays is an easy problem! But that's not the problem I'm dealing with.

Let me state exactly what my problem is. I have two data cubes, cube1 and cube2:

cube1 = xr.DataArray(cube1, dims=['t', 'y', 'x'],
                     coords={'time': ('t', t_cube1),           # 1D array
                             'lon': (['y', 'x'], X_cube1),     # 2D array!!!
                             'lat': (['y', 'x'], Y_cube1)})    # 2D array!!!

cube2 = xr.DataArray(cube2, dims=['y', 'x', 't'],
                     coords={'time': ('t', t_cube2),           # 1D array
                             'lon': (['y', 'x'], X_cube2),     # 2D array!!!
                             'lat': (['y', 'x'], Y_cube2)})    # 2D array!!!

All I want is to regrid cube1 onto cube2, or in other words, interpolate X_cube2/Y_cube2/t_cube2 onto cube1. Note that I cannot pass X_cube1/Y_cube1 as 1D arrays because they vary for every grid cell (this is a rotated grid). I can, however, pass X_cube2/Y_cube2 as 1D arrays (this is a standard lon/lat grid).

Also note the inverted time dimension between cube1 and cube2 (but that shouldn't be an issue).

So how to perform this operation... or am I missing something?


shoyer commented May 29, 2019

So how to perform this operation... or am I missing something?

Sorry, I don't think there's an easy way to do this directly in xarray right now.

My concern with scipy.interpolate.griddata is that the performance might be miserable... griddata takes an arbitrary stream of data points in a D-dimensional space. It doesn't know if those source data points have a gridded/mesh structure. A curvilinear grid mesh needs to be flattened into a stream of points before being passed to griddata(). That might not be too bad for nearest-neighbour search, but it is very inefficient for the linear/bilinear method, where knowing the mesh structure beforehand can save a lot of computation.

Thinking a little more about this, I wonder if the performance could actually be OK as long as the spatial grid is not too big, i.e., if we reuse the same grid many times for different variables/times.

In particular, SciPy's griddata makes use of either a scipy.spatial.KDTree (for nearest-neighbor lookups) or a scipy.spatial.Delaunay triangulation (for linear interpolation on a triangular mesh). We could build these data structures once (and potentially even cache them in indexes on xarray objects), and likewise calculate the sparse interpolation coefficients once for repeated use.
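That caching idea can be sketched with the standard barycentric-weights trick (a hand-rolled illustration, all names hypothetical): compute the triangulation and weights once, then every field on the same grid becomes a cheap weighted sum.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_weights(src_points, dst_points):
    """Precompute (simplex vertex indices, barycentric weights) for linear interp."""
    tri = Delaunay(src_points)                      # expensive; done once
    simplex = tri.find_simplex(dst_points)
    vertices = tri.simplices[simplex]               # (M, 3) source indices
    # Barycentric coordinates of each destination point within its simplex.
    temp = tri.transform[simplex]
    delta = dst_points - temp[:, 2]
    bary = np.einsum("njk,nk->nj", temp[:, :2], delta)
    weights = np.hstack([bary, 1 - bary.sum(axis=1, keepdims=True)])
    weights[simplex < 0] = np.nan                   # points outside the hull
    return vertices, weights

def apply_weights(values, vertices, weights):
    """Linear interpolation as a weighted sum -- cheap and repeatable."""
    return np.einsum("nj,nj->n", np.take(values, vertices), weights)
```

apply_weights is essentially a sparse matrix-vector product, so it can be called once per time step or variable at a tiny fraction of the cost of a full griddata call.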


JiaweiZhuang commented May 30, 2019

An MPI error?!

@fspaolo Could you post a minimal reproducible example on xESMF's issue tracker? Just to keep this issue clean. The error looks like an ESMF installation problem that can happen on legacy OS, and it can be easily fixed by Docker or other containers.

It is surprising that a package targeting n-dimensional gridded datasets (particularly those from the geo/climate sciences) does not handle such a common task with spatial gridded data.

Just a side comment: this is a common but highly non-trivial task... Even small edge cases like periodic longitudes and polar singularities can cause interesting trouble. Otherwise I would just code up an algorithm in Xarray from scratch instead of relying on a heavy Fortran library. But things will get improved over time...


crusaderky commented May 30, 2019

@fspaolo sorry, I should have taken more time re-reading the initial post. No, xarray_extras.interpolate does not do the kind of interpolation you want. Have you looked into scipy?

https://docs.scipy.org/doc/scipy/reference/interpolate.html#multivariate-interpolation

xarray is just a wrapper, and if scipy does what you need, it's trivial to unwrap your DataArray into a bunch of numpy arrays, feed them into scipy, and then re-wrap the output numpy arrays into a DataArray.
On the other hand, if scipy does not do what you want, then I suspect that opening a feature request on the scipy tracker would be a much better place than the xarray board. As a rule of thumb, any fancy algorithm should first exist for numpy-only data and then potentially it can be wrapped by the xarray library.


crusaderky commented May 30, 2019

I did not test it but this looks like what you want:

import xarray
from scipy.interpolate import bisplrep, bisplev

x = cube1.x.values.ravel()
y = cube1.y.values.ravel()
z = cube1.values.ravel()
x_new = cube2.x.values.ravel()
y_new = cube2.y.values.ravel()
tck = bisplrep(x, y, z)
z_new = bisplev(x_new, y_new, tck)
z_new = z_new.reshape(cube2.shape)
cube3 = xarray.DataArray(z_new, dims=cube2.dims, coords=cube2.coords)

I read above that you have concerns about performance as the above does not understand the geometry of the input data - did you run performance tests on it already?

[EDIT] you will probably need to break your problem down into 1-point slices along dimension t before you apply the above.


fspaolo commented May 30, 2019

@shoyer and @crusaderky That's right, that is how I was actually dealing with this problem prior to trying xarray... by flattening the grid coordinates and performing either gridding (with scipy's griddata) or interpolation (with scipy's map_coordinates)... instead of performing proper regridding (from cube to cube without having to flatten anything).

As a rule of thumb, any fancy algorithm should first exist for numpy-only data and then potentially it can be wrapped by the xarray library.

This is important information.

For the record, here is what I have found so far to be the most performant:

import xarray as xr
from scipy.interpolate import griddata

# Here x/y are dummy 1D coords that won't be used.
da1 = xr.DataArray(cube1, [('t', t_cube1) , ('y', range(cube1.shape[1])), ('x', range(cube1.shape[2]))])

# Regrid t_cube1 onto t_cube2 first since time will always map 1 to 1 between cubes.
# This operation is very fast.
print('regridding in time ...')
cube1 = da1.interp(t=t_cube2).values

# Regrid each 2D field (X_cube1/Y_cube1 onto X_cube2/Y_cube2), one at a time
print('regridding in space ...')
cube3 = np.full_like(cube2, np.nan)
for k in range(t_cube2.shape[0]):
    print('regridding:', k)
    cube3[:,:,k] = griddata((X_cube1.ravel(), Y_cube1.ravel()),
                            cube1[k,:,:].ravel(),
                            (X_cube2, Y_cube2),
                            fill_value=np.nan,
                            method='linear')

Performance is not that bad... for ~150 time steps and ~1500 nodes in x and y it takes less than 10 min.

I think this can be sped up by computing the interpolation weights between grids in the first iteration and cache them (I think xESMF does this).
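Part of that saving is available with plain SciPy already: LinearNDInterpolator accepts a prebuilt Delaunay triangulation, so the expensive triangulation can be hoisted out of the time loop. A sketch with small synthetic stand-ins for the cube variables above:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Small synthetic stand-ins for X_cube1/Y_cube1 (source mesh),
# X_cube2/Y_cube2 (target mesh), and the data cubes.
X1, Y1 = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 15))
X2, Y2 = np.meshgrid(np.linspace(0.1, 0.9, 8), np.linspace(0.1, 0.9, 6))
nt = 3
cube1 = np.stack([X1 + Y1 + k for k in range(nt)])  # (t, y, x)
cube3 = np.full(Y2.shape + (nt,), np.nan)           # (y, x, t)

# Build the triangulation once, outside the loop; reuse it per time step.
tri = Delaunay(np.column_stack([X1.ravel(), Y1.ravel()]))
for k in range(nt):
    interp = LinearNDInterpolator(tri, cube1[k].ravel(), fill_value=np.nan)
    cube3[:, :, k] = interp(X2, Y2)
```

This still recomputes the barycentric weights per step, but the Delaunay build (the dominant cost for large meshes) happens only once.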


fspaolo commented May 30, 2019

@crusaderky I also did the above using bisplrep and bisplev, and it seems it cannot handle a grid of this size:

print(len(x), len(y), len(z))
62880 62880 62880

print(len(x_new), len(y_new))
2665872 2665872

Traceback (most recent call last):
  File "cubesmb.py", line 201, in <module>
    z_new = bisplev(x_new, y_new, tck)
  File "/Users/paolofer/anaconda3/lib/python3.7/site-packages/scipy/interpolate/_fitpack_impl.py", line 1047, in bisplev
    z, ier = _fitpack._bispev(tx, ty, c, kx, ky, x, y, dx, dy)
RuntimeError: Cannot produce output of size 2665872x2665872 (size too large)

So griddata looks like my best option...


shoyer commented May 30, 2019

The naive implementation of splines involves inverting an N x N matrix where N is the total number of grid points. So it definitely is not a very scalable technique.

@crusaderky (Contributor)

@fspaolo where does that huge number come from? I thought you said you have 1500 nodes in total. Did you select a single point on the t dimension before you applied bisplrep?

Also (pardon the ignorance, I never dealt with geographical data), what kind of information does having your lat and lon be bidimensional convey? Does it imply lat[i, j] < lat[i+1, j] and lon[i, j] < lon[i, j+1] for any possible (i, j)?


shoyer commented May 30, 2019

@fspaolo where does that huge number come from? I thought you said you have 1500 nodes in total. Did you select a single point on the t dimension before you applied bisplrep?

2665872 is roughly 1600^2.

Also (pardon the ignorance, I never dealt with geographical data), what kind of information does having your lat and lon be bidimensional convey? Does it imply lat[i, j] < lat[i+1, j] and lon[i, j] < lon[i, j+1] for any possible (i, j)?

I think this is true sometimes but not always. The details depend on the geographic projection, but generally a good mesh has some notion of locality -- nearby locations in real space (i.e., on the globe) should also be nearby in projected space.


Anyways, as I've said above, I think it would be totally appropriate to build routines resembling scipy's griddata into interp() (but using the lower level KDTree/Delaunay interface). This will not be the most efficient strategy, but should offer reasonable performance in most cases. Let's consider this open for contributions, if anyone is interested in putting together a pull request.


fspaolo commented May 30, 2019

@fspaolo where does that huge number come from? I thought you said you have 1500 nodes in total.

Not in total, I meant ~1500 on each x/y dimension (1500 x 1500). To be precise:

print(cube1.shape)  # (t, y, x)
(306, 240, 262)

print(cube2.shape)  # (y, x, t)
(1452, 1836, 104)

Did you select a single point on the t dimension before you applied bisplrep?

Yes.

Also, (pardon the ignorance, I never dealt with geographical data) what kind of information does having your lat and lon being bidimensional convey? Does it imply lat[i, j] < lat[i +1, j] and lon[i, j] < lon[i, j+1] for any possible (i, j)?

It does in my case, but cube1 is a rotated grid, meaning that lat is not constant along the x-axis, nor is lon constant along the y-axis, while cube2 is a standard lon/lat grid, so I can represent it simply by a 1D lon array (x-axis) and a 1D lat array (y-axis). To have them both with 1D coordinates I would have to either rotate cube2 and work with 1D rotated lon/lat, or unrotate cube1 and work with 1D standard lon/lat... but this gets me to the same problem as in the interpolation above, since I have to transform (re-project) every 2D grid in the cube.


shoyer commented May 31, 2019 via email

@kmuehlbauer (Contributor)

Thanks for this interesting discussion. I'm currently at the point where I'm moving interpolation functions to an xarray-based workflow. While trying to wrap my head around this I found that this involves not only interpolation but also indexing (see #1603, #2195, #2986). Sorry if this exceeds the original intention of the issue, but it is my real use case (curvilinear grids to Cartesian).

citing @shoyer's comments for convenience

In particular, SciPy's griddata makes use of either a scipy.spatial.KDTree (for nearest-neighbor lookups) or a scipy.spatial.Delaunay triangulation (for linear interpolation on a triangular mesh). We could build these data structures once (and potentially even cache them in indexes on xarray objects), and likewise calculate the sparse interpolation coefficients once for repeated use.

Anyways, as I've said above, I think it would be totally appropriate to build routines resembling scipy's griddata into interp() (but using the lower level KDTree/Delaunay interface). This will not be the most efficient strategy, but should offer reasonable performance in most cases. Let's consider this open for contributions, if anyone is interested in putting together a pull request.

Yes, if we cache the Delaunay triangulation we could probably do the entire thing in about the time it currently takes to do one time step.

Our interpolators are built upon scipy's cKDTree. They are created once for some source and target grid configuration and then just called with the desired data. The interpolator is cached in the dataset accessor for multiple uses. But this only makes sense if there are multiple variables within the dataset. I'm thinking about how to reuse the cached interpolator for other datasets with the same source and target configuration. The same would be true for tree-based indexers, if they become available in xarray.

My current approach would be to create an xarray dataset dsT with source dimension/coordinates (and target dimensions/coordinates) and the created tree. If the source has some projection attached one could give another projection target and the target dimensions/coordinates will be created accordingly (but this could be wrapped by other packages, like geoxarray, ping @djhoese). One could even precalculate target dists, idx from the tree for faster access (I do this). Finally there should be something like ds_res = ds_src.interp_like(dsT) where one can reuse this dataset.

I'm sure I can get something working within my workflow using accessors but it would be better fitted in xarray itself imho.
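The cached-tree pattern described above might be sketched like this (class name, the flat lon/lat point representation, and the distance cutoff are illustrative only):

```python
import numpy as np
from scipy.spatial import cKDTree

class NearestRegridder:
    """Nearest-neighbour regridder: the tree and the (dists, idx) lookup are
    computed once per (source, target) grid pair and reused for every field."""

    def __init__(self, src_lon, src_lat, dst_lon, dst_lat, max_dist=np.inf):
        tree = cKDTree(np.column_stack([src_lon.ravel(), src_lat.ravel()]))
        dst = np.column_stack([dst_lon.ravel(), dst_lat.ravel()])
        self.dists, self.idx = tree.query(dst)    # precomputed once
        self.shape = dst_lon.shape
        self.max_dist = max_dist

    def __call__(self, field):
        out = field.ravel()[self.idx].astype(float)
        out[self.dists > self.max_dist] = np.nan  # mask far-away matches
        return out.reshape(self.shape)
```

Regridding a second variable on the same grid pair is then just another call, with no new tree build.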


djhoese commented Jul 3, 2019

@kmuehlbauer Thanks for the ping. I don't have time to read this whole thread, but based on your comment I have a few things I'd like to point out. First, the pykdtree package is a good alternative to the scipy kdtree implementation. It has been shown to be much faster and uses OpenMP for parallel processing. Second, the pyresample library is my main way of resampling geolocated data. We use it in Satpy for resampling, but right now we haven't finalized the interfaces, so things are kind of spread between satpy and pyresample as far as easy xarray handling goes. Pyresample uses SwathDefinition and AreaDefinition objects to define the geolocation of the data. In Satpy the same KDTree is used for every in-memory gridding, but we also allow a cache_dir which will save the indexes for every (source, target) area pair used in the resampling.

I'm hoping to sit down and get some geoxarray stuff implemented during SciPy next week, but usually get distracted by all the talks so no promises. I'd like geoxarray to provide a low level interface for getting and converting CRS and geolocation information on xarray objects and leave resampling and other tasks to libraries like pyresample and rioxarray.


tollers commented Nov 14, 2019

Have there been any updates on the handling of multi-dimensional co-ordinates in xarray, in particular interpolation/indexing using such co-ordinates, as discussed here? For geographical data with curvilinear grids (using latitudes and longitudes) this issue can become a real headache to deal with.


lgramer commented Sep 4, 2020

I find xarray so useful in many ways, for which I'm grateful. But there are some current limitations that force me to hesitate before recommending it to colleagues. One is this issue - lack of support (or rather, I suspect simply no "clearly stated support"?) for curvilinear coordinate systems, which are pretty much ubiquitous in the work I do. The other issue which causes me to pause before recommending xarray wholeheartedly is the complexity (and frequent slowness and errors, still - all previously reported) in dealing with GRIB2 file formats that include multiple vertical coordinate systems (e.g., products from the NCEP Unified Post Processing System - UPP). But that's an issue for another thread... Any movement on wrapping scipy griddata (or some suitably more sophisticated scipy tool) within xarray's interface?
