
ENH: CloughTocher2DInterpolator multiple times with different values #18376

Merged
merged 17 commits on Jun 17, 2023

Conversation

hovavalon
Contributor

Reference issue

#13306

What does this implement/fix?

The PR adds the ability to change the interpolation values of an existing CloughTocher2DInterpolator object while reusing the barycentric coordinates of the interpolation points.
In use cases where only the values at the given points change, this saves most of the function's runtime.
The PR includes basic tests.
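The motivating pattern is many interpolations over the same point set with changing values. With the public API as it stands, each iteration rebuilds the interpolator and pays the full setup cost again; the sizes and data below are illustrative only:

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

rng = np.random.default_rng(0)
points = rng.random((100, 2))   # scattered data sites
xi = rng.random((50, 2))        # fixed evaluation points

# Only the values change between iterations, yet every loop pays the full
# construction cost (triangulation + barycentric coordinates) again.
for t in range(3):
    values = np.sin((t + 1) * points[:, 0])
    result = CloughTocher2DInterpolator(points, values)(xi)
```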

Additional information

This is my first PR to scipy, and I am not 100% sure how to document the API; help and guidance would be appreciated.

@hovavalon hovavalon requested a review from ev-br as a code owner April 27, 2023 09:05
@j-bowhay j-bowhay added enhancement A new feature or improvement scipy.interpolate labels Apr 27, 2023
@hovavalon
Contributor Author

Hi, I am having trouble understanding the outputs of the failed CI tests.
When I compile the code locally with meson it works, but the CI fails with the following error without additional explanation:
meson.build:1:0: ERROR: Cython compiler 'cython' cannot compile programs
How should I approach this?

@ev-br
Member

ev-br commented May 1, 2023

You can ignore this error; it's not related to your changes and is being worked on separately.

@ev-br
Member

ev-br commented May 1, 2023

Have to admit, I don't like this in its present form. The reasons mirror #10860 (comment): it essentially adds global state to objects, and the result is hard to reason about and hard to maintain.

Maybe there is a way to rework this similar to gh-17230 : add necessary hooks for a downstream subclass to override.

@ev-br ev-br added the needs-work Items that are pending response from the author label May 1, 2023
@hovavalon
Contributor Author

Ok, I see your point.
I have added _set_xi and _set_values functions to NDInterpolatorBase and implemented them for CloughTocher2DInterpolator.
Also, I have updated the tests accordingly.
What do you think?

@hovavalon
Contributor Author

@ev-br I would like to know your opinion on the following matter:
The _set_xi function I implemented in CloughTocher2DInterpolator is also relevant to LinearNDInterpolator, but not to NearestNDInterpolator, which also subclasses NDInterpolatorBase.
How should I approach the code reuse of _set_xi in this case?

Also, qhull._barycentric_coordinates is called in qhull._find_simplex for every point, but even though _clough_tocher_2d_single receives the output, it overwrites it.
In the case I am working on, removing the call saved 10% of _do_evaluate's runtime in CloughTocher2DInterpolator. Is it OK if I add the commit removing it to this PR?

@ev-br
Member

ev-br commented May 5, 2023

The function _set_xi I implemented in CloughTocher2DInterpolator is also relevant to LinearNDInterpolator, but not to NearestNDInterpolator which also subclasses NDInterpolatorBase.
How should I approach the code reuse of _set_xi in this case?

I'd just override it in NearestNDInterpolator with an empty implementation, plus a code comment to explain why.

Also, qhull._barycentric_coordinates is called in qhull._find_simplex for every point, but even though _clough_tocher_2d_single receives the output, it overwrites it.
In the case I am working on, removing the call saved 10% of _do_evaluate's runtime in CloughTocher2DInterpolator. Is it OK if I add the commit removing it to this PR?

Certainly, if it's not needed, it's best removed. That's an "if", though: this code has not been looked at for quite a while, so it'd be great if you could take a close look.

One other thing that would be useful is to run an asv benchmark (similar to what was done for the RGI) to double-check there are no slowdowns.
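An asv benchmark has the usual shape: a class with a setup method and time_* methods that asv times repeatedly. The class name, sizes, and method below are hypothetical; scipy's real benchmarks live under benchmarks/benchmarks/:

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

class CloughTocherSetValues:
    """asv-style benchmark sketch: asv calls setup(), then times time_*()."""

    def setup(self):
        rng = np.random.default_rng(1234)
        self.points = rng.random((500, 2))
        self.values = np.sin(self.points[:, 0])
        self.xi = rng.random((100, 2))

    def time_rebuild_per_call(self):
        # Baseline: pay the full construction cost on every evaluation.
        CloughTocher2DInterpolator(self.points, self.values)(self.xi)

# asv would drive this itself; run once by hand as a sanity check:
bench = CloughTocherSetValues()
bench.setup()
bench.time_rebuild_per_call()
```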

@hovavalon hovavalon force-pushed the clough-tocher-set-values branch 2 times, most recently from 0ca7d38 to 8fb293e Compare May 7, 2023 08:07
@hovavalon
Contributor Author

I have updated the PR so that the barycentric coordinates are now calculated in NDInterpolatorBase._set_xi, which isn't called in NDInterpolatorBase itself.

About the call to qhull._barycentric_coordinates: I think it might be necessary after all and will look into it at some point, but it will not get into this PR.

@ev-br
Member

ev-br commented May 7, 2023

Note that CI errors are real and need to be fixed.

@ev-br
Member

ev-br commented May 7, 2023

My suggestion is that users maintain a subclass: #18376 (comment)

@hovavalon
Contributor Author

@ev-br Ok, I misunderstood your first comments, sorry for that :(
I have removed the changes to the existing API and kept the _set_values and _set_xi functions for possible use in subclasses.
What do you think?

Member

@ev-br ev-br left a comment


This is getting closer, but there's still work to do. Ideally, we don't attach a bunch of attributes to instances.

Review comments on scipy/interpolate/_ndgriddata.py and scipy/interpolate/interpnd.pyx (resolved).
@hovavalon
Contributor Author

What do you think about the current form?
I have also rebased on the main branch and solved conflicts.

Member

@ev-br ev-br left a comment


This is getting close!

I've left several small-ish comments; two big-picture asks are:

  • can you verify that a subclass does what you want it to?
  • we'd need a benchmark to check that there is no big perf regression (might want to take a look at what RGI PR did).

Review comments on scipy/interpolate/_ndgriddata.py and scipy/interpolate/interpnd.pyx (resolved).
@hovavalon
Contributor Author

I have fixed the code according to your comments, and added a benchmark test with an example subclass.
As I noted, griddata is already benchmarked, so I don't think it is necessary to benchmark the old code.
I had problems running the benchmark, getting the following error:

Fatal error in launcher: Unable to create process using '"D:\bld\asv_1636402977906\_h_env\python.exe"  "C:\Users\USER\anaconda3\envs\scipy-dev\Scripts\asv.exe" ': The system cannot find the file specified.

If you have any clue how to solve this issue I would be grateful.
Thanks!

@ev-br
Member

ev-br commented Jun 13, 2023

Don't know what the error is, sorry. It's something about asv (the benchmark runner), so it's probably something about how you installed scipy. As a data point, I just ran the benchmarks for the first time in several years and they basically worked (more below). My dev setup is $ mamba env create -f environment.yml on linux, so bog-standard.

There is indeed some issue with bench --compare. However, just running a single benchmark works, and I can switch branches manually. It looks like this PR is some 1.7 times slower than main for larger input sizes. This needs looking into.

Running $ python dev.py bench -t GridData I get:

On a PR (rebased on main):
==========================

$ python dev.py bench -t GridData
· No `environment_type` specified in asv.conf.json. This will be required in the future.
· Discovering benchmarks
· Running 1 total benchmarks (1 commits * 1 environments * 1 benchmarks)
[  0.00%] ·· Benchmarking existing-py_home_br_mambaforge_envs_scipy-dev_bin_python
[ 50.00%] ··· Running (interpolate.GridData.time_evaluation--).
[100.00%] ··· interpolate.GridData.time_evaluation                                    ok
[100.00%] ··· ========= ============ ============= =============
              --                         method                 
              --------- ----------------------------------------
               n_grids    nearest        linear        cubic    
              ========= ============ ============= =============
                 10j      275±3μs     6.21±0.05ms   7.24±0.08ms 
                 100j    3.82±0.2ms   8.62±0.08ms    11.3±0.5ms 
                1000j     320±5ms       181±2ms       331±1ms   
              ========= ============ ============= =============



On main ('1.12.0.dev0+1164.5f9e871') :
================================

$ python dev.py bench -t GridData
💻  ninja -C /home/br/repos/scipy/scipy/build -j4
ninja: Entering directory `/home/br/repos/scipy/scipy/build'
[7/7] Linking target scipy/interpolate/interpnd.cpython-310-x86_64-linux-gnu.so
Build OK
💻  meson install -C build --only-changed
Installing, see meson-install.log...
Installation OK
SciPy from development installed path at: /home/br/repos/scipy/scipy/build-install/lib/python3.10/site-packages
Running benchmarks for Scipy version 1.12.0.dev0+1164.5f9e871 at /home/br/repos/scipy/scipy/build-install/lib/python3.10/site-packages/scipy
· No `environment_type` specified in asv.conf.json. This will be required in the future.
· Discovering benchmarks
· Running 1 total benchmarks (1 commits * 1 environments * 1 benchmarks)
[  0.00%] ·· Benchmarking existing-py_home_br_mambaforge_envs_scipy-dev_bin_python
[ 50.00%] ··· Running (interpolate.GridData.time_evaluation--).
[100.00%] ··· interpolate.GridData.time_evaluation                                    ok
[100.00%] ··· ========= ============ ============ ============
              --                        method                
              --------- --------------------------------------
               n_grids    nearest       linear       cubic    
              ========= ============ ============ ============
                 10j      289±10μs    6.69±0.3ms   7.75±0.6ms 
                 100j    3.88±0.2ms   7.42±0.2ms   9.02±0.2ms 
                1000j     339±3ms      58.3±1ms     191±2ms   
              ========= ============ ============ ============

@hovavalon
Contributor Author

I will look into it and update when I find the reasons.
For now I have pushed a rebased version of the branch; you may ignore it.

@hovavalon
Contributor Author

Since I can't run the benchmarks on my computer for now, I have written a small benchmark of my own, and the differences you report do not show up in it, even though it is based entirely on the GridData benchmark.

The code is as follows:

from scipy import interpolate
import numpy as np


for method in ['nearest', 'linear', 'cubic']:
    s = f"""
import numpy as np
from scipy import interpolate
class GridData:

    def __init__(self, n_grids, method):
        self.func = lambda x, y: x*(1-x)*np.cos(4*np.pi*x) * np.sin(4*np.pi*y**2)**2
        self.grid_x, self.grid_y = np.mgrid[0:1:n_grids, 0:1:n_grids]
        self.points = np.random.rand(1000, 2)
        self.values = self.func(self.points[:, 0], self.points[:, 1])
        self.method = method

    def time_evaluation(self):
        interpolate.griddata(self.points, self.values, (self.grid_x, self.grid_y), method=self.method)
grid_data = GridData(1000, '{method}')
"""
    import timeit
    data = timeit.repeat("grid_data.time_evaluation()", s, number=100, repeat=15)
    print(f"# {method}:")
    print(data)
    print(f"# Mean: {np.mean(data)}")
    print(f"# STD: {np.std(data)}")

And I obtain the following results:

#Branch:

# nearest:
[0.01634670002385974, 0.016986999893561006, 0.01597739988937974, 0.015783799812197685, 0.014918799977749586, 0.015606899978592992, 0.01522199995815754, 0.015267099952325225, 0.01469009998254478, 0.016034400090575218, 0.015890099806711078, 0.017759300069883466, 0.016209400026127696, 0.015358799835667014, 0.015316699864342809]
# Mean: 0.01582456661077837
# STD: 0.0007699367436863699
# linear:
[3.3771730000153184, 3.081603200174868, 3.154085800051689, 3.052978300023824, 3.125168399885297, 3.0178121998906136, 3.0375663999002427, 3.0500803000759333, 3.041135599836707, 3.074869600124657, 3.0457837998401374, 2.956153300125152, 3.0647458999883384, 2.901043600169942, 5.2550554999616]
# Mean: 3.215683660004288
# STD: 0.5542656244427543
# cubic:
[3.218328900169581, 3.1084858998656273, 3.276013000169769, 3.171105199959129, 3.255917799891904, 3.2812592999543995, 3.093714299844578, 3.1348842000588775, 3.3422193999867886, 3.104086200008169, 3.206670899875462, 3.169537999900058, 3.154295900138095, 3.2070639999583364, 3.1233502000104636]
# Mean: 3.189795546652749
# STD: 0.07185193069796424

#Main:
# nearest:
[0.016603800002485514, 0.016050299862399697, 0.015192899852991104, 0.015695499954745173, 0.015167100122198462, 0.01509879995137453, 0.014981599990278482, 0.02002110006287694, 0.016280499985441566, 0.014667100040242076, 0.014549100073054433, 0.014966800110414624, 0.015341500053182244, 0.015880800085142255, 0.016552599845454097]
# Mean: 0.015803299999485414
# STD: 0.0012916204980256561
# linear:
[8.991146600106731, 6.971040299860761, 3.647712399950251, 3.721146099967882, 2.8603197000920773, 3.0786919000092894, 3.758976799901575, 2.8815309000201523, 3.017121700104326, 2.9204843998886645, 2.8852842000778764, 2.95154959987849, 2.880782499909401, 3.0072578999679536, 2.8994682000484318]
# Mean: 3.7648342133189243
# STD: 1.7222134976627812
# cubic:
[3.002680500037968, 3.09849979984574, 2.945954800117761, 2.956872000126168, 3.4364565999712795, 3.616613200167194, 3.1236976999789476, 2.999945000046864, 2.909005500143394, 3.0166422000620514, 2.9343109000474215, 2.9666826999746263, 3.161001200089231, 2.9111989999655634, 3.0782538999337703]
# Mean: 3.077187666700532
# STD: 0.19408612016837506

The changes are all within the standard deviation.
Would you mind running this code on your system and uploading your results here?
Thanks!

@ev-br
Member

ev-br commented Jun 13, 2023

OK, something's amiss with asv indeed:

On branch
=========

In [1]: from scipy import interpolate
   ...: import numpy as np
   ...: 
   ...: 
   ...: import cProfile
   ...: for method in ['nearest', 'linear', 'cubic']:
   ...:     s = f"""
   ...: import numpy as np
   ...: from scipy import interpolate
   ...: class GridData:
   ...: 
   ...:     def __init__(self, n_grids, method):
   ...:         self.func = lambda x, y: x*(1-x)*np.cos(4*np.pi*x) * np.sin(4*np.pi*y**2)**2
   ...:         self.grid_x, self.grid_y = np.mgrid[0:1:n_grids, 0:1:n_grids]
   ...:         np.random.seed(1234)
   ...:         self.points = np.random.rand(1000, 2)
   ...:         self.values = self.func(self.points[:, 0], self.points[:, 1])
   ...:         self.method = method
   ...: 
   ...:     def time_evaluation(self):
   ...:         interpolate.griddata(self.points, self.values, (self.grid_x, self.grid_y), method=self.method)
   ...: grid_data = GridData(1000, '{method}')
   ...: """
   ...:     import timeit
   ...:     data = timeit.repeat("grid_data.time_evaluation()", s, number=100, repeat=15)
   ...:     print(f"# {method}:")
   ...:     print(data)
   ...:     print(f"# Mean: {np.mean(data)}")
   ...:     print(f"# STD: {np.std(data)}")
   ...: 
# nearest:
[0.02557973100010713, 0.023184649000540958, 0.024506173000190756, 0.024528459000066505, 0.023678429000938195, 0.02351165599975502, 0.023262643000634853, 0.0236918959999457, 0.023207704000014928, 0.023187222999695223, 0.023137394000514178, 0.023377522998998757, 0.023060597999574384, 0.022867908999614883, 0.022865633000037633]
# Mean: 0.02357650800004194
# STD: 0.0007237383117278945
# linear:
[0.7619768980002846, 0.7468089530011639, 0.7534009959999821, 0.7523642169999221, 0.7485727120001684, 0.7426214329989307, 0.7442682660002902, 0.7567170870006521, 0.7511134399992443, 0.740202926001075, 0.75225791900084, 0.7635911179986579, 0.758304341999974, 0.7555501829992863, 0.7519907589994546]
# Mean: 0.7519827499333284
# STD: 0.006516834509967425
# cubic:
[0.8645732789991598, 0.8643797979984811, 0.8622947559997556, 0.8603852550004376, 0.8489313549998769, 0.8567594539999845, 0.8631329010004265, 0.86117645000013, 0.8820437230006064, 0.8602619190005498, 0.852262829999745, 0.8535788680001133, 0.8792844020008488, 0.9921078919996944, 0.8674158220001118]
# Mean: 0.8712392469333281
# STD: 0.03342791615135722

In [2]: import scipy

In [3]: scipy.__version__
Out[3]: '1.12.0.dev0+1179.756b7e9'



On main
=======

In [1]: from scipy import interpolate
   ...: import numpy as np
   ...: 
   ...: 
   ...: import cProfile
   ...: for method in ['nearest', 'linear', 'cubic']:
   ...:     s = f"""
   ...: import numpy as np
   ...: from scipy import interpolate
   ...: class GridData:
   ...: 
   ...:     def __init__(self, n_grids, method):
   ...:         self.func = lambda x, y: x*(1-x)*np.cos(4*np.pi*x) * np.sin(4*np.pi*y**2)**2
   ...:         self.grid_x, self.grid_y = np.mgrid[0:1:n_grids, 0:1:n_grids]
   ...:         np.random.seed(1234)
   ...:         self.points = np.random.rand(1000, 2)
   ...:         self.values = self.func(self.points[:, 0], self.points[:, 1])
   ...:         self.method = method
   ...: 
   ...:     def time_evaluation(self):
   ...:         interpolate.griddata(self.points, self.values, (self.grid_x, self.grid_y), method=self.method)
   ...: grid_data = GridData(1000, '{method}')
   ...: """
   ...:     import timeit
   ...:     data = timeit.repeat("grid_data.time_evaluation()", s, number=100, repeat=15)
   ...:     print(f"# {method}:")
   ...:     print(data)
   ...:     print(f"# Mean: {np.mean(data)}")
   ...:     print(f"# STD: {np.std(data)}")
   ...: 
# nearest:
[0.025523689000692684, 0.02247163299944077, 0.022545670999534195, 0.022369820000676555, 0.022308479999992414, 0.022520472999531194, 0.02260318099979486, 0.022417642001528293, 0.023387627999909455, 0.022653680000075838, 0.0226165990006848, 0.023018896999928984, 0.02229600600003323, 0.02233276800143358, 0.02216659400073695]
# Mean: 0.02274885073359959
# STD: 0.0007984673032944607
# linear:
[0.7587269060004473, 0.7581143560000783, 0.752685040999495, 0.7534539650005172, 0.7560719280008925, 0.7641419390001829, 0.7879189149989543, 0.8076752240012866, 0.7773737789993902, 0.791687811999509, 0.8541154539998388, 0.7797074459995201, 0.7732343389998277, 0.8183191059997625, 0.8905156689997966]
# Mean: 0.7882494585999666
# STD: 0.03862530138225321
# cubic:
[0.9405428789996222, 0.9220333979992574, 0.8723713509989466, 0.8660806400002912, 0.8676546250007959, 0.9542525950000709, 0.9449014929996338, 0.8888174570001866, 0.8743216240000038, 0.8686665549994359, 0.8700385070005723, 0.8670973509997566, 0.8747927820004406, 0.8678592090000166, 0.8703313469995919]
# Mean: 0.8899841208665749
# STD: 0.03145277567041631

@hovavalon
Contributor Author

Ok, so what else is needed for the PR to be approved?

Member

@ev-br ev-br left a comment


OK, this is getting very very close. Two small comments, and then

  • if someone has an idea of what's going on with asv benchmarks, that would be helpful
  • can you verify that this PR improves the OP use case?

Review comments on scipy/interpolate/interpnd.pyx and scipy/interpolate/_ndgriddata.py (resolved).
@hovavalon
Contributor Author

Here is a working example of subclassing:

from scipy import interpolate
import numpy as np



shared_setup = """
from scipy import interpolate
import numpy as np
n_grids = 1000

func = lambda x, y: x*(1-x)*np.cos(4*np.pi*x) * np.sin(4*np.pi*y**2)**2
grid_x, grid_y = np.mgrid[0:1:n_grids, 0:1:n_grids]
points = np.random.rand(1000, 2)
values = [func(points[:, 0], points[:, 1]) + t for t in np.arange(-1,1.1,0.1)]
"""

oldclass = """
for val in values:
    interpolate.CloughTocher2DInterpolator(points, val)(grid_x, grid_y)
"""

newclass_setup = """
class CloughTocherInterpolatorValues(interpolate.CloughTocher2DInterpolator):
    def __init__(self, points, xi, tol=1e-6, maxiter=400, **kwargs):
        interpolate.CloughTocher2DInterpolator.__init__(self, points, None, tol=tol, maxiter=maxiter)
        self.xi = None
        self._preprocess_xi(*xi)
        self.simplices, self.c = interpolate.CloughTocher2DInterpolator._find_simplicies(self, self.xi)

    def _preprocess_xi(self, *args):
        if self.xi is None:
            self.xi, self.interpolation_points_shape = interpolate.CloughTocher2DInterpolator._preprocess_xi(self, *args)
        return self.xi, self.interpolation_points_shape
    
    def _find_simplicies(self, xi):
        return self.simplices, self.c

    def __call__(self, values):
        self._set_values(values)
        return super().__call__(self.xi)
"""

newclass_code = """
interpolator = CloughTocherInterpolatorValues(points, (grid_x, grid_y))
for val in values:
    interpolator(val)
"""



run_ops = {
    "old": [oldclass, shared_setup],
    "subclass": [newclass_code, "\n".join([shared_setup, newclass_setup])] 
}

import timeit
for run_op in run_ops:
    data = timeit.repeat(run_ops[run_op][0], run_ops[run_op][1], number=10, repeat=10)
    print(f"# {run_op}:")
    print(data)
    print(f"# Mean: {np.mean(data)}")
    print(f"# STD: {np.std(data)}")

The results are:

# old:
[8.16984250000678, 7.453101999824867, 7.209543599979952, 7.252109300112352, 7.754185999976471, 7.364752599969506, 6.851857700152323, 6.9910080002155155, 8.941267400048673, 7.7140397999901325]
# Mean: 7.570170890027657
# STD: 0.5858832658034183
# subclass:
[0.42856690008193254, 0.43522219988517463, 0.5436153998598456, 0.5385737998876721, 0.4428130001761019, 0.40720779984258115, 0.39274090016260743, 0.43384339986369014, 0.4344679000787437, 0.4395203001331538]
# Mean: 0.4496571599971503
# STD: 0.04802008195881562

In this case the operation is more than 15× faster, and the more sets of values there are to interpolate, the larger the difference gets. This is very helpful for realtime applications.

I have also fixed both comments and updated the benchmark class (in case the benchmarks ever work again...)

@hovavalon
Contributor Author

It turns out I cheated a little bit: the original code could have been made faster if only one Delaunay triangulation had been performed for all the iterations.
Anyway, even when both classes are passed the precomputed triangulation, the time differences are on the same scale:

# old:
[5.451684499857947, 6.480922400020063, 6.2209514998830855, 6.5187896999996156, 7.7494192000012845, 7.614329699892551, 6.9236576000694185, 8.53692699992098, 8.009030600078404, 7.648563799913973]
# Mean: 7.115427599963732
# STD: 0.9009854432717213
# subclass:
[0.46789659978821874, 0.4547214999329299, 0.4393583999481052, 0.4584745999891311, 0.4434726999606937, 0.4415132000576705, 0.44243260007351637, 0.4472308000549674, 0.4540659000631422, 0.43496869993396103]
# Mean: 0.4484134999802336
# STD: 0.009615146765158272
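Sharing one triangulation across interpolators is possible through the public API: the constructor accepts a precomputed scipy.spatial.Delaunay object in place of the raw points. A minimal sketch with illustrative sizes:

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((200, 2))
xi = rng.random((50, 2))

# Triangulate once, then reuse the triangulation for every set of values.
tri = Delaunay(points)
for t in range(3):
    values = np.sin(points[:, 0] + t)
    out = CloughTocher2DInterpolator(tri, values)(xi)
```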

@hovavalon
Contributor Author

Are the errors related to me?
I don't think so, but your confirmation would surely help :)

@ev-br
Member

ev-br commented Jun 13, 2023

Hmm... there seems to be a slight git glitch: submodule changes should not be there.
I'd do

$ git fetch upstream
$ git rebase upstream/main
$ git submodule update --init      # this

then $ git status should not show anything about boostmath. Then force-push:

$ git push origin HEAD -f

@hovavalon
Contributor Author

Fixed (In a slightly different way).

Member

@ev-br ev-br left a comment


LGTM now!

CI failure seems unrelated.

@hovavalon
Contributor Author

Yay!
What else should be done in order for the branch to be merged?

@ev-br ev-br merged commit 3c54130 into scipy:main Jun 17, 2023
23 of 24 checks passed
@ev-br
Member

ev-br commented Jun 17, 2023

Merged. Thank you @hovavalon, and congratulations on what I believe is your first scipy contribution. Keep them coming!

@ev-br ev-br added this to the 1.12.0 milestone Jun 17, 2023
@ev-br ev-br removed the needs-work Items that are pending response from the author label Dec 4, 2023