This repository has been archived by the owner on Dec 1, 2023. It is now read-only.

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →

Try passing into scipy.optimize.differential_evolution #42

Closed
saulshanabrook opened this issue Apr 3, 2019 · 2 comments

Comments

@saulshanabrook
Collaborator

saulshanabrook commented Apr 3, 2019

The idea here is to take a scipy function like scipy.optimize.differential_evolution and be able to execute it on different hardware.

If execution stays eager, we could possibly do this by passing in a CuPy array or something similar.

The more general case, however, is to first build up an expression graph for the computation as written, then choose how to execute it. This would be useful if we want to explore any sort of MoA optimizations or translate to a non-eager form like TensorFlow. By not doing things eagerly, we can optimize in ways that are otherwise impossible.

Numba, for example, takes a whole function and optimizes it as a unit, and you can get much better performance this way.

https://github.com/scipy/scipy/blob/v1.2.1/scipy/optimize/_differentialevolution.py#L649
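For reference, a minimal eager call to scipy.optimize.differential_evolution looks like this (a sketch with a toy objective, assuming SciPy is installed):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy objective: the optimizer repeatedly calls this with candidate
# parameter vectors sampled from within the bounds.
def objective(x):
    return np.sum(x ** 2)

bounds = [(-5.0, 5.0), (-5.0, 5.0)]
result = differential_evolution(objective, bounds, seed=0, tol=1e-6)
print(result.x)  # should land close to [0, 0]
```

Any graph-building array type we pass in would have to flow through a call like this transparently.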

@saulshanabrook
Collaborator Author

I took a look at this function to see what we would need to "mock" to be able to turn this function into an expression graph.

One question is how we get the data into the graph instead of as a literal. The data comes from an RNG, so we can pass in a custom RNG object that the code accepts as a numpy RNG (e.g. a RandomState subclass).
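A sketch of such a fake RNG (the class name and recording scheme are illustrative, not an existing API): subclassing numpy's RandomState means isinstance checks like SciPy's check_random_state accept it, while we intercept the sampling calls.

```python
import numpy as np

class RecordingRandomState(np.random.RandomState):
    """Passes isinstance(rng, np.random.RandomState) checks while
    recording each sampling call, so we can see how random data
    enters the computation (and later return graph nodes instead)."""

    def __init__(self, seed=None):
        super().__init__(seed)
        self.calls = []

    def uniform(self, low=0.0, high=1.0, size=None):
        self.calls.append(("uniform", low, high, size))
        return super().uniform(low, high, size)

rng = RecordingRandomState(0)
sample = rng.uniform(size=(3, 2))
```

A graph-building version would return expression nodes from uniform instead of concrete arrays.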

So that can work. But then the core flow of the algorithm is a loop that checks on each iteration whether we are done.

We cannot override this control flow to turn it into a graph. jax has the same issue: it cannot trace constructs like if statements when they depend on dynamic data.

So it seems that to make this library useful for passing into SciPy, we would need a way to process control flow.

Numba does this already.
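A tiny sketch of why the loop cannot be captured (SymbolicArray is a hypothetical placeholder, not this library's API): Python must reduce a loop or branch condition to a concrete True or False, and a graph node has no such value.

```python
class SymbolicArray:
    """Hypothetical node in an expression graph. Comparisons build
    new nodes, but truth-testing has no graph representation, so it
    fails -- the same limitation jax hits with Python if statements."""

    def __init__(self, expr):
        self.expr = expr

    def __gt__(self, other):
        return SymbolicArray(("gt", self.expr, other))

    def __bool__(self):
        raise TypeError("cannot branch on symbolic data")

x = SymbolicArray("x")
cond = x > 0.5   # fine: builds a graph node ("gt", "x", 0.5)
try:
    if cond:     # fails: Python needs a concrete boolean here
        pass
except TypeError as exc:
    print(exc)
```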

If we take a step back, what we need is to translate between statement blocks and an expression graph. The compiler community has explored this as translating between control flow graphs and value state dependence graphs:

(screenshot attached, 2019-04-05)
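One concrete direction from that literature is to replace Python-level loops with an explicit loop primitive, so the whole loop becomes a single value in the graph. A sketch in the style of jax.lax.while_loop (evaluated eagerly here for simplicity; a graph version would record cond_fn and body_fn as subgraphs instead of running them):

```python
def while_loop(cond_fn, body_fn, init_state):
    """Structured loop primitive: the condition and body are passed as
    functions, so a tracer could capture them rather than execute them."""
    state = init_state
    while cond_fn(state):
        state = body_fn(state)
    return state

# The optimizer's tolerance-based termination, restated against the
# primitive (illustrative numbers, not SciPy's actual update rule):
def keep_going(state):
    best, n = state
    return n < 5 and best > 1e-3

def halve(state):
    best, n = state
    return (best / 2.0, n + 1)

final_best, n_steps = while_loop(keep_going, halve, (1.0, 0))
```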

cc @rgommers, who raised the original question.

@rgommers
Contributor

rgommers commented Apr 5, 2019

> So it seems that to make this library useful for passing into SciPy, we would need a way to process control flow.

I think this particular problem is mostly specific to scipy.optimize. Pretty much all optimizers and root finders have this kind of tolerance-based termination of iteration. Most of the rest of SciPy does not.

The issue you'll encounter more often is probably functions that are implemented in compiled code, or that call compiled code, and expect real numpy arrays under the hood.

@metadsl metadsl locked and limited conversation to collaborators Jul 26, 2022
@saulshanabrook saulshanabrook converted this issue into discussion #135 Jul 26, 2022

