
REPS / C-REPS dual function gradient #70

Closed

RicardoDominguez opened this issue Aug 7, 2018 · 7 comments


@RicardoDominguez
Contributor

When minimizing the dual function in REPS / C-REPS, why is the gradient approximated numerically rather than using the analytical expressions provided here? Is this for numerical stability?
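For reference, this is roughly the computation I mean; a minimal sketch of the REPS dual and its analytical gradient following the standard formulation (the names and the scipy setup here are mine, not necessarily what this repo uses):

```python
import numpy as np
from scipy.optimize import minimize

def reps_dual_and_grad(eta_arr, R, epsilon):
    """REPS dual g(eta) and its analytical gradient d g / d eta.

    R: sample returns, epsilon: KL bound. The maximum return is
    subtracted before exponentiating for numerical stability.
    """
    eta = eta_arr[0]
    R_max = R.max()
    w = np.exp((R - R_max) / eta)          # stabilized weights
    log_mean_exp = R_max / eta + np.log(w.mean())
    g = eta * epsilon + eta * log_mean_exp
    # d g / d eta = epsilon + log-mean-exp - (weighted mean of R) / eta
    grad = epsilon + log_mean_exp - (w @ R) / (w.sum() * eta)
    return g, np.array([grad])

# scipy can then consume the analytical gradient via jac=True:
R = np.random.randn(100)
res = minimize(reps_dual_and_grad, x0=np.array([1.0]),
               args=(R, 0.1), jac=True,
               method="L-BFGS-B", bounds=[(1e-8, None)])
```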

@AlexanderFabisch
Contributor

Hi @RicardoDominguez,

I don't remember exactly what the reason for that was. It would be interesting to find out whether the numerical solution is more stable or slower, though. It shouldn't be too difficult to change the implementation to use the analytical solution. If you have any insight, please let us know. I will leave this issue open for now.

@RicardoDominguez
Contributor Author

I'll change the implementation and see how it affects performance and stability. When I have some insights, I'll get back to you.

@RicardoDominguez
Contributor Author

Hey @AlexanderFabisch,

I have run some tests comparing the numerical vs. analytical computation of the gradient of the dual function, and it seems that:

  • For REPS, computing the analytical gradient may add slight computational overhead but may also lead to better solutions (observed when minimizing the Rosenbrock function). Perhaps for more complex optimization problems the overhead of computing the gradient is less significant, so overall computation time is reduced?
  • For C-REPS, computing the analytical gradient may lead to much faster optimization (check this), but I found no evidence of it leading to better (or worse) solutions.

Regarding stability, I have had no problems during my testing. For all the exponentials I've subtracted the maximum value, so the implementation should be relatively safe.
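Concretely, that is the usual log-sum-exp stabilization; a standalone illustration (not code from the linked implementation):

```python
import numpy as np
from scipy.special import logsumexp

R = np.array([1000.0, 1001.0, 1002.0])  # returns large enough to overflow exp()
eta = 1.0

# np.log(np.mean(np.exp(R / eta))) would overflow to inf here.
# Subtracting the maximum first keeps every exponent <= 0:
R_max = R.max()
stable = R_max / eta + np.log(np.mean(np.exp((R - R_max) / eta)))

# scipy's logsumexp applies the same shift internally:
assert np.isclose(stable, logsumexp(R / eta) - np.log(len(R)))
```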

A clean implementation without any of the testing functions can be found here. Feel free to validate my results and let me know if you want me to open a pull request.

@AlexanderFabisch
Contributor

For REPS, computing the analytical gradient may add slight computational overhead but may also lead to better solutions (observed when minimizing the Rosenbrock function). Perhaps for more complex optimization problems the overhead of computing the gradient is less significant, so overall computation time is reduced?

That is surprising to me. I would expect the analytical version to be faster, because the optimizer otherwise has to call the objective function multiple times to estimate the gradient. I also can't reproduce this:

$ python reps_rosen_time.py 
...
Numerical gradient  completed in average time of 2.03754606247 seconds
...
Analytical gradient  completed in average time of 1.91270718575 seconds
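For context, the expected saving comes from objective evaluations: without jac, scipy's optimizers estimate the gradient by finite differences, which costs extra dual evaluations per iteration. A quick toy illustration of the call counts (stand-in objective, not the actual dual):

```python
import numpy as np
from scipy.optimize import minimize

calls = {"numerical": 0, "analytical": 0}

def f(x):
    calls["numerical"] += 1
    return (x[0] - 3.0) ** 2  # stand-in for the dual function

def f_with_grad(x):
    calls["analytical"] += 1
    return (x[0] - 3.0) ** 2, np.array([2.0 * (x[0] - 3.0)])

minimize(f, np.array([0.0]), method="L-BFGS-B")
minimize(f_with_grad, np.array([0.0]), jac=True, method="L-BFGS-B")
print(calls)  # without jac, roughly twice as many calls in 1-D
```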

For C-REPS, computing the analytical gradient may lead to much faster optimization (check this), but I found no evidence of it leading to better (or worse) solutions.

Looks great.

Regarding stability, I have had no problems during my testing. For all the exponentials I've subtracted the maximum value, so the implementation should be relatively safe.

Maybe @jmetzen remembers why we didn't implement the analytical solution?

A clean implementation without any of the testing functions can be found here. Feel free to validate my results and let me know if you want me to open a pull request.

A pull request would be awesome. Maybe we can also add some test scripts to the benchmark folder.

@RicardoDominguez
Contributor Author

That is surprising to me. I would expect the analytical version to be faster, because the optimizer otherwise has to call the objective function multiple times to estimate the gradient. I also can't reproduce this:

That is also what I expected, so I am a bit puzzled as to why that is sometimes not the case.

Maybe we can also add some test scripts to the benchmark folder.

Do you mean benchmarks for numerical vs analytical or for REPS / C-REPS in general?

@AlexanderFabisch
Contributor

Maybe we can also add some test scripts to the benchmark folder.

Do you mean benchmarks for numerical vs analytical or for REPS / C-REPS in general?

You can add the scripts you implemented to compare the numerical and analytical gradients, so that they are not lost.

@AlexanderFabisch
Contributor

We now use the analytical version of both gradients in master.
