
adding jitted scalar maximization routine, first build #416

Merged

merged 12 commits into master from add_optim_subpackage on Jun 29, 2018
@jstac (Contributor) commented Jun 6, 2018

Hi @mmcky, in this PR I'm adding an optimize subpackage and a first function within that subpackage called maximize_scalar.

There's an example of usage within the documentation for that function.

The code and algorithm come from SciPy's fminbound. I've stripped out some tests and jitted the function. We require numba version 0.38 and above because the routine takes a jitted function as its first argument (the scalar function to be maximized).

The intention is to add further scalar maximization routines and at least one multivariate routine, as well as perhaps root finders. Essentially, the structure will mimic scipy.optimize and the feature set will no doubt be smaller, but every function should be jitted and accept jitted functions to act upon.

The purpose of functions in this subpackage is not necessarily to maximize speed within the optimization routine itself, but rather to ensure that we can jit compile outer functions like Bellman operators, which include a maximization routine as one step of the algorithm.
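For instance, a minimal sketch of the intended pattern, written for concreteness against the API this thread converges on below (the brent_max name, the (xf, fval, info) return order, and an args keyword for passing extra parameters to the objective are all settled later in the discussion):

import numpy as np
from numba import njit
from quantecon.optimize import brent_max  # name adopted later in this thread

@njit
def objective(c, y, beta):
    # Toy concave objective; the analytic maximizer is c = y / (1 + beta).
    return np.log(c) + beta * np.log(y - c)

@njit
def bellman_style_loop(grid, beta):
    # The point of the subpackage: this outer loop, including the inner
    # maximization at each grid point, compiles as a single jitted unit.
    v = np.empty_like(grid)
    for i in range(len(grid)):
        y = grid[i]
        x_star, v_max, info = brent_max(objective, 1e-10, y - 1e-10, args=(y, beta))
        v[i] = v_max
    return v

print(bellman_style_loop(np.linspace(1.0, 2.0, 5), 0.96))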

Thanks to @natashawatkins for helping me pull this together.

@sglyon @albop @cc7768 @chrishyland FYI

@coveralls commented Jun 6, 2018

Coverage Status

Coverage decreased (-0.07%) to 95.107% when pulling 7f24600 on add_optim_subpackage into 649cc55 on master.

version = '0.3.8'

@mmcky (Contributor) commented Jun 6, 2018

This will be set in setup.py.
I will update this tomorrow morning.

@mmcky (Contributor) commented Jun 6, 2018

Thanks @jstac, this looks great - I will review more thoroughly tomorrow morning. We can open issues to track which other optimize routines we wish to add and tag them for contributions. This will be a source of good projects.

@oyamad (Member) commented Jun 6, 2018

The minimum version number in 'numba>=0.36.2' should be increased to 0.38.
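(Schematically, the requirement line in setup.py would then become:)

install_requires = [
    'numba>=0.38',   # sketch; other requirements omitted
]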

@mmcky (Contributor) commented Jun 11, 2018

It looks like numba=0.38 is the default version in the latest Anaconda 5.2 distribution: https://docs.anaconda.com/anaconda/packages/py3.6_linux-64. Continuum have changed the way packages are reported; from what I can tell, 0.38 is now the default (rather than just an available package in the Anaconda repository).

@albop (Contributor) commented Jun 11, 2018

This is very interesting!
If you give me a bit of time (end of this week), I'd like to look into it a bit.
In particular, I'm curious about how this function behaves in a loop, and whether the way function arguments are passed makes any difference.

@jstac (Contributor) commented Jun 11, 2018

@mmcky Great news. (BTW, lots of people are having problems with the library because they're not at the latest version of Numba. I guess there's only so much we can do on this front...)

@oyamad Thanks for your feedback.

@albop It would be great to have your input. @chrishyland and his friend will be working on these and subsequent additions close to full time in July, so they can build on your results.

@chrishyland Please loop your friend Paul into this discussion.

mmcky requested a review from @albop on Jun 14, 2018

@jstac (Contributor) commented Jun 24, 2018

@albop Did you still want to look at this before we merge?

@albop (Contributor) commented Jun 24, 2018

This comment has been minimized.

@thomassargent30 (Contributor) commented Jun 24, 2018

This comment has been minimized.

@albop (Contributor) commented Jun 24, 2018

@jstac: I've read the code carefully and did some timings. It all sounds good to me. In particular, the overhead of using this function vs the manual inclusion of its logic seems small or nonexistent in the examples I have tried.
I have only two remarks related to the API:

  • The method name bears no reference to the algorithm that is used (Brent, I guess). But then, if you want to include another algorithm, what would it be called?
  • Even in a simplified optimization function, it seems risky to me not to return the convergence information (number of iterations, tolerances, etc.). There is often some "gridpoint that doesn't converge" and it's useful to know which one and why. There would be at least two options:
    • return a jitted class with val, x0, iter, etc. as fields
    • return a tuple val, x0, infos, where infos would be a structure (or jitted class) with the information; see the sketch after this list. If the compiler optimizes across functions, one could even hope the last one is not computed if you do val, x0, _ = maximize_scalar(...)
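(A hedged sketch of that second option: Numba can construct and return namedtuples in nopython mode, so a method-specific info structure stays jit-friendly. The names below are hypothetical and not part of this PR.)

from collections import namedtuple
from numba import njit

# Hypothetical container for method-specific convergence details
BrentInfo = namedtuple('BrentInfo', ['status_flag', 'num_iter'])

@njit
def finish(converged, num_iter):
    # Stand-in for the tail of a jitted optimizer: pack convergence
    # details into an info structure instead of discarding them.
    status_flag = 0 if converged else 1
    return BrentInfo(status_flag, num_iter)

print(finish(True, 13))  # BrentInfo(status_flag=0, num_iter=13)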
@albop

optimize subpackage is a very good idea.

@jstac (Contributor) commented Jun 25, 2018

Thanks all for your input.

@oyamad @albop I've added status_flag as a return value. In particular, the function now returns a tuple with three elements, the last of which is status_flag. It takes the value 0 if the function executes normally and 1 if the maximum number of iterations is attained.

@albop It's a good point about the name. Perhaps we can leave it as is and later add a method keyword argument that defaults to brent, if necessary.

@jstac (Contributor) commented Jun 25, 2018

@thomassargent30 That's a good point. A few people had problems with this at the recent workshops.

@natashawatkins Would you mind adding a news item when this is merged that emphasizes the need to update to the latest version of Numba, preferably by obtaining the latest version of Anaconda?

@albop (Contributor) commented Jun 25, 2018

@jstac: status_flag is really useful. I would still like to have some more information about convergence, or even a trace. I would be perfectly happy if there was a way to return it, optionally, without too much performance cost (and I can do a PR).
The method keyword is a good solution user-wise. I wonder whether the cost of the string check would be significant in an intensive loop, but it's not clear to me.
In a nutshell, I'm happy with the PR as it is.

@jstac (Contributor) commented Jun 25, 2018

Thanks @albop.

Returning the number of function calls brings us up to parity with fminbound. I've implemented this via

fval, fx, info = maximize_scalar(f, a, b)

where info is a tuple containing status_flag and num_iter. I've modified the docstring accordingly.

@jstac (Contributor) commented Jun 25, 2018

I'm open to changing the name to, say, brent_max if you think it's better.

@albop (Contributor) commented Jun 25, 2018

Don't want to nitpick, but don't you want to return fval, fx, status, info like fminbound then? The advantage is that, for all the possible minimization routines, status always has the same simple type, while info can hold more information and be method-specific.
As for the method name, I like brent_max, but I don't feel too strongly about it. It's really up to you. If there is to be more than one method, then it would be natural to make the general method an alias to the specific one.

@oyamad (Member) commented Jun 25, 2018

  • Better to order the first two returns as xf, fval as in fminbound (instead of fval, xf)?
  • Regarding the Numba version issue, see #409.
@jstac (Contributor) commented Jun 25, 2018

I switched the order of xf and fval as requested by @oyamad and changed the function name to brent_max. The return value is otherwise unchanged, but I think this simplifies the function call and gives us enough flexibility going forward.

Unless there are other comments, I think this is ready to merge @mmcky.
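(A minimal sketch of the final call pattern, with a toy objective; the info layout (status_flag, num_iter) follows the earlier comment:)

from numba import njit
from quantecon.optimize import brent_max

@njit
def f(x):
    return -(x - 2.0) ** 2 + 10.0  # concave, with maximum at x = 2

xf, fval, info = brent_max(f, -5.0, 5.0)
status_flag, num_iter = info  # 0 = normal exit, 1 = maxiter reached

print(xf, fval, status_flag)  # approximately 2.0, 10.0, 0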

@oyamad (Member) commented Jun 26, 2018

The optimize subpackage should be registered in docs/qe_apidoc.py; refer to c7c9fdf for an example.

@mmcky (Contributor) commented Jun 26, 2018

@jstac I will add this to the docs.


@mmcky (Contributor) commented Jun 26, 2018

Thanks everyone for this PR. I will merge tonight if there are no other comments.

@jstac (Contributor) commented Jun 29, 2018

Hi @mmcky, thanks for updating the docs. Could you please merge this now?

@jstac (Contributor) commented Jun 29, 2018

Hi @mmcky, I suspect that you're otherwise occupied with more important things, so just this once I'll go ahead and merge. I hope that's OK.

jstac merged commit d6130c1 into master on Jun 29, 2018

1 check passed: continuous-integration/travis-ci/pr - The Travis CI build passed

jstac deleted the add_optim_subpackage branch on Jun 29, 2018

chrishyland referenced this pull request on Jul 4, 2018: ENH: Root finding #417 (merged)
