Performance benchmarking (fit) #14

Open · 20 tasks
tlienart opened this issue Aug 30, 2019 · 0 comments

tlienart (Collaborator) commented Aug 30, 2019

Before starting this, we need a way to systematically:

  • trace the number of function calls, gradient calls, and Hessian calls (see the call-counting sketch below);
  • either stop with a universal criterion, OR show a plot where the objective value decreases and eventually hits the same final value as that of the reference package.
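
A minimal sketch of that kind of instrumentation, in Python for concreteness (the toy ridge-style objective, the `CallCounter` wrapper, and the L-BFGS call are illustrative assumptions, not part of any package mentioned here):

```python
import numpy as np
from scipy.optimize import minimize

class CallCounter:
    """Wrap a callable and count how many times it gets evaluated (hypothetical helper)."""
    def __init__(self, fun):
        self.fun = fun
        self.calls = 0

    def __call__(self, *args, **kwargs):
        self.calls += 1
        return self.fun(*args, **kwargs)

# Toy ridge-style problem: 0.5*||X b - y||^2 + 0.5*lam*||b||^2
rng = np.random.default_rng(0)
X, y, lam = rng.normal(size=(200, 5)), rng.normal(size=200), 1.0

def objective(b):
    r = X @ b - y
    return 0.5 * (r @ r) + 0.5 * lam * (b @ b)

def gradient(b):
    return X.T @ (X @ b - y) + lam * b

f, g = CallCounter(objective), CallCounter(gradient)
res = minimize(f, np.zeros(5), jac=g, method="L-BFGS-B")
print(f"objective calls: {f.calls}, gradient calls: {g.calls}, final objective: {res.fun:.6f}")
```

Recording the final objective per run (or the per-iteration trace via a solver callback) is what would feed the "same final value as the reference package" plot.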

Against scikit-learn

expect on par or better; a sketch of how the scikit-learn baselines could be timed follows the task list below

  • ridge (in the large-scale case, should see improvements from using CG)
    • analytical (should see no real difference)
    • CG
  • lasso
    • FISTA
    • ISTA
  • elnet
  • logistic (no penalty or l2 penalty)
  • logistic (elnet penalty)
    • FISTA
    • ISTA
  • multinomial (no penalty or l2 penalty)
  • multinomial (elnet penalty)
    • FISTA
    • ISTA
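
A hedged sketch of the scikit-learn side of the comparison (problem sizes, regularisation strengths, and the `time_fit` helper are arbitrary illustration choices; scikit-learn uses its own solvers, e.g. coordinate descent for lasso/elnet, so the FISTA/ISTA items above presumably refer to this package's solvers):

```python
import time
import numpy as np
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import ElasticNet, Lasso, LogisticRegression, Ridge

def time_fit(model, X, y, n_rep=3):
    """Best-of-n_rep wall-clock time (seconds) for model.fit(X, y)."""
    best = float("inf")
    for _ in range(n_rep):
        t0 = time.perf_counter()
        model.fit(X, y)
        best = min(best, time.perf_counter() - t0)
    return best

Xr, yr = make_regression(n_samples=20_000, n_features=200, random_state=0)
Xc, yc = make_classification(n_samples=20_000, n_features=200, random_state=0)

baselines = {
    "ridge (analytical)": (Ridge(alpha=1.0, solver="cholesky"), Xr, yr),
    "ridge (CG)":         (Ridge(alpha=1.0, solver="sparse_cg"), Xr, yr),
    "lasso":              (Lasso(alpha=0.1), Xr, yr),
    "elnet":              (ElasticNet(alpha=0.1, l1_ratio=0.5), Xr, yr),
    "logistic (l2)":      (LogisticRegression(C=1.0, max_iter=200), Xc, yc),
    "logistic (elnet)":   (LogisticRegression(penalty="elasticnet", solver="saga",
                                              l1_ratio=0.5, C=1.0, max_iter=200), Xc, yc),
}

for name, (model, X, y) in baselines.items():
    print(f"{name:20s} {time_fit(model, X, y):.3f}s")
```

The multinomial cases would follow the same pattern with a multiclass target passed to LogisticRegression.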

Against quantreg

expect a bit worse (quantreg's core routines are compiled code); a Python stand-in sketch follows below

  • quantile regression
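
quantreg itself is an R package, so the actual timing would be done on the R side; purely as a Python stand-in illustrating the shape of the check, statsmodels' `QuantReg` fits the same median-regression problem (data sizes and the quantile level are arbitrary choices):

```python
import time
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 10))
y = X @ rng.normal(size=10) + rng.standard_t(df=3, size=10_000)

t0 = time.perf_counter()
# Median regression (q = 0.5), which is also the default quantile of quantreg's rq().
result = sm.QuantReg(y, sm.add_constant(X)).fit(q=0.5)
elapsed = time.perf_counter() - t0

print(f"fit time: {elapsed:.3f}s")
print(result.params[:3])  # first few coefficients, to compare against the reference fit
```
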
tlienart changed the title from "Benchmarking" to "Performance benchmarking" on Aug 30, 2019
tlienart changed the title from "Performance benchmarking" to "Performance benchmarking (fit)" on Sep 13, 2019