Logistic Regression example #3

Open
wants to merge 7 commits into base: master

Conversation

iewaij commented Mar 27, 2018

A very primitive draft of a logistic regression example using Optim.jl. Future improvements include:

Any suggestions are welcome!
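For readers of this thread, here is a minimal sketch of what such an example might look like; the data is synthetic and all names are illustrative, not the notebook's:

```julia
using Optim

# Logistic regression by direct minimisation of the negative log-likelihood.
# X is an n×p design matrix (first column of ones for the intercept),
# y is a vector of 0/1 labels. Synthetic data, for illustration only.
sigmoid(z) = 1 / (1 + exp(-z))

function nll(β, X, y)
    p = sigmoid.(X * β)
    -sum(y .* log.(p) .+ (1 .- y) .* log.(1 .- p))
end

# Analytical gradient: ∇ = Xᵀ(p − y).
nll_grad!(g, β, X, y) = (g .= X' * (sigmoid.(X * β) .- y))

X  = [ones(100) randn(100, 2)]
y  = rand(0:1, 100)
β0 = zeros(size(X, 2))

res = optimize(β -> nll(β, X, y), (g, β) -> nll_grad!(g, β, X, y), β0, BFGS())
Optim.minimizer(res)
```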

pkofod commented May 25, 2018

Cool!

I think you should maybe focus on comparisons within Optim before considering other routes (mini-batch GD or IRLS). Include run times as well, to highlight the differences between methods that need the Hessian and those that do not (and even compare against heuristics).

Is this not a real data set btw? (re your initial point)
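A comparison along these lines might look like the following rough sketch, reusing `nll` and `nll_grad!` from the snippet above (the `@elapsed` timings are only indicative; BenchmarkTools.jl would give more reliable numbers):

```julia
# Hessian of the logistic NLL: H = Xᵀ diag(p .* (1 .- p)) X.
function nll_hess!(H, β, X, y)
    p = sigmoid.(X * β)
    H .= X' * (X .* (p .* (1 .- p)))
end

f  = β -> nll(β, X, y)
g! = (g, β) -> nll_grad!(g, β, X, y)
h! = (H, β) -> nll_hess!(H, β, X, y)

# Derivative-free, gradient-based, and Hessian-based methods side by side.
for (name, run) in [
        ("NelderMead",      () -> optimize(f, β0, NelderMead())),
        ("GradientDescent", () -> optimize(f, g!, β0, GradientDescent())),
        ("BFGS",            () -> optimize(f, g!, β0, BFGS())),
        ("Newton",          () -> optimize(f, g!, h!, β0, Newton()))]
    t = @elapsed res = run()
    println(name, ": minimum ", Optim.minimum(res), " after ",
            Optim.iterations(res), " iterations (", round(t; digits = 4), " s)")
end
```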

iewaij commented May 25, 2018

The data is from the book ISLR; I'm not sure if it's real, but it seems the optimizer was stuck at a local minimum. If the initial value is reset to [-5.0, -5.0, -5.0], the parameters become -4.99982, -4.5396, 0.00012985. I should have passed random initial values.

Okay, I will add benchmarks and traces for the different methods; that looks quite interesting and can illustrate how the different methods work.
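A hedged sketch of that multistart idea, reusing `f` and `g!` from the earlier snippets (the number of restarts and the scale of the random starts are arbitrary choices):

```julia
using Random

# Restart BFGS from several random initial points and keep the best fit.
# `restarts` and `scale` are arbitrary illustrative choices.
function multistart(f, g!, p; restarts = 10, scale = 5.0)
    best = nothing
    for _ in 1:restarts
        res = optimize(f, g!, scale .* randn(p), BFGS())
        if best === nothing || Optim.minimum(res) < Optim.minimum(best)
            best = res
        end
    end
    best
end

Random.seed!(1)                      # reproducible starts
best = multistart(f, g!, size(X, 2))
Optim.minimizer(best)
```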

iewaij commented Jun 4, 2018

After selecting a subgroup (non-students) of the data, the local-minimum problem is gone and the results seem convincing. I will put the local-minimum problem in the method comparison notebook, since such issues are common for methods that require gradients or Hessians.
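For concreteness, a sketch of that subgroup selection, assuming the ISLR Default data sits in a DataFrame `default` with `student`, `balance`, `income`, and `default` columns (the column names are assumptions about the notebook, not confirmed by it):

```julia
using DataFrames

# Keep only non-students before refitting; the local-minimum issue reported
# above disappears on this subgroup. Column names are assumed, not confirmed.
nonstudents = filter(row -> row.student == "No", default)

X = [ones(nrow(nonstudents)) nonstudents.balance nonstudents.income]
y = Float64.(nonstudents.default .== "Yes")

res = optimize(β -> nll(β, X, y), (g, β) -> nll_grad!(g, β, X, y),
               zeros(size(X, 2)), BFGS())
Optim.minimizer(res)
```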
