
additional slide for learning rate #42

Open
JuliaHerbinger opened this issue Apr 28, 2021 · 1 comment
Comments

@JuliaHerbinger

Add a slide at the end of the "gradient boosting concept" chunk about line search vs. a constant learning rate: explain how line search works and why we use a constant learning rate.

@mb706 mb706 transferred this issue from another repository Nov 30, 2022
@mb706 mb706 transferred this issue from slds-lmu/dummyrepo Dec 1, 2022
@mb706 mb706 transferred this issue from slds-lmu/i2ml Dec 1, 2022
@lisa-wm
Contributor

lisa-wm commented Jan 20, 2023

Added from issue 896:
We always treat the learning rate as individual per iteration, calculated by line search. But I think most (maybe all) implementations use a constant learning rate. It may be better to introduce everything with a constant learning rate and mention at the end that the learning rate can also be determined by line search.
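To make the contrast concrete, here is a minimal sketch (all names and data are illustrative, not from the lecture material) of gradient boosting with stumps under squared loss. For squared loss the line-search step has a closed form, `gamma* = <r, h> / <h, h>`, so both variants fit in a few lines:

```python
import numpy as np

def fit_stump(x, r):
    """Fit a depth-1 regression tree (stump) to residuals r for 1-D inputs x."""
    best_sse, best = np.inf, None
    for t in np.unique(x)[:-1]:          # candidate split thresholds
        left = x <= t
        cl, cr = r[left].mean(), r[~left].mean()
        sse = ((r[left] - cl) ** 2).sum() + ((r[~left] - cr) ** 2).sum()
        if sse < best_sse:
            best_sse, best = sse, (t, cl, cr)
    t, cl, cr = best
    return lambda z: np.where(z <= t, cl, cr)

def boost(x, y, M=50, lr=None):
    """Gradient boosting with squared loss.

    lr=None  -> step size found by line search in each iteration;
    lr=float -> constant learning rate (what most implementations do).
    """
    f = np.full_like(y, y.mean())        # initial constant model
    for _ in range(M):
        r = y - f                        # negative gradient of squared loss
        h = fit_stump(x, r)(x)           # weak learner fit to pseudo-residuals
        if lr is None:
            # line search: argmin_gamma sum((r - gamma*h)^2) has the
            # closed-form solution gamma* = <r, h> / <h, h>
            gamma = (r @ h) / (h @ h)
        else:
            gamma = lr
        f = f + gamma * h
    return f

rng = np.random.default_rng(0)
x = rng.uniform(0, 6, 200)
y = np.sin(x) + rng.normal(0, 0.2, 200)

mse_ls = np.mean((y - boost(x, y)) ** 2)          # line search
mse_const = np.mean((y - boost(x, y, lr=0.1)) ** 2)  # constant lr
```

One observation this sketch makes visible: with region-mean trees and squared loss, the line-search step drives training loss down much faster per iteration, while the constant learning rate deliberately shrinks each step, which is why it is commonly preferred as regularization.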

@mb706 mb706 transferred this issue from slds-lmu/lecture_i2ml Jan 28, 2023