
fix: typos in regression chapter #47

Closed
wants to merge 4 commits into from

Conversation

kjappelbaum
Contributor

@kjappelbaum kjappelbaum commented Sep 4, 2021

  • Formally, standardization should only be done on the training data (discussed nicely in Elements of Statistical Learning). For this reason, I would maybe do the split before the standardization.
  • I'd perhaps be a bit less general with "L2 gives a better model and L1 gives a more interpretable result by zeroing features", as L1 also needs some tuning of lambda to actually drive coefficients to zero. Do you have a reference showing that L2 gives better performance than L1? My intuition would have been "it depends".
  • An awesome resource for a deeper dive into model selection is https://arxiv.org/abs/1811.12808
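To illustrate the first two points, here is a minimal sketch (my own example, assuming a scikit-learn workflow, not the chapter's actual code): the scaler is fit on the training split only and its statistics are reused for the test split, and the number of coefficients that L1 zeroes out depends on the regularization strength alpha.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Lasso

# Synthetic data: only features 0 and 3 actually matter
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.5, 0.0, 0.0, 2.0, 0.0]) + rng.normal(scale=0.1, size=200)

# Split first, then standardize using training statistics only
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)   # mean/std estimated on training data only
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)      # reuse the training statistics; no leakage

# L1 only zeroes coefficients once alpha (lambda) is large enough
for alpha in (1e-4, 1e-1):
    lasso = Lasso(alpha=alpha).fit(X_train_s, y_train)
    print(f"alpha={alpha}: {np.sum(lasso.coef_ == 0)} coefficients zeroed")
```

With a very small alpha, Lasso typically keeps all features; only at larger alpha does it start zeroing the irrelevant ones, which is why "L1 gives sparsity" needs the caveat about tuning lambda.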

@kjappelbaum kjappelbaum mentioned this pull request Sep 21, 2021