Loss function #25
Conversation
I think here we may need to distinguish between the function used for selecting the best hyperparameter (the loss function we want here) and the actual loss function of our linear regression solver, which would be mean squared error. So I don't think we should stop `predict` from returning both MSE and Pearson's r, since they are the standard metrics for evaluating a linear regression model.
OLS regression minimizes MSE, but ridge regression does not, unless you use MSE as the loss function, which I don't think is a good default for EEG data because of noisy channels. You can evaluate a linear regression model using many different metrics (e.g. MSE, r, variance explained), and you can derive any metric you want from the prediction returned by the `TRF.predict` method.
Replaced the hard-coded estimation of r and MSE with a loss function. The loss function defaults to `one_minus_correlation` but can be set to anything that takes `y` and `y_pred` as input and returns a scalar whose value can be minimized to optimize regularization. Also made some minor changes to improve code readability and consistency.
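For reference, a minimal sketch of the kind of callable the new loss argument expects. The name `one_minus_correlation` comes from this PR; the exact array shapes and the `mean_squared_error` alternative are assumptions for illustration, not the library's actual implementation:

```python
import numpy as np

def one_minus_correlation(y, y_pred):
    """Loss sketch: 1 - Pearson's r, averaged over channels.

    Assumes y and y_pred are (samples,) or (samples, channels) arrays.
    Lower is better, so minimizing this maximizes correlation.
    """
    y = np.asarray(y).reshape(len(y), -1)
    y_pred = np.asarray(y_pred).reshape(len(y_pred), -1)
    # Pearson's r per channel, then averaged.
    r = np.array([np.corrcoef(yc, pc)[0, 1]
                  for yc, pc in zip(y.T, y_pred.T)])
    return 1 - r.mean()

def mean_squared_error(y, y_pred):
    """Alternative loss with the same (y, y_pred) -> scalar signature."""
    return float(np.mean((np.asarray(y) - np.asarray(y_pred)) ** 2))
```

Because any `(y, y_pred) -> scalar` callable fits this signature, a user could pass `mean_squared_error` (or a custom robust metric) instead of the correlation-based default when tuning regularization.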