Hi Dr. Nicholson, thanks for the new upgrade.

Regarding my code below:

Model1 <- constructModel(as.matrix(z), 1, "Basic", gran = c(20, 10), cv = "Rolling")
ENET <- cv.BigVAR(Model1)

Before the March 2022 update, the same code worked well for both the Elastic Net and Lasso methods, and both yielded beta matrices with an appropriate level of sparsity. After the update, however, running the same code produces different results: the Elastic Net estimate still works fine, but the Lasso estimate tends to over-penalize, setting 99.999% of the beta coefficients to 0.

This over-penalization also seems to affect the two methods newly added in this upgrade, MCP and SCAD; both likewise produce extremely sparse beta matrices.

I suspect some of the code in cv.BigVAR may have changed in the recent upgrade, particularly for methods such as Lasso. I emailed you earlier with my data attached, in case you want to check it yourself.

Thanks a lot for your work!
I slightly modified the construction of the penalty grid, so you may need to adjust the granularity parameter to achieve a comparable level of sparsity.
I don't think I received your email. Could you send your data to wbn8@cornell.edu? I'll take a look at the specific issue.
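For what it's worth, here is a minimal sketch of the kind of adjustment I mean, using a synthetic matrix in place of your data z. The first element of gran controls the depth of the penalty grid (roughly, the ratio of the largest to the smallest penalty) and the second the number of grid points, so a much larger first value admits far smaller penalties; the betaPred slot used for the sparsity check is an assumption about where the fitted coefficients land on the results object:

```r
library(BigVAR)

# Placeholder for your data z: 200 observations of a 3-variable series.
set.seed(1)
z <- matrix(rnorm(200 * 3), 200, 3)

# Same model setup as before, but with a much deeper penalty grid:
# gran[1] = depth of the grid, gran[2] = number of grid points.
Model2 <- constructModel(as.matrix(z), p = 1, struct = "Basic",
                         gran = c(5000, 50), cv = "Rolling")
Lasso2 <- cv.BigVAR(Model2)

# Rough sparsity check: fraction of coefficients estimated as exactly 0
# (assuming betaPred holds the final coefficient matrix).
mean(Lasso2@betaPred == 0)
```

If the deeper grid still pins everything to zero, that would point to a genuine regression rather than a grid-calibration issue.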
Thanks for the timely response, Will. Yes, I tried several granularity settings, from (20, 10) and (50, 10) up to (150, 10). They make the results marginally better, but the estimates are still extremely sparse.
I just emailed you again, but there's no hurry. I really appreciate you looking at it whenever you have time.