[MRG] Improve the VAR module with: i) regression tests against stats model, ii) selecting order and iii) improved lags handling #46
Conversation
To handle lags of the VAR model and easily form the companion matrix of the VAR matrices, we will need to see if this issue can be resolved in scipy: scipy/scipy#13124. This would form the multivariate block-companion matrix that can then be leveraged to check the stability of the VAR model. This would be a useful attribute to expose to users in …
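For reference, the stability check this would enable can be sketched in a few lines of NumPy. This is a hedged illustration with made-up helper names (`companion_matrix`, `is_stable`), not the mne-connectivity API: a VAR(p) model is stable exactly when every eigenvalue of its block-companion matrix lies strictly inside the unit circle.

```python
# Illustrative sketch (helper names are hypothetical, not the mne-connectivity API):
# build the block-companion matrix of a VAR(p) model and check stability by
# verifying that all eigenvalues lie strictly inside the unit circle.
import numpy as np

def companion_matrix(coeffs):
    """Stack VAR coefficient matrices A_1..A_p into companion form.

    coeffs : ndarray, shape (p, n, n)
    Returns the (n * p, n * p) block-companion matrix [[A_1 .. A_p], [I, 0]].
    """
    p, n, _ = coeffs.shape
    top = np.hstack(coeffs)                           # (n, n * p) block row
    shift = np.eye(n * (p - 1))                       # identity "shift" blocks
    bottom = np.hstack([shift, np.zeros((n * (p - 1), n))])
    return np.vstack([top, bottom])

def is_stable(coeffs):
    """True if the spectral radius of the companion matrix is < 1."""
    return np.max(np.abs(np.linalg.eigvals(companion_matrix(coeffs)))) < 1

# a VAR(2) on 2 channels with small coefficients should come out stable
A = np.array([[[0.2, 0.1], [0.0, 0.3]],
              [[0.1, 0.0], [0.05, 0.1]]])
print(companion_matrix(A).shape)
print(is_stable(A))
```

The same eigenvalue check is what statsmodels exposes as `VARResults.is_stable`, so exposing the companion matrix as an attribute would give users both the matrix and the stability diagnostic.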
I'm going to stop where I am so far, because I'm noticing a lot of uncovered lines. Is codecov wrong here? If not, then I think there is a lot of code that should either be cut out, or tested properly if possible
Ah, sorry about that. I was just hoping to get your thoughts on the current implementation ported over from …
the code you copied from statsmodels should be cleaned up to follow the rigid MNE code style requirements.
thx !
Co-authored-by: Alexandre Gramfort <alexandre.gramfort@m4x.org>
Okay, I removed a bunch of the unnecessary code that I took from statsmodels and added a few more tests. Checking CI now.
Co-authored-by: Eric Larson <larson.eric.d@gmail.com>
can you give me evidence that it works? did you test on simulations? do you see a speed or memory gain?
Current memory profiler output:
With a for loop over samples (TBD): …
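As a side note on how such numbers can be gathered: peak Python-side allocation can also be measured with the stdlib `tracemalloc` module. This is a generic sketch, not the profiler actually used in this PR; NumPy registers its buffer allocations with tracemalloc, so array memory is counted.

```python
# Generic sketch of measuring peak allocation with the stdlib tracemalloc
# module (not the memory profiler used in this PR, just an alternative).
import tracemalloc
import numpy as np

def peak_mib(func, *args, **kwargs):
    """Run func and return (result, peak MiB allocated during the call)."""
    tracemalloc.start()
    result = func(*args, **kwargs)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, peak / 2**20

# a (1000, 1000) float64 array is ~7.6 MiB, which shows up in the peak
_, peak = peak_mib(np.zeros, (1000, 1000))
print(f"peak: {peak:.1f} MiB")
```

This is handy for quick A/B comparisons of two fitting strategies without pulling in an external profiling dependency.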
sorry can you unpack this?
The total RAM usage is basically comparable with that of statsmodels (a bit lower overall, probably because we're not storing as many variables). In #28 (comment), though, you suggest using a for loop to prevent high RAM usage with a large number of n_chs/n_samples. I'm not sure how to best incorporate that suggestion with …
for loop over lags is ok I think.
The for loop over the samples is actually considerably slower and, for some reason, takes a lot more memory. This is done by running …
If that looks okay, then I can revert to the original method and clean up the code, and then keep the …
Running … When we do a for loop over samples and lags, the code is significantly slower, so I think it's best to stick with an algorithm similar to the one statsmodels uses; this is good to go now from my end. Once you approve this, @agramfort, I can remove the … As a result, I also removed scikit-learn from our dependencies.
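To illustrate why the vectorized route wins, here is a hypothetical sketch (not the actual mne-connectivity implementation) of building the full lagged design matrix in one pass and solving the VAR least-squares problem with a single `lstsq` call, instead of looping over samples:

```python
# Hypothetical sketch of vectorized VAR least squares: build the lagged
# design matrix Z with array slicing, then solve one OLS problem, rather
# than assembling Z row-by-row in a Python loop over samples.
import numpy as np

def lagged_design(X, lags):
    """X: (n_samples, n_channels) -> Z: (n_samples - lags, lags * n_channels).

    Row t of Z holds [X[t-1], X[t-2], ..., X[t-lags]] for predicting X[t].
    """
    n = X.shape[0]
    return np.hstack([X[lags - k - 1:n - k - 1] for k in range(lags)])

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 4))      # 1000 samples, 4 channels
Z = lagged_design(X, lags=3)            # (997, 12)
Y = X[3:]                               # targets aligned with Z
coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)   # (12, 4) VAR coefficients
print(Z.shape, coef.shape)
```

The short Python loop here runs only over `lags` (cheap), while each iteration is a bulk NumPy slice; this matches the earlier point that a for loop over lags is fine, whereas a loop over samples pushes the hot path back into the interpreter.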
…model, ii) selecting order and iii) improved lags handling (mne-tools#46)

* Improve VAR RAM usage
* Adding updated functionality from statsmodels
* Fix
* Fix whatsnew
* Clean rebase
* Clean rebase
* Clean up
* Adding utils function for block companion
* Adding companion matrix formulation
* Nest imports
* Fix docs
* Apply suggestions from code review
* Fix coverage
* Fix coverage
* Fix unit tests
* Update mne_connectivity/base.py
* Try again
* Fix unit tests coverage
* Fix
* Fix
* Adding benchmarks
* Fixing benchmarks
* Try again
* Add benchmarks to date
* Fix manifest
* Fix manifest
* Clean up sklearn

Co-authored-by: Alexandre Gramfort <alexandre.gramfort@m4x.org>
Co-authored-by: Eric Larson <larson.eric.d@gmail.com>
PR Description
Fixes: #45
Fixes: #28
TODO:
Consolidate way of handling lags in the dataset when storing as a Connectivity object

Merge checklist
Maintainer, please confirm the following before merging: