[ENH]: add InfomaxICA object and function #3379

Open
dengemann opened this Issue Jul 14, 2014 · 5 comments

Contributor

dengemann commented Jul 14, 2014

We recently added a pure NumPy implementation of the Infomax ICA algorithm to mne-python.

https://github.com/mne-tools/mne-python/blob/master/mne/preprocessing/infomax_.py
https://github.com/mne-tools/mne-python/blob/master/mne/preprocessing/tests/test_infomax.py

It should not be too difficult to include it in sklearn since we already have tests that are adapted from the FastICA tests.

If people are interested we could start discussing the API.
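To make the API discussion concrete, here is a minimal sketch of what the object could look like, following the FastICA conventions. Everything in it is an assumption open for discussion: it simply wraps the existing MNE solver, and whitening and component selection are deliberately left out (FastICA whitens internally; a real port would too).

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from mne.preprocessing import infomax  # solver that would be ported into sklearn


class InfomaxICA(BaseEstimator, TransformerMixin):
    """Hypothetical sketch of the proposed estimator; parameters up for discussion."""

    def __init__(self, max_iter=200, random_state=None):
        self.max_iter = max_iter
        self.random_state = random_state

    def fit(self, X, y=None):
        # infomax returns the unmixing matrix estimated from the data.
        self.components_ = infomax(X, max_iter=self.max_iter,
                                   random_state=self.random_state)
        return self

    def transform(self, X):
        # Project the data onto the estimated independent sources.
        return np.dot(X, self.components_.T)
```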

cc @GaelVaroquaux @ogrisel @agramfort

Owner

GaelVaroquaux commented Jul 14, 2014

Awesome. Go for it!

Contributor

KamalakerDadi commented Oct 19, 2015

After discussions with Denis, I am taking over this idea of porting the MNE-Python files to scikit-learn.

Links:

Main codebase:
https://github.com/mne-tools/mne-python/blob/master/mne/preprocessing/infomax_.py

Tests:
https://github.com/mne-tools/mne-python/blob/master/mne/preprocessing/tests/test_infomax.py

The work will go like this:

  • I will port the code files into the scikit-learn decomposition module and clean the code to make it scikit-learn compatible.
  • Add documentation and an example.

What will help me most is following the conventions used in fastica_.py to get familiar with the code base and guide the cleanup; a hypothetical usage sketch is below.
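As a hypothetical usage example, mirroring how sklearn.decomposition.FastICA is used today (InfomaxICA here refers to the sketch in the first comment above; in practice the data would be whitened first, as FastICA does internally):

```python
import numpy as np

# Hypothetical usage of the InfomaxICA sketch from the first comment.
rng = np.random.RandomState(42)
S = rng.laplace(size=(1000, 3))   # independent, non-Gaussian sources
A = rng.randn(3, 3)               # mixing matrix
X = np.dot(S, A.T)                # observed mixtures

ica = InfomaxICA(random_state=0).fit(X)
S_est = ica.transform(X)          # recovered sources, up to scale and order
```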

@dengemann any comments are welcome.

I am also open to looking into any easy issues.

Contributor

dengemann commented Oct 19, 2015

Great, it's a perfect moment: with @jmontoyam we refactored the Infomax code and fixed a couple of bugs over the last few months, so it should now give the same results as the canonical MATLAB implementation shipped with EEGLAB.
See also this PR:
mne-tools/mne-python#2460 (comment)

So it's indeed a great moment to port it to sklearn.

Contributor

KamalakerDadi commented Oct 20, 2015

@dengemann Do you think we need to change l_rate to a fixed default, e.g. l_rate=0.001?
Current design:

l_rate = 0.01 / math.log(n_features ** 2.0)

This divides by zero when n_features is 1, for instance, since log(1) == 0.
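A quick demonstration of the failure mode (plain Python, no MNE needed):

```python
import math

n_features = 1
try:
    l_rate = 0.01 / math.log(n_features ** 2.0)  # math.log(1.0) == 0.0
except ZeroDivisionError as err:
    print(err)  # float division by zero
```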

Owner

GaelVaroquaux commented Oct 20, 2015

l_rate = 0.01 / math.log(n_features ** 2.0)

This divides by zero when n_features is 1, for instance, since log(1) == 0.

Then we can use (n_features ** 2 + 1) (no "2.0", as we want to use a square function, and not an exponential function, in the code path).
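A minimal sketch of that suggested default (the helper name default_l_rate is made up here for illustration):

```python
import math

def default_l_rate(n_features):
    # n_features ** 2 + 1 is always >= 2, so the log is positive and the
    # division is safe; the integer exponent 2 also stays on the integer
    # squaring path instead of the floating-point pow path.
    return 0.01 / math.log(n_features ** 2 + 1)

print(default_l_rate(1))  # ~0.0144 instead of a ZeroDivisionError
```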
