MRG: BEM model and solution in Python #2193
Conversation
triple = np.dot(cross, y3)

# XXX refactor with _get_solids?
l1 = np.sqrt((y1 * y1).sum())
(y1 * y1).sum() -> np.dot(y1, y1)
fix below too.
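For readers following along: the two forms are equivalent for 1-D arrays, but `np.dot` avoids allocating the intermediate `y1 * y1` array. A minimal sketch (the vector values are made up for illustration):

```python
import numpy as np

# Hypothetical 1-D vector standing in for y1 in the diff above
y1 = np.array([3.0, 4.0, 12.0])

# Elementwise multiply + sum, as in the diff
l1_sum = np.sqrt((y1 * y1).sum())

# Suggested replacement: dot product of the vector with itself
l1_dot = np.sqrt(np.dot(y1, y1))

print(l1_sum, l1_dot)  # both 13.0
```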
Also, you can use functions from the math module for scalar floats: sqrt, abs, ...
Yeah, the reason I have it in this form is that I'm almost certainly going to vectorize the operation, in which case the sum form will be more convenient to translate to higher dimensions.
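To illustrate the point about the sum form generalizing: the same expression works unchanged on a stack of vectors by adding an axis argument (the array shape here is hypothetical, not from the PR):

```python
import numpy as np

# One vector: (y * y).sum() gives a scalar squared norm
y_single = np.array([3.0, 4.0, 12.0])
len_single = np.sqrt((y_single * y_single).sum())

# A stack of five vectors: the same form with axis=1 gives one
# norm per row, with no per-vector Python loop
y_stack = np.random.rand(5, 3)
lens = np.sqrt((y_stack * y_stack).sum(axis=1))  # shape (5,)
```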
It seems to be working for me now, but it is horribly slow. Hopefully I can speed it up tomorrow.
You are awesome! I am so excited for this!
FYI I added a couple more files to the testing dataset. BEMs have now been created for ico-2 surfaces (320 triangles instead of 5120 or 1280), which is much faster. The code at this point seems to work for the homogeneous 1-layer case; I just need to track down a bug in the 3-layer case. Speed isn't too much worse than C for small shells (<= 1280) and is actually faster for the standard size (5120) because, while calculating the matrix coefficients is slower in Python, the inversion is much faster:
Just need to track down where that bug is, which I probably won't get to until next week.
Okay, 1- and 3-layer model and solution computation works now. Ready for review/merge from my end.
Ping @agramfort @dengemann this is ready for review
@Eric89GXL amazing! How fast is it compared to the original code?
See comment above, faster for the normal use case (3-layer 5120)
There might be additional ways of optimizing it, too, if there is interest
I'm already very impressed. Eric, what do we need here, any testing desired?
Hopefully the unit tests are sufficient to make us confident in using it. But sure, if you want to test it out and make comments, it's welcome.
I will trust you if you're confident ;)
#


def _calc_beta(rk, rk1):
short comment
That would require (re-)understanding what it does first :) But seriously, I'll add one. It also reminds me that I need to check for duplication of functionality with the _compute_forward module; I think these are different, but I should check.
Check out the tests, hopefully those will make us confident :) If they don't, then they can be expanded.
* This is the general multilayer case

"""
from scipy import linalg
why import here?
we nest certain scipy imports now to save on import speed, see mne/tests/test_import_nesting.py
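For illustration, the nested-import pattern looks like this (the function name `_invert_bem_coeff` is hypothetical, not the actual mne-python function; only the placement of the import is the point):

```python
import numpy as np


def _invert_bem_coeff(coeff):
    """Invert a BEM coefficient matrix (hypothetical helper).

    scipy is imported inside the function, not at module level, so
    that importing the package stays fast; the import cost is paid
    only on the first call.
    """
    from scipy import linalg  # nested on purpose to keep import light
    return linalg.inv(coeff)


# The nested import runs transparently on first use
identity = _invert_bem_coeff(np.eye(3))
```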
right! thanks
@dengemann comments addressed. FYI you asked about speed, and that made me optimize a bit more:
Great!!! I will recompute all my BEMs to enjoy the speedy drive :)
v2 = tri_rr[np.newaxis, 1, :] - fros
v3 = tri_rr[np.newaxis, 2, :] - fros
triples = _fast_cross_nd_sum(v1, v2, v3)
l1 = np.sqrt(np.sum(v1 * v1, axis=1))
is it faster than np.linalg.norm( ..., axis=1) ?
Yes, sadly (and the axis argument is only available on NumPy 1.9+); I tested both. I think it depends on your application. Probably for larger matrices norm(..., axis) would be faster; we'll have to decide on a case-by-case basis.
Please update what's new, rebase, and +1 for merge! Thanks, awesome!
Rebased and updated, I'll merge once Travis comes back happy
Sounds good
Thanks for the reviews @dengemann @agramfort
Super cool, Eric -- amazing!
Closes #2190.
I translated all the code but haven't tested it at all. In fact I know it will not work. But the current plan for usage is something like this: