
Power map #672

Closed — wants to merge 6 commits into base: master from sinkpoint:power_map
Conversation

@sinkpoint
Contributor

sinkpoint commented Jul 1, 2015

This is the code for the anisotropic power maps.

It's WIP since the result has not yet been verified with Flavio Dell'Acqua's original data.

Reference:
Dell’Acqua, F., Lacerda, L., Catani, M., Simmons, A., 2014. Anisotropic Power Maps: A diffusion contrast to reveal low anisotropy tissues from HARDI data, in: Proceedings of International Society for Magnetic Resonance in Medicine. Milan, Italy.

def single_L_ap(coeffs_mtx, L=2, power=2):
    n_start = 1
    n_L = 2*L+1
    for l in xrange(2,L,2):


@arokem

arokem Jul 2, 2015

Member

To make this Python3 compatible, you will need to add the line:

from dipy.utils.six.moves import xrange

At the top of this file.

""" Calculates anisotropic power map with a given SH coeffecient matrix
References
__________


@arokem

arokem Jul 2, 2015

Member

This should be a line of dashes ('-'), rather than a line of underscores ('_')

@arokem


Member

arokem commented Jul 2, 2015

Looks interesting. Since this is a fairly new method, would you add a 'Notes' section in the docstring that roughly explains what this metric is (how it's calculated)? I know that I can go read the ISMRM abstract, but I think that it would be helpful to give some sense of the calculation.

@sinkpoint


Contributor

sinkpoint commented Jul 8, 2015

I made the necessary changes. Please review. Thanks.

@@ -965,3 +965,82 @@ def sh_to_sf_matrix(sphere, sh_order, basis_type=None, return_inv=True, smooth=0
        return B.T, invB.T
    return B.T


def sh_to_ap(coeffs_mtx, normal_factor=0.00001):
    """ Calculates anisotropic power map with a given SH coeffecient matrix


@arokem

arokem Jul 9, 2015

Member

typo in "coefficient"

Notes
----------
Calculate AP image based on a IxJxKxC SH coeffecient matrix based on the equation:
$$ AP = \sum_{l=2,4,6,...}{\frac{1}{2l+1} \sum_{m=-l}{|a_{l,m}|^2}} $$


@arokem

arokem Jul 9, 2015

Member

I don't think you need two $$ here for the equation to be rendered properly. An alternative is to use the rst math directive:

.. math::
    AP = \sum_{l=2,4,6,...}{\frac{1}{2l+1} \sum_{m=-l}{|a_{l,m}|^2}} 
return ap_i
ap = np.zeros(dim)
#print 'AP shape:',ap.shape


@arokem

arokem Jul 9, 2015

Member

You can remove this debugging statement

while sum_n < n_coeffs:
n_L = 2*L+1
#print L, n_L, sum_n, sum_n+n_L


@arokem

arokem Jul 9, 2015

Member

Same here

6.09002332e-05, -2.35245883e-04, 1.33029383e-03,
9.53928674e-04, 4.53461286e-04, -8.50660289e-04]])
answers = [0.0, 3.4198238120739317, 5.2417375088492255]


@arokem

arokem Jul 9, 2015

Member

Where did you get these numbers from?


@arokem

arokem Jul 9, 2015

Member

Might be good to add a test using trivial data to produce a trivial result. This works as a regression test, but it's not a great test of correctness. Ideally, you want something that can be verified with a pen and paper, or through a simulation.


@sinkpoint

sinkpoint Jul 14, 2015

Contributor

The data is from 3 voxels around the corpus callosum on a sample image.
Ideally I should have some data from the AP method author to double check that it matches his dataset, but this is the best I have.


@arokem

arokem Jul 14, 2015

Member

That would be great, if you could get some confirmation that this matches results with other implementations.

The current test provides a test for regressions and across different platforms. Is there some way to also test it for correctness? That is, is there some simple case in which you know exactly what the result would be? For example, if you were testing a calculation of MD, you might simulate data from a tensor with eigenvalues 1.5, 0.5, 0.5, in which case you would expect the model and calculation to derive an MD of approximately 0.83. Is there an equivalent test you could do here?
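A minimal sketch of the kind of simulation being described, using dipy's single-tensor simulator (the sphere, b-value and S0 below are illustrative choices, not taken from this PR):

import numpy as np
from dipy.core.gradients import gradient_table
from dipy.data import get_sphere
from dipy.sims.voxel import single_tensor
from dipy.reconst.dti import TensorModel

# One b0 plus 362 directions at b=1000 s/mm^2 (illustrative acquisition scheme).
sphere = get_sphere('symmetric362')
bvecs = np.vstack([[0, 0, 0], sphere.vertices])
bvals = np.hstack([0, 1000 * np.ones(len(sphere.vertices))])
gtab = gradient_table(bvals, bvecs)

# Noiseless signal from a tensor with eigenvalues 1.5, 0.5, 0.5 (x 1e-3 mm^2/s).
evals = np.array([1.5, 0.5, 0.5]) * 1e-3
signal = single_tensor(gtab, S0=100, evals=evals)

# MD should come out at (1.5 + 0.5 + 0.5) / 3 = 0.83 (x 1e-3 mm^2/s).
md = TensorModel(gtab).fit(signal).md
np.testing.assert_almost_equal(md, evals.mean())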

@arokem


Member

arokem commented Jul 9, 2015

BTW - I am confused. What happened to the Python 3 error? It doesn't seem that you added an import of xrange from the six module.

@sinkpoint


Contributor

sinkpoint commented Jul 14, 2015

The xrange import is on line 1008

@arokem


Member

arokem commented Jul 14, 2015

Oh - I see it now. Please move it to the top of the file.

@arokem


Member

arokem commented Jul 29, 2015

Hi @sinkpoint - just checking back in about this. We would love to have this feature in, but I think that it still needs an additional small effort. Have you had a chance to take another look?

@arokem


Member

arokem commented Aug 17, 2015

@sinkpoint - any further thoughts here?

@sinkpoint


Contributor

sinkpoint commented Aug 17, 2015

I've just come back from vacation.
The method author has not yet gotten back to me about the data.
I will inquire again. Until then I don't have much further to add.

@arokem


Member

arokem commented Aug 18, 2015

Great! Thanks for looking into that.

def sh_to_ap(coeffs_mtx, normal_factor=0.00001):
    """ Calculates anisotropic power map with a given SH coefficient matrix


@mdesco

mdesco Aug 18, 2015

Contributor

Why not apm as the acronym, or a_power_map? Not a big fan of ap.


@Garyfallidis

Garyfallidis Aug 18, 2015

Member

I would write sh_to_anisotropic_power if ap is misleading. Or just say

def anisotropic_power(

without the sh. I think that is the best. We don't need to keep using the sh_to prefix for the AP too.

And yes, as @mdesco suggests, the coeffs_mtx name has to change. Here is where you can say sh.


@arokem

arokem Aug 21, 2015

Member

And just to chime in: this docstring is not in the right order:

Parameters => Returns => Notes

Please!

Notes
----------
Calculate AP image based on a IxJxKxC SH coeffecient matrix based on the equation:
.. math::


@mdesco

mdesco Aug 18, 2015

Contributor

... based on a XxYxZxC image of SH coefficients ...
The word matrix is confusing. You basically have a vector of SH coeffs at every voxel.

As I was reading your doc, I was a bit confused at first.

.. [1] Dell’Acqua, F., Lacerda, L., Catani, M., Simmons, A., 2014.
Anisotropic Power Maps: A diffusion contrast to reveal low anisotropy tissues from HARDI data,
in: Proceedings of International Society for Magnetic Resonance in Medicine. Milan, Italy.


@mdesco

mdesco Aug 18, 2015

Contributor

your symbol ' makes my python fail when I call the function...

File "/Users/desm2239/Research/Source/dipy/dipy/reconst/shm.py", line 990
SyntaxError: Non-ASCII character '\xe2' in file /Users/desm2239/Research/Source/dipy/dipy/reconst/shm.py on line 991, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details


@arokem

arokem Aug 21, 2015

Member

Yep - I believe that's the apostrophe in Flavio's name. Dell =>`<= Acqua.

This is probably because the author list was copied over from a non-ASCII document (e.g. a pdf with the conference proceedings...).

Please delete that one and put in a standard ASCII apostrophe.
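(For completeness: the SyntaxError above points at PEP 263, so another way out would be to declare the file encoding on the first line of shm.py; the plain ASCII apostrophe is the simpler fix here, though.)

# -*- coding: utf-8 -*-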

----------
coeffs_mtx : ndarray
A ndarray where the last dimension is the
SH coeff estimates for that voxel.


@mdesco

mdesco Aug 18, 2015

Contributor

Not a fan of coeffs_mtx as a choice of argument name. To me, this is not an SH matrix; "SH matrix" is easily confused with the matrix of SH basis coefficients. This is the vector of SH coefficient estimates, as you mention.

@mdesco


Contributor

mdesco commented Aug 18, 2015

Testing right now on a full brain dataset.

I'm skyping with Flavio on Friday; I could ask him to test this. Do you want him to try your test in test_shm?

@arokem


Member

arokem commented Aug 18, 2015

It would be great if you could ask him for advice on how to test this. That is, what would he expect in data generated from a simple case (e.g. from a tensor with e1=1.5 and e2=e3=0.5)? Is there an analytic way to calculate what AP would be in that case?

@mdesco


Contributor

mdesco commented Aug 18, 2015

Ok, will do. But when I look at the equation I don't see an analytical solution with the log and the normalization factor.

@sinkpoint, why do you fix the normalization factor to 1e-5? It should be normalized by APref following Flavio's abstract,

where APref can be a reference power (e.g. a fully anisotropic tensor [2.0, 0.0, 0.0] x 10^-3 mm^2/s).

@sinkpoint


Contributor

sinkpoint commented Aug 18, 2015

@mdesco, the normalization factor is based on Flavio's latest code that he showed me.
I had some trouble figuring out what APref should be, and 1e-5 is what he uses.

@mdesco


Contributor

mdesco commented Aug 18, 2015

Humm... I will discuss this with Flavio. It would be a cleaner solution if this were a data-driven normalization factor, a bit like the response function in deconvolution...


@mdesco


Contributor

mdesco commented Aug 18, 2015

I have tested the sh_to_ap command on different datasets and I have to say that it is quite impressive and works well. Very robust and creates nice looking T1-like maps...

@mdesco


Contributor

mdesco commented Aug 21, 2015

So, here are the suggestions from Flavio and me:

  1. Give ourselves an SH vector, anything, say, sh_true = [1, 10, 11, 12, 13, 14, 20, 21, 22, 23, 24, 25, 26, 27, 28]. Then you know what the sums over the 0th-order, 2nd-order, 4th-order, etc., terms are, so you know what the true power map should be. Then you assert with sh_to_ap (see the sketch after this list).
  2. Another thing to be careful about is to have a valid mask to compute the power map. Otherwise, negative values and zeros will happen and taking the log will make everything go crazy.
  3. Also, I remember @MrBago helping me a lot with loops and with iterating over l, m and SH orders. I think the main for loop could be "prettier".

cheers
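A minimal sketch of what suggestion 1 could look like as a test. The band slicing follows dipy's flattened even-l coefficient ordering, and the log/reference step follows the normalization discussed further down, so the exact expected value is an assumption about the final API rather than the merged code:

import numpy as np

# A hand-picked order-4 SH vector: 1 (l=0) + 5 (l=2) + 9 (l=4) = 15 coefficients.
sh_true = np.array([1, 10, 11, 12, 13, 14,
                    20, 21, 22, 23, 24, 25, 26, 27, 28], dtype=float)

# Power from the abstract's equation: for each even l >= 2,
# (1 / (2l+1)) * sum_m |a_lm|^2, i.e. the mean of squares within that band.
expected_ap = np.mean(sh_true[1:6] ** 2) + np.mean(sh_true[6:15] ** 2)

# With the natural log and the 1e-5 reference discussed below:
expected_log_ap = np.log(expected_ap) - np.log(1e-5)

# The test would then assert the function under review against this value, e.g.:
# assert_almost_equal(sh_to_ap(sh_true), expected_log_ap)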

@arokem


Member

arokem commented Aug 21, 2015

Thanks @mdesco for conferring with Flavio and for the suggestion on testing. I still need to think through it a bit, but @sinkpoint - please let me know if you need help implementing this suggestion.

@sinkpoint


Contributor

sinkpoint commented Aug 24, 2015

I applied the code style suggestions.
I'm not sure I understood the testing suggestions. What I'm doing in the tests seems to be exactly what is suggested: there are 3 sets of known SH vectors, and I assert the method result against the known values.

in: Proceedings of International Society for Magnetic Resonance in Medicine. Milan, Italy.
"""
from dipy.utils.six.moves import xrange


@arokem

arokem Aug 24, 2015

Member

Could you please move this to the imports at the top of the file?

sinkpoint added some commits Jul 1, 2015

- changes according to pull request
- added description of calculations
additional pull request 672 changes
- changed some spelling and equation formatting mistakes
- changed sh_to_ap method name to anisotropic_power
- changed param coeff_mtx to sh_coeffs
- changed method description order
- changed apostrophe character for Dell'Acqua
- using range instead of xrange
- changed formatting to conform to pep8

@sinkpoint sinkpoint force-pushed the sinkpoint:power_map branch from 71fb206 to 94d2736 Sep 15, 2015

@sinkpoint


Contributor

sinkpoint commented Sep 15, 2015

So I performed the rebase; hopefully it's correct.
I corrected the PEP 8 formatting and changed to range().

If someone can check whether the functions can be further vectorized, that'd be helpful.

@arokem


Member

arokem commented Sep 15, 2015

Thanks! I will take a look at the tests and see if I can propose something there.

@arokem


Member

arokem commented Sep 15, 2015

Hey @sinkpoint - I have been looking into the code a bit, and thinking of ways to improve the testing, and just in general (e.g. vectorization). After looking at this for a while, I have a couple of questions:

  1. Why did you use np.ma.log, instead of np.log (in the final calculation of log_ap)?
  2. The equation you wrote in the documentation indicates a summation over the powers of the coefficients, but in single_L_ap you take the mean on the last dimension, rather than the sum. This last question also stems from the fact that I cannot access the original Dell'Acqua ISMRM abstract. Maybe you could share that with me somehow (you can email it to "my github user name"@gmail.com)?

Thanks!

@Garyfallidis


Member

Garyfallidis commented Oct 7, 2015

Hey David! Can you give some feedback about @arokem's refactor? So we can go ahead and merge? Next release is coming soon! :)

@sinkpoint


Contributor

sinkpoint commented Oct 7, 2015

Hi @arokem, I looked at your code changes; everything looks fine.

  1. I think at some point I was playing with masked arrays and this is probably a remnant. It can safely be changed to np.log.
  2. There are two steps in the calculation. Since the coeff list is a flattened LxM array, the calculation takes the mean of the powers within each even L-th level (2l+1 elements), and then sums across the L dimension.
    It'll be clearer if you look at figure 12 of Frank LR. Characterization of anisotropy in high angular resolution diffusion-weighted MRI. Magn. Reson. Med. 2002;47:1083-1099. doi: 10.1002/mrm.10156.

Also to note, and I think you guys already know this: dipy does not supply the full list of coefficients; it only returns a flattened vector of even-L coefficients.
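A small illustration of the band layout being described, with a made-up order-4 coefficient vector (not data from the PR):

import numpy as np

# dipy's even-l real SH basis flattens (l, m) into one axis: 1 + 5 + 9 + ... coefficients.
sh = np.arange(15, dtype=float)      # order-4 example: bands l = 0, 2, 4

bands = {2: sh[1:6], 4: sh[6:15]}    # the single l = 0 term is excluded from AP
ap = 0.0
for l, a_lm in bands.items():
    # each band holds 2l + 1 coefficients, so the mean of the squared values
    # equals (1 / (2l+1)) * sum_m |a_lm|^2 from the equation
    ap += np.mean(np.abs(a_lm) ** 2)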

@arokem


Member

arokem commented Oct 12, 2015

Thanks for taking a look and for sending over the abstract.

The question I had is why the mean, and not the sum (on this line: https://github.com/nipy/dipy/pull/724/files#diff-a968e0bd2a347e965a3ba717938535f9R1057)?

The formula that appears in the abstract has a summation (followed by division by 2l+1), but the code you wrote uses the mean for the calculation and then also divides by 2l+1. I kept the mean in my refactor, but I am not sure whether it makes sense, because it seems not to be a direct implementation of the equation that appears in the abstract.


@sinkpoint


Contributor

sinkpoint commented Oct 13, 2015

@arokem
Ah right. The mean() is because the inner expression 1/(2l+1) * SUM(A_l,m) is basically an averaging operation. There are 2l+1 A_l,m elements, so dividing the sum by (2l+1) is the same as mean().

I used the numpy power and mean functions over the m vectors to try to vectorize the operations as much as possible.
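In symbols, the equivalence being described (same notation as the docstring above):

.. math::
    \frac{1}{2l+1} \sum_{m=-l}^{l} |a_{l,m}|^2 = \operatorname{mean}_m\left(|a_{l,m}|^2\right)

since each even-l band contains exactly 2l + 1 coefficients.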

@arokem


Member

arokem commented Oct 13, 2015

But aren't we currently doing both? Taking the mean AND dividing by 2l + 1?

@sinkpoint


Contributor

sinkpoint commented Oct 13, 2015

Haha, you are right. I forgot to comment out the next line after I took the mental shortcut.
Please remove the divide-by-(2l+1) line.

@arokem


Member

arokem commented Oct 13, 2015

Right on. But now the test is failing, because the numbers were generated
with this double-division by 2l + 1 taken into account.

Which brings us back to a previous discussion. Any way to be sure that we
are doing the calculation correctly? Any ground truth for this? For
example, it seems that it should be equal to 0 for a signal generated from
a "fully anisotropic tensor" (as defined in the abstract).


@sinkpoint


Contributor

sinkpoint commented Oct 13, 2015

Yup, the only way to be sure is to have a test dataset from Flavio.
He hasn't responded to my previous request; I can ask him again for the dataset.

@deflavio


deflavio commented Oct 16, 2015

Hey, it's Flavio.

Let me clarify a couple of things, since I suspect we may not be working on the same version of the AP equation, or at least not on how it will be in the paper...

Just after I submitted the ISMRM abstract I realised that there is no need for a special reference based on a signal from a fibre. Since the DWI signal is usually normalised to the b0, AP is already normalised, so if APnp = ln(AP/APref) you can simply assume APref = 1: just take the log and forget about that. The only issue is that AP is then defined over negative values as well. If that's not a problem for you guys this is the clean solution, but I have the feeling that for registration and segmentation having negative values doesn't help.

So, to have more “traditional” positive-valued maps you can instead use APref = 10^-5 as the reference; the effect is the same as shifting to positive values since log(A/B) = log(A) - log(B). I have chosen the value 10^-5 because, with all the data I have used, it looks like a reasonable background power, i.e. there is definitely nothing to recover below that power.

To display the maps, I found that visualising between 1 and 10 does the trick to get a T1w-like effect.

David, for your paper you may actually want to consider playing around a bit with which “visualisation” range is also optimal for the registration... just an idea.

I'm sending a test dataset now so you can compare the AP maps (with the two references I mentioned) and see if you get the same values.

I hope this helps.

Flavio
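A tiny numerical illustration of the two reference choices just described (the AP values are made up):

import numpy as np

ap = np.array([1e-4, 1e-3, 1e-2])      # hypothetical raw AP values

log_ap_ref_one = np.log(ap / 1.0)       # APref = 1: simplest, but can go negative
log_ap_ref_1e5 = np.log(ap / 1e-5)      # APref = 1e-5: same map, shifted positive

# log(A / B) = log(A) - log(B), so the two maps differ only by a constant shift:
assert np.allclose(log_ap_ref_1e5 - log_ap_ref_one, -np.log(1e-5))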

@arokem


Member

arokem commented Oct 16, 2015

Thanks @deflavio for taking the time to pitch in! Your explanation makes a lot of sense - you basically just subtract log(1e-5) from the entire map, to shift it to positive values. And there's nothing 'magical' about it, but it's practical. Seems good.

Mind if I ask how the test data set is generated?

@arokem


Member

arokem commented Oct 16, 2015

Another question that occurs to me on rereading your comment: which base of log are you using? Do you use the natural log ('ln') in the first one, and base-10 log ('log') in the second one?

Or in other words, do you simply add a 4 to the entire map to make it non-negative?

@sinkpoint


Contributor

sinkpoint commented Oct 16, 2015

Thanks @deflavio
I also realized that by taking APref=1, there are negative values in the result. What I did was simply detect the image minimum and shift the entire image to the positive scale by addition. This allows me not to care about tuning APref for different scans. In theory this should make the registration compatible with different software packages (if they don't like negatives), without affecting the intensity scales.

This is not part of this code however, since I thought it's just my own change. If @arokem wants it I can add it.
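For concreteness, the min-shift post-processing described here amounts to something like the following (a sketch of that separate step, not code from this PR):

import numpy as np

def shift_to_positive(log_ap):
    # Shift the whole map so its minimum becomes zero (data-driven, per image).
    return log_ap - np.min(log_ap)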

@arokem


Member

arokem commented Oct 16, 2015

This would probably fail rather dramatically in cases where some extreme
values appear due to (for example) noise. Is this a plausible failure mode?


@sinkpoint


Contributor

sinkpoint commented Oct 16, 2015

Potentially; that's why it's just part of my test code.
In my test cases, though, it has been pretty stable, with very small variations in registration results across 20 subjects. This of course was with the double averaging operation. :/

The question is: if a scan gives a very extreme value, is that scan still usable to begin with?

@arokem


Member

arokem commented Oct 16, 2015

Also - your 20 subjects were probably all collected under similar conditions. I imagine that once we make this available, people will try this on all kinds of data with artifacts that you might not have in your data. Let's stick to the relatively conservative version of this: adjust by a fixed factor (log(1e-5) per default), and then optionally rectify negative numbers to 0 (I just added that as a kwarg, with the default set to True - what do you think about that?).

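A minimal sketch of the behaviour being proposed; the helper name and the handling of non-positive power values are illustrative assumptions, not the merged dipy code:

import numpy as np

def normalize_ap(ap, norm_factor=1e-5, non_negative=True):
    # Shift by the fixed reference log(norm_factor), applying the log only to
    # positive power values to avoid log(0).
    log_ap = np.zeros_like(ap)
    positive = ap > 0
    log_ap[positive] = np.log(ap[positive]) - np.log(norm_factor)
    # Optionally rectify any remaining negative values to 0 (default behaviour).
    if non_negative:
        log_ap[log_ap < 0] = 0.0
    return log_ap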

@sinkpoint


Contributor

sinkpoint commented Oct 16, 2015

ok sounds good

@deflavio


deflavio commented Oct 16, 2015

@arokem No, sorry, there is no base-10 log. I'm always using the natural log (ln) to be consistent with the neper scale, and there is no real point in changing it.

@sinkpoint About looking for the minimum and then shifting, I'm not so sure, since noise and artefacts may change the level every time, making all maps slightly different and certainly not compatible across protocols and scanners... and that's not really what I had in mind for these maps. That is what the whole normalisation issue is about.

In principle, after b0 normalisation all AP maps can be compared even when the numbers of directions are different (but that's another story...).

About the data: sorry, I got delayed at work... sending it later.

F

@Garyfallidis


Member

Garyfallidis commented Oct 19, 2015

Resolved in #672. Thank you all :)
