EquispacedMaskFunc doesn't generate equidistant masks #54

Closed
zaccharieramzi opened this issue Aug 6, 2020 · 4 comments
@zaccharieramzi (Contributor) commented Aug 6, 2020

Hi,

In the fastMRI paper, we can see that:

For brain, after a random offset from the start, the remaining lines are sampled equidistant from each other with a spacing that achieves the desired acceleration factor.

However, when looking at the masks we have in the data, we can see that the remaining lines are actually not sampled equidistant from each other.
If we look, for example, at file_brain_AXFLAIR_200_6002441, which has an acceleration factor of 8, we can see that the lines outside the autocalibration region are actually spaced either by 10 or by 11, and are therefore not equidistant from each other.
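For reference, this can be checked directly from the test file (a minimal sketch, assuming the 1D sampling mask is stored under the "mask" key, as in the fastMRI test/challenge files):

```python
import h5py
import numpy as np

# Minimal check of the sampling pattern in a fastMRI test file.
# Assumes the 1D sampling mask is stored under the "mask" key.
with h5py.File("file_brain_AXFLAIR_200_6002441.h5", "r") as f:
    mask = np.asarray(f["mask"]).astype(bool)

sampled = np.where(mask)[0]           # indices of the acquired k-space lines
print(np.unique(np.diff(sampled)))    # spacings: 1 inside the ACS block,
                                      # 10 and 11 outside it for this 8x file
```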

I think this is due to the implementation that can be found here. You can see on this line that the rounding might not preserve the equidistant spacing (and in fact probably will not whenever the adjusted acceleration factor is not an integer).

For example, with an acceleration factor of 4 (a center fraction of 8%) and 372 lines to subsample (seed 0), the first 4 remaining lines are:

4.,   9.42857143,  14.85714286,  20.28571429

They are perfectly equispaced by the adjusted acceleration factor (5.428571428571429).
After the rounding we have:

4,   9,  15,  20

Now the spacing is either 5 or 6.
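
Here is a minimal numpy sketch of the effect (the offset and adjusted factor match the example above; the actual EquispacedMaskFunc code differs in the details):

```python
import numpy as np

num_cols = 372            # number of k-space lines to subsample
accel = 4
center_fraction = 0.08

num_low_freqs = int(round(num_cols * center_fraction))   # 30 autocalibration lines
# Acceleration adjusted so that, together with the fully sampled center,
# the overall factor stays ~4; this reproduces the 5.428571... quoted above.
adjusted_accel = accel * (num_low_freqs - num_cols) / (num_low_freqs * accel - num_cols)

offset = 4                # first retained line in the example (seed 0)
exact = np.arange(offset, num_cols, adjusted_accel)
rounded = np.around(exact).astype(int)

print(exact[:4])                    # [ 4.          9.42857143 14.85714286 20.28571429]
print(rounded[:4])                  # [ 4  9 15 20]
print(np.unique(np.diff(rounded)))  # [5 6] -> the spacing alternates between 5 and 6
```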

I think the problem is that for a given integer acceleration factor, you are not going to have equidistant spacing for all the shapes.

What I did in my implementation (before noticing that the test masks were not following that rule) was to fix the number of remaining lines to place and to determine the spacing between them from that number, as sketched below. However, that means that my acceleration factor is no longer exactly 4, but sometimes lower, sometimes higher.
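
Something along these lines (a hypothetical sketch of that alternative, not the fastMRI implementation): the number of lines outside the autocalibration region is fixed first, the spacing is then rounded to an integer, and the effective acceleration drifts slightly from the nominal one.

```python
import numpy as np

def exactly_equispaced_mask(num_cols, accel, center_fraction, rng=None):
    """Hypothetical variant: keep an integer spacing so the retained lines are
    exactly equidistant, at the cost of an acceleration that is only ~accel."""
    rng = np.random.default_rng() if rng is None else rng
    num_low_freqs = int(round(num_cols * center_fraction))
    num_sampled = int(round(num_cols / accel))        # total lines we want
    num_outer = max(num_sampled - num_low_freqs, 1)   # lines outside the ACS block

    spacing = max(int(round(num_cols / num_outer)), 1)  # integer -> exact equispacing
    offset = int(rng.integers(0, spacing))

    mask = np.zeros(num_cols, dtype=bool)
    mask[offset::spacing] = True                      # equidistant outer lines
    pad = (num_cols - num_low_freqs + 1) // 2
    mask[pad:pad + num_low_freqs] = True              # fully sampled ACS region
    return mask
```

Because the spacing is rounded and the ACS block overlaps some of the equidistant lines, the effective acceleration ends up slightly above or below the nominal value, which is the trade-off mentioned above.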

I am filing this as an issue since the EquispacedMaskFunc class doesn't respect the documentation (or the paper) from what I understand, but I am happy to discuss it in the forum.

@mmuckley (Contributor)

This is an interesting issue. Certainly from an image-domain SENSE point of view you'd want the exact spacing, but the inexact spacing currently in the repo might actually help things from a compressed sensing/incoherence point of view.

Ultimately, though, what's in the repo should be targeted mostly at matching what's in the public test and challenge data sets, since those cannot be changed. If they have the same staggered offset characteristics, then we will have to leave this as is.

@zaccharieramzi (Contributor, Author)

Yes, I was actually thinking of that from a GRAPPA point of view (since it was also mentioned in the paper).

Currently, GRAPPA approaches will fail unless they use a tailored implementation with roughly twice as many geometries.

I also think that, in any case, it might be nice to clarify this point somewhere. I understand that it's trickier to change the test and challenge sets here, since the values could be used directly (as opposed to the sign issue in v1 of the knee dataset), but it could be so beneficial for comparison with other approaches. I have typically been asked a lot about comparisons with GRAPPA, and I was hoping to be able to make a public point with the brain dataset (as opposed to using my homemade masks).

I think if you don't plan to take any action though, we can close this.

@mmuckley (Contributor)

Yeah - of course people on the fastMRI team have also used GRAPPA.

I just opened file_brain_AXT2_210_6001948.h5 - also from the test set - and I'm also observing this behavior there.

I will add a note to the function saying that, due to its implementation, some customization of GRAPPA methods will be necessary for the challenge set. I'll also relay this to the team in case we want to make any changes to the challenge data.

@mmuckley mmuckley linked a pull request Aug 14, 2020 that will close this issue
@mmuckley mmuckley removed a link to a pull request Aug 14, 2020
@zaccharieramzi (Contributor, Author)

Yes, I actually read that paper, and it added to my confusion when I saw this "problem". I actually sent an e-mail to one of the authors, @tullie, who forwarded it to @anuroopsriram, to understand more about how they tackled this issue of custom GRAPPA.

Closing this, since I don't think any more action is required.
