
VisuCoreTransposition not corresponding to RECO_transposition in version 5.1 #11

Open
SebastianoF opened this issue Jul 28, 2017 · 17 comments


@SebastianoF
Owner

SebastianoF commented Jul 28, 2017

VisuCoreTransposition should reflect the value of RECO_transposition, but in some cases in version 5.1 it does not. This ParaVision behaviour can cause problems in the re-orientation (X and Y are swapped when RECO_transposition > 0). We need a way to recover RECO_transposition from the visu_pars file alone.
Thanks to @neurolabusc for pointing this out.
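For illustration, the in-plane swap described above can be sketched as a small helper. This is a minimal sketch, not code from bruker2nifti; the function name and the `(x, y, z)` shape convention are assumptions.

```python
def untranspose_shape(shape, reco_transposition):
    """Undo the in-plane swap implied by RECO_transposition.

    `shape` is the (x, y, z) matrix size read from the reco/visu
    parameters; a non-zero RECO_transposition means the first two
    axes were written to disk swapped (hypothetical helper name).
    """
    if reco_transposition > 0:
        return (shape[1], shape[0]) + tuple(shape[2:])
    return tuple(shape)

print(untranspose_shape((128, 96, 30), 1))  # (96, 128, 30)
print(untranspose_shape((128, 96, 30), 0))  # (128, 96, 30)
```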

@gdevenyi
Contributor

Indeed, in the PV5.1 data I have, VisuCoreTransposition doesn't exist in visu_pars.

I have lots of data and access to a PV5.1 manual and console, happy to share anything.

@neurolabusc

Note that images from @gdevenyi are already included with bruker2nifti_qa. I think a large dataset should allow a dedicated developer to decipher how the format has changed over time and to devise a stable method to extract spatial coordinates regardless of the vintage of the data. It is unfortunate that Bruker, as a professional vendor, did not document this. If anyone wants to tackle this, they could open a pull request.

@gdevenyi
Contributor

I now have PV5 and PV6.1 on the same computer, attached to the same MRI. I can conceivably do some head-to-head comparisons of the same sequences collected on both versions.

Any requests?

@SebastianoF
Owner Author

SebastianoF commented May 15, 2019

Oh, well... Thanks!!
Do you think an example of a 2D and a 3D sequence acquired with both versions on the same sample (where left and right are distinguishable) may be of some help for further testing?

We should also involve the bruker2nifti_qa maintainers, who may like to host the example alongside the current dataset (I raised an issue pointing at this page).

Thanks again for the offer!

@SebastianoF
Owner Author

Morning @gdevenyi,
Good news from the bruker2nifti_qa side. Please follow up on this thread. @naveau has good insight into possible sequences for comparisons.
Thanks!

@neurolabusc

@gdevenyi it might be nice to have diffusion images, with one set where slices are aligned orthogonal to the scanner bore and another where they are rotated. It would also be nice to have the NIfTI files exported by ParaVision 360 1.1. I am hoping that Bruker is now able to convert their proprietary format directly to standard formats, obviating the need for 3rd-party tools that rely on reverse engineering.

@cecilyen may also have ideas.

Do the Bruker files record whether the participant was scanned prone versus supine? Do they encode whether the participant is a biped or a quadruped (similar to DICOM)? Since NIfTI expects coordinates in the frame of reference of the participant, this would be useful to know.

@naveau

naveau commented May 16, 2019

On PV6.0.1, the patient position is reported in these files:

subject: ##$SUBJECT_position=SUBJ_POS_Prone
1/acqp: ##$ACQ_patient_pos=Head_Prone
1/visu_pars: ##$VisuSubjectPosition=Head_Prone

The type of participant is reported here:

subject: ##$SUBJECT_type=Quadruped
1/visu_pars: ##$VisuSubjectType=Quadruped
1/pdata/1/visu_pars: ##$VisuSubjectType=Quadruped
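Entries like the ones above follow the JCAMP-DX convention of `##$Key=Value` lines, so they can be picked up with a minimal parser. This is a sketch under that assumption; multi-line array values, which real Bruker files also contain, are deliberately not handled here.

```python
import re

def read_jcamp_params(text):
    """Minimal parser for ##$Key=Value entries in Bruker JCAMP-DX
    style parameter files (subject, acqp, visu_pars).
    Multi-line array values are not handled in this sketch."""
    params = {}
    for m in re.finditer(r"^##\$(\w+)=(.*)$", text, re.MULTILINE):
        params[m.group(1)] = m.group(2).strip()
    return params

sample = (
    "##$SUBJECT_position=SUBJ_POS_Prone\n"
    "##$SUBJECT_type=Quadruped\n"
)
params = read_jcamp_params(sample)
print(params["SUBJECT_position"])  # SUBJ_POS_Prone
```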

@dvm-shlee

dvm-shlee commented May 10, 2020

@cecilyen, @neurolabusc, @SebastianoF @gdevenyi
Hi all, I am an animal fMRI researcher at UNC Chapel Hill in the US, and I have also developed my own Bruker-to-NIfTI converter for internal use in our group. Recently I was working on some projects that need multimodal images and struggled to get the orientation consistent among them; other converters had not resolved the issue either, so I decided to rebuild my code from scratch during the COVID-19 pandemic (I might not have started this work had I known that ParaVision 360 already supports NIfTI conversion, since our group is planning to upgrade to it).

Anyway, I just want to let you know that I've made some progress to resolve the issue related to the incorrect patient position. But only tested with the data in our group. So far, all tested data have shown consistent orientation with each other, even with an oblique FOV and different slice axes across different modalities and acquisition methods.

I figured this out through a lot of trial and error, so I cannot explain in much detail what I've done here, but briefly: I found the parameter keys causing those issues, particularly in PV6.0.1, to be PVM_SPackArrGradOrient and VisuCoreDiskSliceOrder. The first one appears to control the gradient orientation at a hidden layer specific to PV6.0.1. VisuCoreDiskSliceOrder is related to 3D acquisition and is also described briefly in the manual; if it is set to disk_reverse_slice_order, the image will be flipped along the slice axis.
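The VisuCoreDiskSliceOrder correction described here amounts to reversing the slice stacking before building the volume. A minimal sketch of that step (the function name and the list-of-slices representation are assumptions, not brkraw's actual code):

```python
def fix_disk_slice_order(slices, visu_core_disk_slice_order):
    """Reverse the slice stacking when VisuCoreDiskSliceOrder is
    disk_reverse_slice_order; `slices` is assumed to be a list of
    2D slices in the order they were stored on disk."""
    if visu_core_disk_slice_order == "disk_reverse_slice_order":
        return slices[::-1]
    return slices

print(fix_disk_slice_order(["s0", "s1", "s2"], "disk_reverse_slice_order"))
# ['s2', 's1', 's0']
```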

As of now, I think I have resolved the issue, and I would like to ask for your help to check if it really works well or not with other images. Link: GitHub. Any feedback would be really appreciated.

BTW, for your information, regarding the subject position: my code treats Supine as a Prone position in the case of a Quadruped. As you already know, ParaVision does not handle that well and shows an inverted DV image with Prone. I tried to take into account all possible positions as much as possible, but the other cases have not been tested because I don't have sample data. If it works fine, the A-P, L-R, and S-I directions will be accommodated to the subject orientation.

@SebastianoF
Owner Author

SebastianoF commented May 11, 2020

Hi @dvm-shlee !
Thanks for your contribution. I see that you too have:

resolve the issue related to the incorrect patient position. But only tested with the data in our group.

and

if it really works well or not with other images.

This summarizes really well why this issue is still open. It worked on my data, but I cannot be sure it works for a different acquisition parametrisation.

A possible benchmark dataset is provided by Michael Naveau with bruker2nifti_qa, which you can use to test your solution.

An ideal pathway to unify the conversions into a standard, though, would be to compare the formulas underpinning the code of each converter, rather than comparing the outcome of the code on different datasets.

Unfortunately, I no longer have access to Bruker data (I changed job... twice), so I can no longer help with this. I leave it to you to test. Also, if you, or anyone else, is keen on taking over the maintenance of bruker2nifti, I would be happy to transfer the repo, or to point at its fork, if you consider this still worth doing after the official converter has been released.

@SebastianoF
Owner Author

P.S. I added your converter to the list of available open-source converters in my code documentation: https://github.com/SebastianoF/bruker2nifti/wiki/References

@neurolabusc

@dvm-shlee I have updated my moribund Bru2Nii to link to your new work. Since my site no longer has Bruker equipment and Sebastiano has moved on, the mantle for supporting this poorly documented, internally contradictory, buggy and evolving "format" (in the loosest sense of the word) has passed to you. I earnestly feel the scientific community needs someone in this role. Thank you for your service. Hopefully the direct support for NIfTI in Paravision 360 will reduce the burden on you in the long term.

@dvm-shlee

dvm-shlee commented May 11, 2020

@neurolabusc Thank you for all your advice, and I am really sad to hear that your site no longer maintains Bruker equipment and that Dr. Ferraris has left this field. I also feel that the PVdataset format makes accessing the data very annoying, and I hope they do not change it internally again in a way that makes third-party tools hard to maintain. The reason I've done this recently is that COVID-19 stopped all of my animal research, and I had not expected them to add a NIfTI converter to their new software. In any case, ParaVision 360 requires the new AvanceNeo hardware, which most groups in academia will not find easy to upgrade to soon.

I think maintaining a third-party converter is still required for researchers who need to reuse their old data, need to access the FID and acquisition data to develop reconstruction tools, or need to convert multiple datasets in some automated manner (which is one of my priorities). Since your group has accumulated a lot of effort on this, even though you no longer have Bruker equipment, I will try to accommodate your work into my repo as much as possible, and will not miss granting credit for all your prior work.

@SebastianoF Thank you for the really helpful advice and for sharing your ideas, including your email. First, I haven't even submitted this work as a paper; I have just prepared a draft, because I don't want my effort to end as just a converting tool for our internal collaborators. And actually, your previous efforts in sharing your work with the community have inspired me a lot recently.

Sorry for you if you have any feeling that I've spooned anything from your prior works and does not put any credit of your work on my manuscript (again, it is just a draft, and I will accommodate all your feedback as much as possible), but I really hadn't looked much at other people's work before the COVID-19 pandemic, since I first made my converter in 2015 for my own data conversion when our magnet quenched. Frankly, I found your code around 2018 and hoped that the issues had been resolved by others; unfortunately, they had not. But I did not follow the active issues much, because I was not familiar with GitHub at that time. So I decided to start hosting my code on GitHub, inspired by you, for better version control, since I had struggled to maintain my code when other colleagues in our center who use it to convert their data complained a lot about its stability (link for the old version). In addition, the performance of my code was relatively naive compared to this repo in terms of converting modalities other than EPI.

Recently I realized that you published your work in 2017 and built a great community to initiate collaboration. This is the main reason I first posted on this thread to get your feedback on this issue, before sharing or submitting it to the public domain. (My original plan was to share this code with experimental fMRI colleagues who use Bruker scanners, for building a standard pipeline, so the only others I've contacted so far are the BIDS community, since I have been focusing more on the data organization.)

Regarding the modalities, I've tested pretty many cases using the data in our center that previously had conversion issues, for internal testing, which I cannot share with the public. But I'm sure that within a session, whatever your parameters are, the coordinate space will be the same (the magnet center is the image center), and I'm confident about this (although it would also be appreciated if there is any dataset that conflicts with my code). The one thing I mentioned I was not sure about is the accommodation of species and position in the parameters. It is handled at the last layer of constructing the affine in my code (code link), by rotating the affine matrix based on the animal position. So unless Bruker changes the way they construct the affine matrix, it should work fine in terms of preserving the orientation within the same session. If some condition is wrong, I just need to change the parameter that rotates the affine matrix. So feedback from other sites would be greatly appreciated.
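The "rotating the affine matrix based on the animal position" step can be illustrated as below. This is a sketch only: the rotation convention, function names, and parameter values are assumptions for illustration, not brkraw's actual implementation.

```python
import numpy as np

# 180-degree rotation about the anterior-posterior axis: flips the
# left-right and dorsal-ventral directions. Illustrative convention,
# not necessarily the one brkraw uses.
FLIP_LR_DV = np.diag([-1.0, 1.0, -1.0, 1.0])

def orient_affine(affine, subject_type, subject_position):
    """Premultiply the scanner-frame affine so that a prone quadruped
    ends up in the same subject frame as a supine acquisition
    (hypothetical helper, parameter names taken from visu_pars)."""
    if subject_type == "Quadruped" and subject_position.endswith("Prone"):
        return FLIP_LR_DV @ affine
    return affine

a = orient_affine(np.eye(4), "Quadruped", "Head_Prone")
print(a[0, 0], a[2, 2])  # -1.0 -1.0
```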

Regarding your feedback on Python module design, I really appreciated it. I actually recently started feeling the same way, so the fMRI analysis tool is developed as an extension module that uses the brkraw module as an API to access the data (link), as do all the other projects I have been working on. This made me spend a lot of time reorganizing my repositories (the reason I removed and recreated lots of them), but after that, productivity improved a lot. I really appreciate your attitude of sharing your know-how; especially, your way of describing it as Lego helped me a lot to build a clear concept for my future projects! I'm really sad that a talented researcher like you is no longer in the same field (preclinical MRI). Again, I really appreciate all the valuable comments and advice. I am not sure I can publish this work in JOSS soon, since the submission process is suspended, but any comments on my manuscript would be greatly appreciated.

@dvm-shlee

dvm-shlee commented May 11, 2020

@SebastianoF Again, thank you for your advice regarding testing the QA dataset shared by @naveau. It actually revealed three issues with datasets I hadn't tested before, but the results are still promising compared to the original issue I struggled with. Below is the result of the conversion; I made the images transparent so they overlap each other within each session, with the last image showing the between-session comparison. I also summarize the issues in my code that still need to be addressed at the end of this comment.

[Image: QA conversion results]

The first issue was the case of advanced slicing: I need some additional correction of my code to deal with multi-slice-pack datasets (detected in the Cyceron dataset, scan #1, localizer multi-slices). The second issue may be related to the multi-echo and multi-rep cases (detected in the Cyceron Multiecho dataset, scans #2, 5, 6), which I have no experience dealing with. These two issues can likely be corrected with the test dataset; I really appreciate it, @naveau. I expect the above will also not perform well on PV6.0.1 datasets, so if anyone has these, it would be really helpful for troubleshooting.

The last issue seems to be just my mistake regarding the data sliced along the Y-axis, which showed up in scans #5 and 9 in 20130412_APM_DEV_orient. I will correct this soon.

This is a really helpful dataset! I am not sure you are still interested in this issue, but I will post updates here for other researchers if I make any progress.

Also, if there are any datasets that had issues before and need to be tested, please let me know; I will try to check whether those issues can be resolved with my code.

Thanks!

@dvm-shlee

[Image: Bruker2Nifti_QA challenge results]

I've just corrected all the issues: all Bruker2Nifti_QA data now convert with correct orientation,
centered at the magnet center.
The only thing I am still struggling with is the intensity slope correction for the reconstructed DTI, so please let me know if you have any advice on that.

The reconstructed DTI (RecoID 2) of scans #2 and 3 has 22 frames, but VisuCoreDataSlope has 110 values. Do you have any clue how to deal with this? I'm not familiar with DTI images, so I need to know whether this is a crucial step that needs to be applied.

@SebastianoF
Owner Author

Sorry for you if you have any feeling that I've spooned anything from your prior works and does not put any credit of your work on my manuscript.

Not at all!! The code is there to be used, I do not need credit, and I have no hard feelings at all. 🙈

The point was for you not to re-do anything that is already there, though sometimes re-doing things already done is the best way to learn and to improve open-source code.

About the VisuCoreDataSlope question: I think I had a similar issue, though for my dataset I ended up not correcting for the slope, which is how I "solved" it.

I'll let you know here if something else comes to mind!

@SebastianoF
Owner Author

SebastianoF commented May 13, 2020

Ah, yes, I could omit the slope because all the values in VisuCoreDataSlope had the same magnitude.

If this is not your case, then you could ask the question in a new issue thread (or maybe on Stack Overflow); here it may not be very visible. Sorry I cannot give you any further help on this matter.
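For reference, the "all slopes have the same magnitude" shortcut described above can be sketched like this. The function name and data layout are hypothetical; per-frame slopes (e.g. 110 values for 22 frames, as in the DTI case) would need frame grouping that this sketch does not attempt.

```python
def apply_uniform_slope(frames, visu_core_data_slope):
    """Apply VisuCoreDataSlope only when every frame shares the same
    slope; non-uniform slope lists are left to per-frame handling."""
    slopes = set(visu_core_data_slope)
    if len(slopes) != 1:
        raise NotImplementedError("non-uniform slopes need per-frame handling")
    s = slopes.pop()
    return [[v * s for v in frame] for frame in frames]

print(apply_uniform_slope([[1, 2], [3, 4]], [0.5, 0.5]))
# [[0.5, 1.0], [1.5, 2.0]]
```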

@gdevenyi
Contributor

This is amazing work. Great job!

Here I offer a new phantom I have developed, and a scan on PV5.1. I now have PV6, and I intend to run a bunch of scans against this phantom in PV5 and PV6, but... COVID. It'll happen eventually.

https://we.tl/t-Sgi1bHflNQ
