antsApplyTransformsToPoints loads CSV points as nan #733
Comments
On Sun, Mar 10, 2019 at 6:05 PM Jirka Borovec wrote:
*Describe the bug*
I have some difficulty with transforming points according to the estimated
registration transform.
*To Reproduce*
Steps to reproduce the behaviour:
1. using binary - ANTs-1.9.v4-Linux.tar.gz from
https://sourceforge.net/projects/advants/files/ANTS/ANTS_Latest/
2. the image registration works fine as well as warping points
3. the input CSV file
x,y
62,290
75,408
78,470
119,226
125,321
...
4. warping command
/Applications/ANTs-1.9.v4-Linux/bin/antsApplyTransformsToPoints --dimensionality 2 --input /results/BmANTs/0/Rat_Kidney_PanCytokeratin.csv --output /results/BmANTs/0/Rat_Kidney_PanCytokeratin.csv --transform [/results/BmANTs/0/trans0GenericAffine.mat, 1] -t /results/BmANTs/0/trans1InverseWarp.nii.gz
5. logging message
Input csv file: /results/BmANTs/0/Rat_Kidney_PanCytokeratin.csv
=============================================================================
The composite transform is comprised of the following transforms (in order):
1. /results/BmANTs/0/trans1InverseWarp.nii.gz (type = DisplacementFieldTransform)
2. inverse of /results/BmANTs/0/trans0GenericAffine.mat (type = AffineTransform)
=============================================================================
point-in = [nan, nan] point-out = [nan, nan]
point-in = [nan, nan] point-out = [nan, nan]
point-in = [nan, nan] point-out = [nan, nan]
point-in = [nan, nan] point-out = [nan, nan]
point-in = [nan, nan] point-out = [nan, nan]
...
*Expected behavior*
I am wondering why the loaded CSV has only NaN values as input. What is the supported CSV format?
--
brian
Thanks for your fast reply. I have tried the sample code from the repo; any idea what I am doing wrong?
Guess: old version with bug
On Sun, Mar 10, 2019 at 6:27 PM Jirka Borovec wrote:
Thanks for your fast reply. I have tried the sample code from the repo
chicken-master$ ~/Applications/ANTs-1.9.v4-Linux/bin/antsApplyTransformsToPoints -d 2 -i ./data/chicken-3-ref.csv -o testq2.csv -t [ data/chicken3to4-ref.mat , 1]
and got very similar results
Input csv file: ./data/chicken-3-ref.csv
=============================================================================
The composite transform is comprised of the following transforms (in order):
1. inverse of data/chicken3to4-ref.mat (type = AffineTransform)
=============================================================================
point-in = [nan, nan] point-out = [nan, nan]
point-in = [nan, nan] point-out = [nan, nan]
Output warped points to csv file: testq2.csv
any idea what I am doing wrong?
--
brian
@stnava could you please extend the example with point warping, especially what the transformation order should be? My case with registration:
...
and warping the image and points:
...
Hi @Borda, try running the runthis.sh script. It contains calls to both antsApplyTransforms and antsApplyTransformsToPoints.
The script is fine for showing the capabilities, but it is not very instructive since it does not explain the options, or whether these are the only ones... It seems that the input image can also be a JPEG, but the output can only be NIfTI.
This looks correct to me, except that you are overwriting the input with the output, which I would not recommend. The input points should be in ITK physical space and in the format "x,y,z,t". Since your data is 2D, the last two dimensions here would be zero. If you open your image in ITK-SNAP, you can go to Tools -> Image Information, which will show you the ITK coordinates of a point in the image.
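To make the x,y,z,t physical-space requirement concrete, here is a minimal sketch of converting pixel indices (such as the 62,290 values in the original CSV) into ITK physical coordinates and writing them in the expected format. SimpleITK and the file names are my assumptions, not something stated in this thread.

# Sketch: convert (x, y) pixel indices to ITK physical (LPS) coordinates and
# write the x,y,z,t CSV that antsApplyTransformsToPoints expects.
# Assumptions: SimpleITK is installed; the image and output file names are placeholders.
import csv
import SimpleITK as sitk

img = sitk.ReadImage("Rat_Kidney_PanCytokeratin.nii")  # image the points were picked on
pixel_points = [(62, 290), (75, 408), (78, 470)]       # (x, y) pixel indices from the CSV

with open("points_physical.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y", "z", "t"])
    for px, py in pixel_points:
        # Applies the image origin, spacing and direction to get physical coordinates.
        X, Y = img.TransformIndexToPhysicalPoint((int(px), int(py)))
        writer.writerow([X, Y, 0, 0])  # 2D data: z and t stay zero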
Yes, and ANTsR or ANTsPy are probably better at this.
On Mon, Mar 11, 2019 at 4:10 PM Philip Cook wrote:
/antsbin/bin/antsApplyTransforms \
  --dimensionality 2 \
  --input /results/BmANTs/1/Izd2-29-041-w35-proSPC.nii \
  --output /results/BmANTs/1/Izd2-29-041-w35-proSPC.nii \
  --reference-image /results/BmANTs/1/Izd2-29-041-w35-He.nii \
  --transform /results/BmANTs/1/trans1Warp.nii.gz \
  -t /results/BmANTs/1/trans0GenericAffine.mat \
  --interpolation Linear
/antsbin/bin/antsApplyTransformsToPoints \
  --dimensionality 2 \
  --input /results/BmANTs/1/Izd2-29-041-w35-proSPC.csv \
  --output /results/BmANTs/1/Izd2-29-041-w35-proSPC.csv \
  --transform [ /results/BmANTs/1/trans0GenericAffine.mat , 1 ] \
  -t /results/BmANTs/1/trans1InverseWarp.nii.gz
This looks correct to me, except that you are overwriting the input with
the output, which I would not recommend.
The input points should be in ITK physical space and in the format
"x,y,z,t". Since your data is 2D, the last two dimensions here would be
zero.
If you open your image in ITK-SNAP, you can go to Tools -> Image
Information, which will show you the ITK coordinates of a point in the
image.
--
brian
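Since ANTsR/ANTsPy are suggested here, a minimal sketch of the same point-warping step in ANTsPy follows. The point values are placeholders, the transform paths are taken from the commands quoted above, and the transform order and inversion flags simply mirror the CLI call, so treat this as a sketch rather than a verified recipe.

# Sketch: warp points with ANTsPy instead of the antsApplyTransformsToPoints CLI.
# Assumptions: ANTsPy and pandas are installed; the point coordinates are placeholders;
# the transform paths come from the commands quoted above.
import ants
import pandas as pd

# Points must already be in ITK physical (LPS) coordinates.
pts = pd.DataFrame({"x": [-100.0, -250.0], "y": [-50.0, -300.0]})  # placeholder values

warped = ants.apply_transforms_to_points(
    dim=2,
    points=pts,
    transformlist=[
        "/results/BmANTs/1/trans0GenericAffine.mat",
        "/results/BmANTs/1/trans1InverseWarp.nii.gz",
    ],
    whichtoinvert=[True, False],  # invert only the affine, mirroring the -t [ affine , 1 ] CLI usage
)
print(warped)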
I played with ANTsPy, but unfortunately it requires CMake 3.11 and the latest version for Ubuntu is 3.10, so I am not able to run it on the university computation grid with my restricted access... :(
Upgrading CMake is super easy: they have a bootstrap build, or you can download a pre-compiled binary. There's no need to install into the system; you can just extract it and add the directory to your PATH.
yes, I have noticed that the ...
this is the image and points before transformation
What are we supposed to conclude from these pictures?
On Tue, Mar 12, 2019 at 3:34 AM Jirka Borovec wrote:
this is the image and points before transformation
[image: fig1]
<https://user-images.githubusercontent.com/6035284/54182292-6798c780-44a1-11e9-9e71-02a4eafea7ed.jpg>
and after transformation with these commands #733 (comment)
<#733 (comment)>
[image: fig2]
<https://user-images.githubusercontent.com/6035284/54182293-6798c780-44a1-11e9-9edc-73b83111e4e9.jpg>
--
brian
That using the estimated transformation (forward for the image and inverse for the points) does not give the same positions on the warped image compared to the initial state.
It would be easier if you can share the data so we can show you the right
approach or reproduce any issues you have.
On Tue, Mar 12, 2019 at 8:03 AM Jirka Borovec wrote:
that using the estimated transformation (direct for image and inverse for
points) does not give the same positions on the image after warping compare
to the initial state
--
brian
Thank you, here are the two input images and the landmarks - Rat_Kidney_.zip
I think the problem is the input points are not in ITK physical space coordinates. I did
antsRegistrationSyNQuick.sh -d 2 -f Rat_Kidney_HE.nii -m Rat_Kidney_PanCytokeratin.nii -t s -o panToHE
Then placed two points by hand:
x,y,z,t
-827,-218,0,0
-316,-482,0,0
Then
antsApplyTransformsToPoints -d 2 -i testPoints.csv -o testPointsHE.csv -t [ panToHE0GenericAffine.mat, 1 ] -t panToHE1InverseWarp.nii.gz
which produces
x,y,z,t
-848.452,-218.052,0,0
-325.176,-520.195,0,0
which looks to be correct.
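As a sanity check (not something done in the thread), the warped physical coordinates can be mapped back to pixel indices on the fixed image for overlay. SimpleITK is assumed here, and the file names follow the example above.

# Sketch: map warped physical (LPS) points back to pixel indices on the fixed
# image, e.g. to plot them over Rat_Kidney_HE for visual inspection.
# Assumption: SimpleITK is installed; file names follow the example above.
import csv
import SimpleITK as sitk

fixed = sitk.ReadImage("Rat_Kidney_HE.nii")

with open("testPointsHE.csv") as f:
    for row in csv.DictReader(f):
        phys = (float(row["x"]), float(row["y"]))
        # Inverse of TransformIndexToPhysicalPoint: physical point -> pixel index.
        print(phys, "->", fixed.TransformPhysicalPointToIndex(phys))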
Yes. This is the most common “point” of confusion.
On Tue, Mar 12, 2019 at 11:37 AM Philip Cook wrote:
I think the problem is the input points are not in ITK physical space
coordinates.
I did
antsRegistrationSyNQuick.sh -d 2 -f Rat_Kidney_HE.nii -m Rat_Kidney_PanCytokeratin.nii -t s -o panToHE
Then placed two points by hand
[image: image]
<https://user-images.githubusercontent.com/611507/54211999-a2f2c080-44b8-11e9-9e07-658bb96f3285.png>
x,y,z,t
-827,-218,0,0
-316,-482,0,0
Then
antsApplyTransformsToPoints -d 2 -i testPoints.csv -o testPointsHE.csv -t [ panToHE0GenericAffine.mat, 1 ] -t panToHE1InverseWarp.nii.gz
which produces
x,y,z,t
-848.452,-218.052,0,0
-325.176,-520.195,0,0
Which looks to be correct
[image: image]
<https://user-images.githubusercontent.com/611507/54212830-cec27600-44b9-11e9-87d8-450fa44f72f1.png>
--
brian
So it means that the point coordinates have to be negative and the origin is in the bottom right corner, correct? Which coordinates are used, ITK or NIfTI?
ANTs uses LPS coordinates. The conversion depends on what coordinates you
use.
On Tue, Mar 12, 2019 at 5:26 PM Jirka Borovec wrote:
so it means that the point's coordinate has to be negative valued and the
origin is in the right bottom corner, correct?
--
brian
I am using the same convention as ImageJ/Fiji, where the origin is in the top left corner and the position grows going down/right... LPS is Left, Posterior, Superior, which is used in DICOM images and by the ITK toolkit. Does that mean the origin of the coordinate system is in the bottom left corner?
Same as ITK. We don't know about ImageJ, etc.
It is up to the user to either use this framework as it is designed or figure out the conversion on their own.
Contributions of conversion tools are always welcome.
On Wed, Mar 13, 2019 at 6:26 PM Jirka Borovec wrote:
I am using the same as ImageJ/Fiji where the origin is in the top left
corner and going down/right the position grows... The LPS
<https://www.slicer.org/wiki/Coordinate_systems> is Left, Posterior,
Superior, which is used in DICOM images and by the ITK toolkit, right?
--
brian
I would recommend using ITK-SNAP as its coordinate system is consistent with ANTs usage.
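For anyone who has to do the ImageJ-pixel-to-LPS conversion themselves, the mapping ITK applies is physical = origin + direction * (spacing * index). A minimal numpy sketch follows; the origin, spacing and direction values are placeholders that should be read from the actual image header (e.g. with SimpleITK's GetOrigin/GetSpacing/GetDirection).

# Sketch: the index -> physical mapping that ITK (and hence ANTs) uses.
# The origin, spacing and direction values below are placeholders; read the
# real ones from the image header before converting your landmarks.
import numpy as np

origin = np.array([0.0, 0.0])            # image origin in physical (LPS) units
spacing = np.array([1.0, 1.0])           # pixel spacing
direction = np.array([[1.0, 0.0],        # direction cosine matrix
                      [0.0, 1.0]])

def index_to_physical(index_xy):
    # (x, y) pixel index (ImageJ-style, origin at top-left) -> ITK physical point
    idx = np.asarray(index_xy, dtype=float)
    return origin + direction @ (spacing * idx)

print(index_to_physical((62, 290)))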
@stnava @ntustison @cookpa @gdevenyi may I kindly ask what registration parameters you would recommend for such large images as in the ANHIR challenge? The image size is up to 18k x 18k pixels... I have started with the following:
...
Most of our collective experience is outside histological applications. That being said, I would start by downsampling the images to something like 512x512 voxels, which would probably give you the ability to optimize the linear transformations and much of the deformable part relatively quickly. Then you can work towards refinement by slowly incorporating the full resolution of your images and adding image registration "stages." One bit of caution: keep in mind the spacing of the images, as I see you're using mm to specify certain parameters, and performance is going to depend on what information is (or isn't) in the image headers.
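A rough sketch of that downsample-first strategy in ANTsPy, assuming grayscale (single-channel) images; the file names, the 512x512 target, and the transform type are placeholders rather than recommendations from this thread.

# Sketch: downsample large histology images before registration, following the
# advice above. File names, the 512x512 target and the transform type are
# placeholder assumptions; grayscale (single-channel) images are assumed.
import ants

fixed = ants.image_read("fixed_slide.png")
moving = ants.image_read("moving_slide.png")

# Resample both images to roughly 512x512 voxels for a quick first pass.
fixed_small = ants.resample_image(fixed, (512, 512), use_voxels=True)
moving_small = ants.resample_image(moving, (512, 512), use_voxels=True)

# Affine + deformable (SyN) registration on the downsampled pair.
reg = ants.registration(fixed=fixed_small, moving=moving_small, type_of_transform="SyN")
print(reg["fwdtransforms"])  # transforms to refine later at higher resolution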
The images are PNG/JPEG, so there is no information about spacing...
Right, which is why I advised caution. ITK will automatically assign a spacing value when no information is available, I think 1 mm isotropic. And when you downsample/upsample and do registration, performance will be reflected in the assigned spacing values.
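A small sketch of checking and overriding that default spacing before registration; SimpleITK, the file names, and the spacing value are assumptions, not taken from this thread.

# Sketch: inspect and override the spacing ITK assigns to a header-less PNG/JPEG
# (it defaults to 1 unit per pixel).
import SimpleITK as sitk

img = sitk.ReadImage("slide.png")
print("assigned spacing:", img.GetSpacing())   # typically (1.0, 1.0)

img.SetSpacing((0.002, 0.002))                 # placeholder: known pixel size, e.g. in mm
sitk.WriteImage(img, "slide_spaced.nrrd")      # NRRD keeps the spacing in the header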
just passing the ...
I think if you replace "mm" with "vox" in the smoothing sigmas, it will take care of the smoothing part. There is also downsampling of the image within the registration; that is what the shrink factors control. By downsampling offline, you can test registration parameters much more efficiently. Up to you though.