ACT modifications #311

Closed
Lestropie wants to merge 9 commits into master from act_modifications

Conversation

2 participants
Lestropie (Member) commented Aug 3, 2015

Need to make sure this isn't going to break something; this is kind of a crucial piece of functionality...

  • The original back-tracking algorithm was somewhat naive: each time a bad termination was encountered, an integer counter was incremented, and the streamline was truncated by a length determined by that counter. This meant that if back-tracking was used to traverse one problematic region, but another was then encountered, the length of truncation would immediately be large. The behaviour is now as follows (a sketch of this bookkeeping appears after this list):
    • The length of truncation is 'reset' if the streamline length exceeds that at the previous back-tracking event, i.e. the first problematic area has been passed.
    • If a truncation has occurred, but the streamline has not regained its previous maximum length before back-tracking is invoked again, the length of truncation is calculated with reference to that maximal streamline length, not the current length.
    • Each length of truncation is attempted three times before the length of truncation is increased (a trade-off between effectiveness and speed).
  • It is now possible to seed streamlines within sub-cortical grey matter (SGM). The streamline must reach white matter (WM) in order to be accepted. If tracking bi-directionally, only one of the two unidirectional streamline projections may enter the WM; if both attempt to do so, the second is treated as exiting the SGM as per the normal ACT priors.
  • When approaching the edge of the image, iFOD2 will now terminate with the EXIT_IMAGE flag if any calibrator path exits the image, rather than requiring that they all exit the image.
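
A minimal sketch of the truncation-length bookkeeping described in the first point above; the class and member names here are illustrative only, not the MRtrix3 implementation:

```cpp
#include <algorithm>
#include <cstddef>

class BacktrackState {
  public:
    static constexpr size_t attempts_per_length = 3;

    // Called whenever a bad termination triggers a back-tracking event;
    // 'current_length' is the streamline length (in points) at this event.
    // Returns the number of points to remove from the end of the streamline.
    size_t truncation_length (size_t current_length) {
      if (current_length > max_length_at_backtrack) {
        // The previous problematic region has been traversed:
        // reset to the smallest revert step.
        revert_step = 1;
        attempts_at_length = 0;
      }
      if (++attempts_at_length > attempts_per_length) {
        // The same revert step has now been tried three times:
        // truncate more aggressively from here on.
        ++revert_step;
        attempts_at_length = 1;
      }
      max_length_at_backtrack = std::max (max_length_at_backtrack, current_length);
      // Truncation is measured back from the maximal length reached so far,
      // not from the (possibly shorter) current length.
      const size_t target_length = (max_length_at_backtrack > revert_step) ?
                                    max_length_at_backtrack - revert_step : 0;
      return (current_length > target_length) ? current_length - target_length : 0;
    }

  private:
    size_t max_length_at_backtrack = 0;
    size_t revert_step = 1;
    size_t attempts_at_length = 0;
};
```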

Lestropie added some commits Jul 27, 2015

ACT: Modify back-tracking algorithm
Previously, whenever back-tracking was invoked, a simple integer counter was incremented and the streamline was shortened by that length.
The issue with this approach was that if back-tracking was invoked early during propagation, but was successful in finding a new streamline trajectory, another back-tracking event further along the trajectory would immediately use a large revert step.
This modification essentially resets the back-tracking length to 1 step whenever a new back-tracking event is encountered, giving the algorithm a better chance of successfully traversing multiple difficult regions.
ACT: Further changes to back-tracking
It is now possible to set the number of times that a back-tracking event must be initiated at (or below) a given streamline length before the length of truncation is increased.
ACT: Change back-tracking variable handling
The virtual method truncate_track() is now responsible for updating or clearing the 'pos' and 'dir' members.
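
As a rough illustration of that division of responsibility, a hypothetical sketch; the signature, Eigen types and internal logic are assumptions, not the actual MRtrix3 interface:

```cpp
#include <cmath>
#include <vector>
#include <Eigen/Dense>

// The point illustrated: whichever method performs the truncation is also
// responsible for leaving 'pos' and 'dir' in a consistent state afterwards.
class MethodBase {
  public:
    virtual ~MethodBase () { }

    // Remove 'revert_step' points from the end of 'tck'; afterwards, 'pos' and
    // 'dir' either describe the new endpoint or are invalidated.
    virtual void truncate_track (std::vector<Eigen::Vector3f>& tck, size_t revert_step)
    {
      for (size_t i = 0; i != revert_step && !tck.empty(); ++i)
        tck.pop_back();
      if (tck.size() >= 2) {
        pos = tck.back();
        dir = (tck.back() - tck[tck.size()-2]).normalized();
      } else {
        pos = dir = Eigen::Vector3f::Constant (NAN);
      }
    }

  protected:
    Eigen::Vector3f pos, dir;
};
```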

Lestropie self-assigned this Aug 3, 2015

Lestropie (Member) commented Aug 3, 2015

Closes #292.
Closes #131.

Lestropie added some commits Aug 3, 2015

ACT: Change behaviour at image edge
Change really only affects iFOD2.
Previously, all calibrator paths needed to exit the image in order to terminate.
Now, the streamline will be terminated with the EXIT_IMAGE flag if _any_ calibrator path exits the image.
This should prevent streamlines from being erroneously rejected or back-tracked at the inferior edge of the image (though the number of streamlines affected appears to have been small anyway).
Closes #103.
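
A sketch of the changed check; apart from the EXIT_IMAGE flag itself, the names here are illustrative rather than taken from the iFOD2 source:

```cpp
enum class term_t { CONTINUE, EXIT_IMAGE };

template <class PathList, class InImageFn>
term_t check_calibrator_paths (const PathList& calibrator_paths, InImageFn in_image)
{
  for (const auto& path : calibrator_paths) {
    if (!in_image (path))           // this candidate arc leaves the image...
      return term_t::EXIT_IMAGE;    // ...so flag termination immediately
  }
  return term_t::CONTINUE;          // previously, ALL paths had to exit first
}
```
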
tckgen: Two null-distribution algorithms
'Nulldist1' now behaves the same as the old 'Nulldist' algorithm, i.e. Euler integration with random directions.
The new 'Nulldist2' algorithm performs tracking in a similar manner to iFOD2 (i.e. along arcs), but using non-informative FODs; it should therefore be used to capture the null distribution when the iFOD2 tracking algorithm is to be used.
Closes #150.
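
A small sketch of the Nulldist1 behaviour described above (Euler integration along random directions); this is a conceptual illustration, not the tckgen code. Nulldist2 would instead sample iFOD2-style arcs, with every candidate assigned the same non-informative amplitude:

```cpp
#include <array>
#include <cmath>
#include <random>

// A uniformly distributed random unit direction, drawn by normalising an
// isotropic Gaussian sample; each Euler step then follows such a direction.
std::array<float,3> random_unit_direction (std::mt19937& rng)
{
  std::normal_distribution<float> gauss (0.0f, 1.0f);
  const float x = gauss (rng), y = gauss (rng), z = gauss (rng);
  const float norm = std::sqrt (x*x + y*y + z*z);
  return { x/norm, y/norm, z/norm };
}
```
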
iFOD2: Minor tweak to calibration
Previously the calibrator assumed a homogeneous delta function field.
This change considers that the FOD amplitude for different paths may change due to both the orientation and the position within the field, by assuming that the voxels adjacent to the calibration position have zero amplitude.
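
A rough illustration of the positional effect now accounted for, as a sketch under the stated assumption rather than the calibrator code itself: if every voxel adjacent to the calibration position has zero amplitude, then under trilinear interpolation the amplitude observed at an offset from the central voxel falls off according to that voxel's interpolation weight alone.

```cpp
#include <cmath>

// Trilinear weight of the central voxel at an offset of (dx, dy, dz) voxels;
// offsets are assumed to lie within one voxel of the centre.
float central_voxel_weight (float dx, float dy, float dz)
{
  return (1.0f - std::fabs (dx)) * (1.0f - std::fabs (dy)) * (1.0f - std::fabs (dz));
}
```
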
Lestropie (Member) commented on 531f028 Sep 22, 2015

@jdtournier This OK with you? Realistically it makes bugger-all difference to the calibration result, just spent a while hunting down issues and thought it was worth having it 'as right as possible'.

jdtournier (Member) replied Sep 22, 2015

Sorry, I'm a bit confused. Does this relate to #354? I'm assuming it doesn't, but then I'm not sure I understand the rationale behind these changes... In what way would this make the calibration 'right'? Is the idea to somehow 'sharpen' the distribution to be used in the calibration, so that the calibrator will suggest increased density of sampling during tracking? If that's the case, I think I can see why this might be a good idea. The situation you'd be using to calibrate is going to be a fairly common occurrence, and would lead to a sharper distribution - especially for large step sizes...

Does this reduce the incidence of calibrator failures...? And does it impact performance? I'd expect increased sampling density would have an impact on speed of execution... But if it helps improve the validity of the sampling, then that's that...

Lestropie (Member) replied Sep 22, 2015

They both came about from the same experimentation, but not 'related' per se.

The whole theory behind the calibrator is trying to capture the sharpest possible transitions in the data, so that you can estimate an upper bound from limited samples. But previously this was done assuming a delta function at every point along the length of the sampling arc; in reality, different arcs are going to sample different tri-linear data due to traversing different points in space, and this has the potential to increase the differential in amplitudes. By not taking this effect into account, your calibration data is smoother than the sharpest possible transitions in the empirical data, which is what I'm classifying as 'not right'.

For default parameters, number of calibration directions doesn't change, ratio barely changes. Don't think the calibrator failure instances changed much, but that was dominated by the effect targeted in #354 (currently getting ~25% calibrator failures in the SIFT phantom).
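
To make the "upper bound from limited samples" idea concrete, a conceptual sketch; this is not the iFOD2 code, and the names and structure are assumptions:

```cpp
#include <algorithm>
#include <vector>

// Rejection sampling needs an upper bound on the FOD amplitude along any
// candidate path, but only a limited set of calibration directions is ever
// evaluated. The calibration 'ratio' inflates the largest sampled amplitude so
// that the true peak, which may lie between the sampled directions, is unlikely
// to exceed the bound. If the calibration field is assumed smoother than the
// real data, the bound can be too tight, and samples that exceed it manifest
// as calibrator failures.
float rejection_sampling_bound (const std::vector<float>& calibration_amplitudes,
                                float ratio)
{
  float max_amp = 0.0f;
  for (float amp : calibration_amplitudes)
    max_amp = std::max (max_amp, amp);
  return ratio * max_amp;   // bound used to accept or reject candidate paths
}
```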

Lestropie (Member) replied Sep 22, 2015

Since we're here: I'm not convinced that applying the standard 0.1 threshold on the calibrator paths is appropriate. It's possible for there to exist a path within the range of possible paths that exceeds this threshold, but none of the calibrator paths do so. This is primarily what I was tinkering with today, but I think I need to address #354 before I can properly assess this effect.

jdtournier (Member) replied Sep 22, 2015

Ok, that's more or less what I'd gathered. Sounds like a good idea to me, go for it.

Also agree that we don't need to apply the cutoff threshold when calibrating. Feel free to go ahead and remove it.

Lestropie (Member) replied Sep 22, 2015

Well you probably want some kind of threshold; if it's an appropriate point at which to terminate, you'd prefer to terminate based on the calibrator rather than draw 1000 further samples before giving up. I was going to just apply the calibration ratio prior to applying the threshold, but that ratio applies to the probabilities, not the FOD amplitudes. And there's no unique path probability that corresponds to the FOD amplitude threshold. So not quite sure what to do about this just yet.


tckgen: Allow ignoring streamline count limits
- Allow the user to set -number 0 or -maxnum 0, and the program will ignore the relevant criterion.
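
A sketch of the intended semantics; the names are illustrative, not the tckgen code, and the convention assumed is that a value of zero disables the corresponding criterion:

```cpp
#include <cstddef>

bool keep_generating (size_t accepted,   // streamlines written so far
                      size_t attempted,  // streamlines generated so far
                      size_t number,     // value of -number (0 = no limit)
                      size_t maxnum)     // value of -maxnum (0 = no limit)
{
  if (number && accepted >= number)
    return false;   // desired number of output streamlines reached
  if (maxnum && attempted >= maxnum)
    return false;   // maximum number of generated streamlines reached
  return true;
}
```
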
Lestropie (Member) commented Dec 5, 2015

These changes have instead been merged into the updated_syntax branch (#225). Figure we might as well push all major changes in a single go. Still needs some testing to make sure it all works as expected though, since this was all written pre-updated_syntax.

Lestropie closed this Dec 5, 2015

Lestropie deleted the act_modifications branch Mar 31, 2016
