Prevent tckmap hang on Windows #371

Merged
merged 3 commits into master from tckgen_thread_fix on Oct 5, 2015

Conversation

Lestropie (Member)

Make use of Thread::batch() to reduce the load on the multi-threading queues, which can hang under heavy processing on Windows.
Related to #255.
For non-dynamic-seeding tracking, the default multi-threading batch size of 128 was introducing too much jitter into the progressbar updates.
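For readers unfamiliar with the batching trick: the idea is to amortise queue synchronisation costs by moving items through the queue in groups rather than one at a time, so the queue lock is taken once per batch instead of once per item. A minimal sketch in Python (illustrative only; MRtrix3's actual Thread::batch() is a C++ facility whose interface is not reproduced here, and the function names below are made up for this example):

```python
import queue
import threading

def batched_producer(q, items, batch_size=128):
    """Push items onto the queue in batches, so the queue's lock is
    acquired once per batch instead of once per item."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            q.put(batch)
            batch = []
    if batch:
        q.put(batch)   # flush the final partial batch
    q.put(None)        # sentinel: no more data

def consumer(q, results):
    while True:
        batch = q.get()
        if batch is None:
            break
        results.extend(batch)   # one dequeue handles a whole batch

q = queue.Queue()
results = []
worker = threading.Thread(target=consumer, args=(q, results))
worker.start()
batched_producer(q, range(1000), batch_size=128)
worker.join()
assert results == list(range(1000))
```

The trade-off mentioned above follows directly from this design: a larger batch size means less synchronisation overhead but chunkier, more delayed delivery of items, which is what introduced the progressbar jitter.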
@jdtournier (Member)

I noticed the checks for tckgen didn't pass the first time round; I'm assuming this is due to the stochastic nature of the beast... I got TravisCI to re-run the tests, and they passed fine the second time round. But maybe we want to look into modifying the tests to at least reduce the probability of failure...?

No big deal for this merge though, let's hit the button...

jdtournier added a commit that referenced this pull request Oct 5, 2015
@jdtournier jdtournier merged commit 14c576b into master Oct 5, 2015
@jdtournier jdtournier deleted the tckgen_thread_fix branch October 5, 2015 12:35
@Lestropie (Member, Author)

Yeah, I still need to tweak them slightly. The initial thread batching made the dynamic seeding test fail because it introduced too much delay in receiving tracks and updating the seed probabilities :-/ I'll up the numbers on a couple of them and see how we go.

But I reckon that if my TDI-based testing still fails on a regular basis, testing Hausdorff distances with probabilistic tracking is doomed to failure...

@jdtournier (Member)

> But I reckon that if my TDI-based testing still fails on a regular basis, testing Hausdorff distances with probabilistic tracking is doomed to failure...

May very well be true... The reason I'd be hopeful, though, is that the idea would be to generate a large set of test data (say 10k streamlines), and during testing generate only maybe 250 streamlines. The chance of generating a streamline that doesn't have a sufficiently close match in the test data should be minimal. And we would allow some small number of failures to match anyway...

But this is all speculation. I need to give this a crack - just need to find the time...
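The matching scheme sketched above could be prototyped along these lines (a toy illustration with short 2D point lists standing in for streamlines; `directed_hausdorff`, `has_match`, and the tolerance value are invented for this example and are not part of the MRtrix3 test suite):

```python
import math

def directed_hausdorff(a, b):
    """Max over points of a of the distance to the nearest point of b."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def has_match(track, reference_set, tol):
    """True if some reference streamline lies within tol of this track."""
    return any(directed_hausdorff(track, ref) <= tol for ref in reference_set)

# Toy data: reference_set plays the role of the 10k pre-generated
# streamlines; `candidate` is one of the ~250 test streamlines.
reference_set = [
    [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
    [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)],
]
candidate = [(0.0, 0.1), (1.0, 0.1), (2.0, 0.1)]
assert has_match(candidate, reference_set, tol=0.5)
```

The test would then count how many of the generated streamlines fail `has_match` and only fail overall if that count exceeds a small allowance, which is the "allow some small number of failures" idea above.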

@Lestropie (Member, Author)

But once you've allowed for enough error to not fail spuriously, your tests are probably only going to be sensitive to outright tracking failures and not small regressions; and the existing tracking tests on the SIFT phantom should catch those. The whole idea just gives me nightmarish flashbacks of trying to do streamline clustering... and look how that panned out. :-P

Alternatively if the main goal is to test tracking on real data rather than just a phantom, maybe specify a seed point, do your tracking, generate the TDI, and look for differences in that?
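That alternative could be prototyped as a pure-Python toy (in practice one would run the actual tracking and map the result to a track density image with tckmap; the `tdi` and `tdi_difference` helpers here are invented for illustration, with streamlines given as lists of 3D points):

```python
from collections import Counter

def tdi(streamlines, voxel_size=1.0):
    """Crude track density image: for each voxel, count how many
    streamlines pass through it."""
    counts = Counter()
    for track in streamlines:
        # Each streamline contributes at most once per voxel it visits.
        visited = {tuple(int(c // voxel_size) for c in p) for p in track}
        for vox in visited:
            counts[vox] += 1
    return counts

def tdi_difference(a, b):
    """Total absolute per-voxel count difference between two TDIs."""
    return sum(abs(a[v] - b[v]) for v in set(a) | set(b))

tracks_a = [[(0.2, 0.3, 0.1), (1.4, 0.2, 0.0)],
            [(0.1, 1.2, 0.0), (1.8, 1.1, 0.2)]]
tracks_b = [[(0.3, 0.4, 0.2), (1.5, 0.1, 0.1)],   # same voxels as tracks_a
            [(0.2, 1.3, 0.1), (1.9, 1.0, 0.3)]]
assert tdi_difference(tdi(tracks_a), tdi(tracks_b)) == 0
```

A regression test would threshold `tdi_difference` (or a normalised variant) between the freshly generated TDI and a stored reference, sidestepping per-streamline matching entirely.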
