BUG: prevent crash due to overflow for large int vectors by circumventing VNL dot_product #2
ITK-Snap crashed when opening large datasets. This appears to be caused by an integer overflow when calculating a pointer index. The function used VNL's dot_product, which does not check for overflow (http://itk.org/gitweb?p=ITK.git;a=blob;f=Modules/ThirdParty/VNL/src/vxl/core/vnl/vnl_sse.h;h=04f930ef05bb49036e1fa69d6fe9ae9a697e2d45;hb=HEAD#l189). Doing the calculation without VNL makes it possible to cast each operand to size_t before the multiplication.
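For illustration, here is a minimal sketch of the kind of change involved (the function name and signature are made up; the actual ITK-Snap code differs):

```cpp
#include <cstddef>

// Minimal sketch, not the actual ITK-Snap code: compute a linear
// pointer offset from a 3D index and the image strides. VNL's
// dot_product on int vectors multiplies in int and silently wraps
// around for large images; casting each operand to size_t first
// keeps the whole computation in size_t (64-bit on the platforms
// where such datasets can be loaded at all).
inline size_t ComputeLinearOffset(const int index[3], const int stride[3])
{
  size_t offset = 0;
  for (int i = 0; i < 3; ++i)
    offset += static_cast<size_t>(index[i]) * static_cast<size_t>(stride[i]);
  return offset;
}
```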
This solved the issue for me. I can now load a grey image of about 10GB plus a 40GB segmentation into ITK-Snap, modify it (even in gWS mode) and then render the segmentation without any problems.
Perhaps using size_t for the vectors used in the index calculation (instead of Vector3i) would be a cleaner approach. I'm not sure, but the casting solution might slow down ITK-Snap, even though it is only needed for big datasets. Perhaps an overflow pre-check could be introduced so that dot_product is still used whenever it is safe, as sketched below.
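Such a pre-check might look roughly like this (purely illustrative; the name and structure are assumptions): verify that the dot product fits in int before taking the fast VNL path, and fall back to the size_t computation otherwise.

```cpp
#include <climits>

// Illustrative pre-check (name is an assumption): returns true if the
// dot product of two int 3-vectors fits in int, in which case VNL's
// dot_product could still be used; otherwise the caller would fall
// back to the size_t-based computation. Products and the running sum
// are accumulated in long long, which cannot overflow here because
// each partial sum is bounds-checked before the next addition.
inline bool DotProductFitsInInt(const int a[3], const int b[3])
{
  long long sum = 0;
  for (int i = 0; i < 3; ++i)
  {
    sum += static_cast<long long>(a[i]) * static_cast<long long>(b[i]);
    if (sum > INT_MAX || sum < INT_MIN)
      return false;
  }
  return true;
}
```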
I'm not sure it is a good idea to encourage users to use ITK-Snap for such big datasets. But if they shouldn't, ITK-Snap should at least throw an exception saying that the dataset to be loaded is too big, instead of crashing.
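A minimal sketch of such a guard (entirely hypothetical, not existing ITK-Snap code):

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical guard, not existing ITK-Snap code: refuse to load a
// dataset whose voxel count would overflow the index arithmetic,
// instead of crashing later on a wrapped pointer offset.
inline void CheckImageSize(uint64_t nx, uint64_t ny, uint64_t nz,
                           uint64_t maxVoxels)
{
  // Divide instead of multiplying so the check itself cannot overflow.
  if (ny != 0 && nz != 0 && nx > maxVoxels / ny / nz)
    throw std::runtime_error("Dataset is too large to be loaded safely");
}
```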