Describe the bug
On master currently, when using the PatchInferer class with an AvgMerger (the default Merger class) and a filter_fn, the counts will be zero everywhere the filter_fn filters out a region. Then, when AvgMerger.finalize() is called, the self.values attribute of AvgMerger is divided in place by the self.counts tensor. This is an issue, since self.counts is initialised to zero, and division by zero produces NaNs. So, everywhere that a filter_fn successfully filters a region, we get NaN outputs.
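The mechanic can be sketched with plain tensors standing in for AvgMerger's internal buffers (the 4x4 shape and the covered region are illustrative, not taken from the actual code):

```python
import torch

# Mock of AvgMerger's aggregation buffers: values accumulates patch
# outputs, counts tracks how many patches contributed to each position.
values = torch.zeros(4, 4)
counts = torch.zeros(4, 4)

# Suppose only the top-left 2x2 region received patches; the rest was
# skipped by a filter_fn, so its counts stay at their initial zero.
values[:2, :2] += 1.0
counts[:2, :2] += 1.0

# finalize() divides in place: zero counts give 0/0, i.e. NaN, in the
# filtered region, while the covered region averages correctly.
averaged = values.div_(counts)
print(torch.isnan(averaged[2:, 2:]).all())  # filtered region is all NaN
```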
A quick in-place assignment to counts (setting counts to 1, for example) would turn all of these values into zeros after the in-place division, but if the output is supposed to be real-valued/continuous, it might be better to overwrite these positions in place with the smallest representable value (using torch.finfo(self.values.dtype).min or something similar). Monkey-patching the outputs from an Inferer isn't a good situation, since a network can produce NaNs due to weights exploding or overflow during training, and masking this by overwriting NaNs with zero would merely obfuscate that problem.
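The suggested sentinel approach could look roughly like this (again mocking the buffers; the clamp-then-fill ordering is one possible implementation, not the actual AvgMerger code):

```python
import torch

# Same setup as before: only the top-left 2x2 region was covered.
values = torch.zeros(4, 4)
counts = torch.zeros(4, 4)
values[:2, :2] += 1.0
counts[:2, :2] += 1.0

# Record which positions were never covered, avoid the 0/0 by clamping
# counts to at least 1, then fill the uncovered positions with a
# sentinel (the dtype's minimum) instead of letting them become NaN.
uncovered = counts == 0
averaged = values.div_(counts.clamp_(min=1))
averaged[uncovered] = torch.finfo(averaged.dtype).min
```

This keeps NaN free to signal genuine numerical problems in the network, while filtered regions carry an unambiguous sentinel value.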
Hi @nicholas-greig, sorry for the late response. After taking a look at your code, I think the problem is that you set the filter_fn in the Splitter, which is used to filter patches. If you set it to None, then it will work as you expected.