Hi,
I know it's a bold claim, but I think there is an issue with the autograd functionality in the forward nufft operation. To reproduce the problem, I modified cell [8] in the Basic Example in the following way:
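(The modified cell's code was not captured in this copy of the thread. A minimal sketch of the kind of check described, assuming the current torchkbnufft interface (`tkbn.KbNufft`) and made-up `im_size`/`klength` values, might look like the following; the original cell was presumably written against the v0.x API.)

```python
import math

import torch
import torchkbnufft as tkbn

im_size = (256, 256)   # assumed image size; the notebook's actual value may differ
klength = 512          # assumed trajectory length

# Random k-space trajectory in radians, shape (ndims, klength).
ktraj = torch.rand(2, klength) * 2 * math.pi - math.pi

nufft_ob = tkbn.KbNufft(im_size=im_size)

# Complex-valued image with gradient tracking enabled.
image = torch.randn(1, 1, *im_size, dtype=torch.complex64, requires_grad=True)

# Reported behavior: True on the first call, False on every subsequent call.
for _ in range(3):
    kdata = nufft_ob(image, ktraj)
    print(kdata.requires_grad)
```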
The first time I executed the cell, the output was True, as expected, but every subsequent time it printed False. I am not an expert in torch autograd, but I would expect the output to have the same requires_grad value as the input.
I also did the same test with the adjoint nufft (cell [10] modified; see the sketch after this paragraph), and there the output was always True. The same holds for the examples in v0.3.4. Could it be that nufft_ob's internal state changes with every call?
Thanks a lot!
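(The adjoint check was likewise not captured. Under the same assumptions, and using `tkbn.KbNufftAdjoint` as the adjoint module, it might look like:)

```python
import math

import torch
import torchkbnufft as tkbn

im_size = (256, 256)   # assumed values, as above
klength = 512

ktraj = torch.rand(2, klength) * 2 * math.pi - math.pi
adjnufft_ob = tkbn.KbNufftAdjoint(im_size=im_size)

# Complex-valued k-space data with gradient tracking enabled.
kdata = torch.randn(1, 1, klength, dtype=torch.complex64, requires_grad=True)

# Reported behavior: True on every call, unlike the forward case.
for _ in range(3):
    image = adjnufft_ob(kdata, ktraj)
    print(image.requires_grad)
```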
Hello @ajlok3, this should be fixed by PR #23. It was an incredibly weird bug caused by my applying unsqueeze to the scaling coefficients. The unsqueeze actually isn't necessary, and the bug seems to have been fixed by removing it.
I've just released an update; I think you can get it with `pip install --upgrade torchkbnufft`. Let me know if you still have the issue.