In order to make my code clean and easy to read, I tried to reimplement covpool, sqrtm and triuvec with native PyTorch operators as simple plain Python functions, as shown in #7.
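For reference, the plain-function version of covpool looks roughly like this (a sketch, not the exact code from #7; the input shape and the centering-matrix formulation are my assumptions):

```python
import torch

def covpool(x):
    """Covariance pooling with native ops only; autograd derives backward.

    Assumed input: x of shape (batch, channels, M), where M is the number
    of spatial positions (h * w) flattened into one dimension.
    """
    batch, channels, m = x.shape
    # Centering matrix I_hat = (1/M) * (I - (1/M) * ones(M, M))
    i_hat = (torch.eye(m, device=x.device, dtype=x.dtype)
             - torch.ones(m, m, device=x.device, dtype=x.dtype) / m) / m
    # Sigma = X * I_hat * X^T, one (channels, channels) matrix per sample
    return x.bmm(i_hat.expand(batch, m, m)).bmm(x.transpose(1, 2))
```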
After verifying that the forward and backward results are equivalent between my auto-backward version (using the autograd engine) and your manual-backward version (using autograd.Function), I tested their speed and was surprised to find my auto-backward version slower.
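Roughly, I checked equivalence and timed the two versions like this (a sketch; `covpool_auto` and `CovpoolManual` are placeholder names for the two implementations, and a CUDA device is assumed):

```python
import time
import torch

def benchmark(fn, x, iters=100):
    """Average time per forward + backward pass; assumes x is on CUDA."""
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        y = fn(x)
        y.sum().backward()
        x.grad = None  # reset accumulated gradient between iterations
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

x = torch.randn(32, 256, 196, device='cuda', requires_grad=True)

# Forward outputs should match
y_auto = covpool_auto(x)              # placeholder: plain-function version
y_manual = CovpoolManual.apply(x)     # placeholder: autograd.Function version
assert torch.allclose(y_auto, y_manual, atol=1e-5)

# Gradients w.r.t. the same input should match as well
g_auto = torch.autograd.grad(y_auto.sum(), x)[0]
g_manual = torch.autograd.grad(y_manual.sum(), x)[0]
assert torch.allclose(g_auto, g_manual, atol=1e-5)

print('auto  :', benchmark(covpool_auto, x))
print('manual:', benchmark(CovpoolManual.apply, x))
```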
Have you compared these two approaches before? Do you have any idea why the manual backward implementation is faster than PyTorch's autograd engine?
@WarBean I also noticed this phenomenon before, but I have not tested it extensively to figure out why the manual bp implementation is faster. Actually, we provided the bp code to ensure that the backward pass works correctly, because earlier versions of PyTorch (<0.4) produced wrong results. So I cautiously suspect that the autograd engine may not perform optimally here, and that many "extra operations" are needed to maintain its generality.
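For context, the manual bp pattern looks roughly like this (a sketch of the autograd.Function style, not the exact repository code):

```python
import torch
from torch.autograd import Function

class Covpool(Function):
    """Covariance pooling with a hand-written backward (illustrative sketch)."""

    @staticmethod
    def forward(ctx, x):
        batch, channels, m = x.shape
        i_hat = (torch.eye(m, device=x.device, dtype=x.dtype)
                 - torch.ones(m, m, device=x.device, dtype=x.dtype) / m) / m
        i_hat = i_hat.expand(batch, m, m)
        y = x.bmm(i_hat).bmm(x.transpose(1, 2))
        # Save only the two tensors backward needs; the autograd engine
        # would instead record every intermediate of the op-by-op graph.
        ctx.save_for_backward(x, i_hat)
        return y

    @staticmethod
    def backward(ctx, grad_y):
        x, i_hat = ctx.saved_tensors
        # For Y = X * I_hat * X^T with symmetric I_hat, the gradient is
        # dL/dX = (G + G^T) * X * I_hat, computed in two batched matmuls.
        grad_x = (grad_y + grad_y.transpose(1, 2)).bmm(x).bmm(i_hat)
        return grad_x
```

The hand-written version computes the whole gradient in a few fused batched matmuls and saves only `x` and `i_hat`, whereas the autograd engine must record and replay each elementary operation separately, which plausibly accounts for part of the speed gap.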