Implementing it in the EMAU module might look cleaner. But since \mu has to be averaged over the whole batch, implementing it inside the module would require a 'reduce' operation across GPUs, as in SyncBN. So I simply put that line in 'train.py', where the \mu from all GPUs have already been gathered.
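To make the update concrete, here is a minimal sketch of what that line in train.py does, assuming the per-image \mu estimates from all GPUs have already been gathered into one tensor (the function name `update_mu` and the momentum value are illustrative, not the exact repo code):

```python
import torch


def update_mu(running_mu, batch_mu, momentum=0.9):
    """Moving-average update of the EM bases.

    running_mu: (1, C, K) persistent bases kept across iterations.
    batch_mu:   (B, C, K) bases estimated in the forward pass,
                already gathered from all GPUs.
    """
    with torch.no_grad():
        # Average over the whole (multi-GPU) batch, as SyncBN would.
        mu_avg = batch_mu.mean(dim=0, keepdim=True)
        # Exponential moving average, updated in place.
        running_mu.mul_(momentum).add_(mu_avg * (1 - momentum))
    return running_mu
```

Doing this in the training loop avoids a custom all-reduce inside the module, at the cost of one extra line in train.py.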
To be honest, there is still a lack of deep exploration of what happens inside the EMA iterations. EMANet is just a first exploration of the EM + Attention mechanism, so I look forward to deeper analyses from interested followers.
And how do you handle gradient backpropagation in your implementation?