I am adapting from a light dataset to a darker one. When I run the test on the source domain, I found that performance is severely degraded after calling model.eval(), but this doesn't happen on the target domain. It is quite weird. My PyTorch version is 1.0.
It seems the gamma and beta in batchnorm are still being updated (we observed this before as well), but we cannot control it. Some degradation on the source domain is natural compared to a model trained without target-domain alignment. However, it should not be extremely bad, since there is still a supervised loss on the source domain.
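For context, a minimal sketch of the two kinds of BatchNorm state involved here (the layer and batch are hypothetical): gamma/beta (`bn.weight`, `bn.bias`) only change through an optimizer step, while the running statistics are buffers that update on every forward pass in train mode, even with gradients disabled.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(4)
x = torch.randn(8, 4) * 3 + 5  # hypothetical batch with nonzero mean/variance

bn.train()
with torch.no_grad():  # no gradients, no optimizer step
    bn(x)

print(bn.running_mean)    # already moved toward the batch mean
print(bn.weight, bn.bias) # gamma/beta unchanged without an optimizer step
```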
No, the result is extremely bad. Maybe this is due to the large domain gap. In the last iterations the model is trained on the target domain, so the batchnorm statistics (running_mean, running_var) adapt to the target domain at the same time. When we then test on the source domain, those statistics no longer match the data, and performance drops.
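A minimal, self-contained sketch of this mismatch (the two Gaussian batches are hypothetical stand-ins for the light and dark domains):

```python
import torch
import torch.nn as nn

source = torch.randn(64, 8) * 1.0 + 0.0   # "light" source domain
target = torch.randn(64, 8) * 0.2 - 2.0   # "dark" target domain

bn = nn.BatchNorm1d(8, affine=False)

bn.train()
with torch.no_grad():
    for _ in range(100):
        bn(target)  # last iterations train on target, so the running
                    # statistics converge toward target statistics

bn.eval()           # eval() normalizes with the stored running statistics
out = bn(source)    # source batch normalized with target statistics
print(out.mean().item(), out.var().item())  # far from (0, 1): mismatch
```

If this is the cause, one hedged workaround from the domain-adaptation literature is to keep separate running statistics per domain (domain-specific batch normalization), or to re-estimate the statistics on the source domain before evaluating there.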