
The Number of Forward Pass #3

Closed
kinredon opened this issue Apr 23, 2021 · 4 comments

@kinredon

As the paper says, tent needs 2× the inference time plus 1× the gradient time per test point, but I found only one forward pass and one gradient update in the code:

tent/tent.py

Line 49 in 03ac55c

def forward_and_adapt(x, model, optimizer):

Is the right order: forward, backward, and then forward again?
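
For reference, here is a minimal sketch of a single-pass forward-and-adapt step of this kind (PyTorch; the `softmax_entropy` helper and the exact update details are illustrative assumptions based on the paper, not a verbatim copy of the repo's code):

```python
import torch

def softmax_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-sample entropy of the softmax prediction (lower = more confident)."""
    return -(logits.softmax(dim=1) * logits.log_softmax(dim=1)).sum(dim=1)

@torch.enable_grad()
def forward_and_adapt(x, model, optimizer):
    """One test-time step: a single forward pass, then an entropy-minimization update."""
    outputs = model(x)                       # single forward pass; also the returned predictions
    loss = softmax_entropy(outputs).mean(0)  # average entropy over the batch
    loss.backward()                          # gradients reach only the parameters in the optimizer
    optimizer.step()
    optimizer.zero_grad()
    return outputs                           # predictions from *before* the parameter update
```

In this single-pass version the returned predictions come from the pre-update parameters, which is exactly the ordering the question is about.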

@AI678

AI678 commented Apr 29, 2021

It seems that the performance is insensitive to this extra forward pass.

@kinredon
Author

In my view, the model's BN layer parameters should first be updated using entropy minimization, and then the predictions should be made, which would achieve better performance. However, this implementation uses the output of the first forward pass, which confuses me.

@shelhamer
Collaborator

shelhamer commented Apr 30, 2021

Please see the published edition of the paper at ICLR'21, where we have updated the method regarding the number of forward passes:

[image: excerpt from the ICLR'21 paper describing the updated number of forward passes]

In further experiments we found that the results are insensitive to re-forwarding after the update. In practice, tent often requires only a few updates to adapt to the shifts in our experiments, so repeating inference is not necessary. The update on the last batch still improves prediction on the next batch. Note that this shows the adaptation learned by tent generalizes across target points, as it makes the prediction before taking the gradient, so its improvement is not specific to each test batch (see this review comment for more discussion).

> Is the right order: forward, backward, and then forward again?

If you want to include the final forward, to have the most up-to-date predictions with respect to entropy minimization, then you can simply add `outputs = self.model(x)` after the forward-and-adapt loop:
https://github.com/DequanWang/tent/blob/master/tent.py#L30-L31
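
For illustration, building on the single-pass sketch above (and reusing its `forward_and_adapt`), a wrapper with that extra forward pass could look like the following; the `TentLikeWrapper` class and its attribute names are hypothetical, not the repo's actual `Tent` class:

```python
import torch

class TentLikeWrapper(torch.nn.Module):
    """Illustrative test-time adaptation wrapper with an extra forward after adapting."""
    def __init__(self, model, optimizer, steps=1):
        super().__init__()
        self.model = model
        self.optimizer = optimizer
        self.steps = steps

    def forward(self, x):
        # adapt: each step forwards once and applies the entropy-minimization update
        for _ in range(self.steps):
            outputs = forward_and_adapt(x, self.model, self.optimizer)
        # the extra forward pass suggested above: re-predict with the just-updated
        # parameters so the returned predictions reflect the adaptation on this batch
        outputs = self.model(x)
        return outputs
```

The only change relative to the single-pass behavior is the final `self.model(x)` call, at the cost of one additional inference pass per batch.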

Thank you for your question about adaptation with and without repeating inference!

@Jo-wang

Jo-wang commented Mar 17, 2022

Thank you! This helps me a lot.
