
Possible bug in meanteacher algorithm #102

Closed
5v3D3 opened this issue May 3, 2023 · 4 comments
Labels
bug Something isn't working

Comments


5v3D3 commented May 3, 2023

In meanteacher's train_step (line 55 of the source file), logits_x_ulb_s is assigned the logits of outs_x_ulb_w. As a result, the consistency loss is computed between logits_x_ulb_w and itself, which yields a consistency loss of 0 in every iteration; outs_x_ulb_s is never used.

outs_x_ulb_s = self.model(x_ulb_s)
logits_x_ulb_s = outs_x_ulb_w['logits']  # bug: reads from the weak-branch outputs
feats_x_ulb_s = outs_x_ulb_w['feat']     # bug: reads from the weak-branch outputs

This should be:

outs_x_ulb_s = self.model(x_ulb_s)
logits_x_ulb_s = outs_x_ulb_s['logits']  # fixed: read from the strong-branch outputs
feats_x_ulb_s = outs_x_ulb_s['feat']
Hhhhhhao (Collaborator) commented

This might be caused by a typo. Have you run the modified code? Does the result differ from the reported results?


5v3D3 commented May 15, 2023

Sorry, but I haven't run the modified code on any of the benchmarks, since right now I am only working locally with my own dataset. With the modified code I do get an unsup_loss greater than 0, though, so I can only suspect that it would perform better.
I also had a look at your results, and meanteacher's performance looked a bit odd: it is quite similar to supervised (both in classic_cv and usb_cv), while most other methods perform better. That would make sense if only the supervised loss is used in optimization.

Hhhhhhao (Collaborator) commented

Will check this in next update.

Hhhhhhao added the bug (Something isn't working) label May 17, 2023
Hhhhhhao added a commit that referenced this issue Jul 19, 2023
Hhhhhhao (Collaborator) commented

Fixed in PR #135

Hhhhhhao added a commit that referenced this issue Jul 20, 2023
* [Update] resolve requirements.txt conflicts

* [Fix] Fix mean teacher bug in #102

* [Fix] Fix DebiasPL bug

* [Fix] Fix potential sample data bug in #119

* [Update] Add auto issue/pr closer

* [Update] Update requirements.txt

* [Fix] Fix bug in #74

* [Fix] Fix amp lighting bug in #123

* [Fix] Fix notebook bugs

* [Update] release semilearn 0.3.1
5v3D3 closed this as completed Jul 20, 2023