Unsupervised data is only used to improve the performance of the Encoder part? #51

Closed
wang-hairui opened this issue Dec 9, 2021 · 1 comment


@wang-hairui

Hi Yassine,

I noticed your answer to #43 explaining that the unsupervised loss is not back-propagated through the main decoder; you used .detach() to achieve this.

From my understanding, the main decoder is not affected by the unsupervised loss. Does this mean that only the encoder benefits from the unsupervised training? At inference time we only use the main decoder, so the auxiliary decoders cannot influence performance on test data.
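
To make sure I understand, here is a minimal sketch of how I read the unsupervised path (module names like `encoder`, `main_decoder`, and `aux_decoders` are just placeholders, and the feature perturbations are omitted for brevity):

```python
import torch
import torch.nn.functional as F

def unsupervised_loss(x_unlabeled, encoder, main_decoder, aux_decoders):
    feats = encoder(x_unlabeled)  # shared encoder features
    # The main decoder's prediction is the consistency target, but it is
    # detached, so no gradient from this loss reaches the main decoder.
    target = torch.softmax(main_decoder(feats), dim=1).detach()
    loss = 0.0
    for aux_dec in aux_decoders:
        # Gradients from each auxiliary prediction flow into the encoder
        # and that auxiliary decoder only.
        pred = torch.softmax(aux_dec(feats), dim=1)
        loss = loss + F.mse_loss(pred, target)
    return loss / len(aux_decoders)
```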

Thanks and regards.

@yassouali
Owner

Hi @wang-hairui.

Yes, that is exactly correct: only the encoder is trained with the unsupervised loss, and the main decoder, which is used in the testing phase, is only trained with labeled examples.

Also note that the decoder mainly performs a sort of learned interpolation (upsampling), so it makes sense to learn that interpolation only on clean, labeled features, while the encoder, which carries the majority of the modeling capacity, is trained in a semi-supervised manner.
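
In other words, the test-time path is just the encoder followed by the main decoder; the auxiliary decoders are discarded. A minimal sketch, using the same placeholder names as above:

```python
import torch

@torch.no_grad()
def predict(x, encoder, main_decoder):
    # The auxiliary decoders are dropped at test time; only the encoder and
    # the main decoder produce the final segmentation.
    return main_decoder(encoder(x)).argmax(dim=1)
```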
