
Could you verify the implementation? Acc = 11% after domain adaptation #27

Open
lindagaw opened this issue Jun 6, 2021 · 5 comments

lindagaw commented Jun 6, 2021

=== Evaluating classifier for encoded target domain ===

only source <<<
Avg Loss = 14.961788177490234, Avg Accuracy = 56.140000%
source and target <<<
Avg Loss = 8366.6220703125, Avg Accuracy = 11.350000%

I got accuracy = 11% after domain adaptation.


zzzpc commented Aug 25, 2021

Setting d_learning_rate = 1e-3 and c_learning_rate = 1e-5 gives a good result:

source only <<<
Avg Loss = 1980.854672080592, Avg Accuracy = 75.000000%
domain adaption <<<
Avg Loss = 97.09941973184284, Avg Accuracy = 92.903227%
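
For reference, here is a minimal sketch of how those two rates could be wired into the optimizers. Only d_learning_rate and c_learning_rate come from the numbers above; the network definitions and Adam betas are placeholders to make the snippet self-contained, not the repository's actual code:

```python
import torch.nn as nn
import torch.optim as optim

# The two learning rates suggested above; everything else is illustrative.
d_learning_rate = 1e-3   # discriminator ("critic") learning rate
c_learning_rate = 1e-5   # target encoder learning rate
betas = (0.5, 0.9)

# Stand-ins for the ADDA discriminator and target encoder.
critic = nn.Sequential(nn.Linear(500, 500), nn.ReLU(), nn.Linear(500, 2))
tgt_encoder = nn.Sequential(nn.Linear(784, 500), nn.ReLU())

# Discriminator is trained with the larger rate, the target encoder with the
# much smaller one.
optimizer_critic = optim.Adam(critic.parameters(), lr=d_learning_rate, betas=betas)
optimizer_tgt = optim.Adam(tgt_encoder.parameters(), lr=c_learning_rate, betas=betas)
```

The rough intuition is that the target encoder then changes slowly relative to the discriminator, which tends to keep the adversarial signal informative rather than letting the adaptation collapse.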

Hcshenziyang commented

I got accuracy = 8% after domain adaptation…


ghost commented Mar 15, 2022

Did you solve this issue? I'm also facing low DA accuracy.

yuhui-zh15 commented

See #29

mashaan14 commented

I think the low adaptation accuracy is caused by the target encoder swapping the class labels. This makes sense because adaptation is unsupervised: the target encoder never sees the class labels.

I've used this code on 2D data:
https://github.com/mashaan14/ADDA-toy

You can see in the attached image that the target encoder separates the classes well. But the class labels were swapped.

[Figure: "Testing target data using target encoder" — the target-encoded classes are well separated, but the labels are swapped.]
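
A quick way to check whether label swapping (rather than a failed adaptation) explains the near-chance accuracy is to score the target predictions under the best one-to-one relabeling of classes (Hungarian matching). A minimal sketch, assuming the predictions and ground-truth labels for the target test set are available as integer arrays; the helper name and the toy data below are made up for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def permutation_accuracy(y_true, y_pred, num_classes):
    """Accuracy after remapping predicted labels with the best
    one-to-one class assignment (Hungarian algorithm)."""
    # counts[i, j] = how often true class i was predicted as class j
    counts = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[t, p] += 1
    # linear_sum_assignment minimizes cost, so negate to maximize matches
    row_ind, col_ind = linear_sum_assignment(-counts)
    return counts[row_ind, col_ind].sum() / len(y_true)

# Toy example: predictions are perfect except that labels 0 and 1 are swapped
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])
print(permutation_accuracy(y_true, y_pred, num_classes=3))  # -> 1.0
```

If the permutation-corrected accuracy is high while the raw accuracy is around chance, the target encoder has learned a good separation but mapped classes to the wrong regions of the source feature space, which matches what the plot shows.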
