🐛 Bug
When I run DAN on `digits_dann_lightn` and `action_dann_lightn`, the MMD loss `T_mmd` takes some values below 0. This drives `T_total_loss` below 0 as well, because `T_total_loss = T_task_loss + 1 * T_mmd`. Is this correct?
To reproduce
Steps to reproduce the behavior:

1. In `digits_dann_lightn`, set `fast_dev_run=False` and `logger=True` in `main.py`.
2. Run `python main.py --cfg ./configs/MN2UP-DAN.yaml --gpus 1`.
3. Check the losses by printing them or via TensorBoard (a logging sketch follows below).
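For step 3, this is roughly how I inspect the losses. It is a hypothetical sketch assuming a PyTorch Lightning module (the `_lightn` suffix suggests Lightning); the class, network, and loss computations are illustrative stand-ins, not the repo's actual API, and only the `self.log()` pattern is the point:

```python
# Hypothetical sketch, not the repo's code: a toy Lightning module showing
# how the three losses could be logged so they appear in TensorBoard.
import torch
import pytorch_lightning as pl

class DANModuleSketch(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(10, 2)  # toy stand-in for the real network

    def training_step(self, batch, batch_idx):
        x, y = batch
        task_loss = torch.nn.functional.cross_entropy(self.net(x), y)
        mmd_loss = torch.tensor(0.0)  # stand-in for the real MMD term
        total_loss = task_loss + 1 * mmd_loss
        # With logger=True, these show up as scalar plots in TensorBoard.
        self.log("T_task_loss", task_loss)
        self.log("T_mmd", mmd_loss)
        self.log("T_total_loss", total_loss)
        return total_loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)
```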
**Stack trace/error message**
This is my output with `repeats=10`, `epoch=100`, and `init_epoch=20`. The `T_mmd` varies, and so does `T_total_loss`. I think the loss should be above 0.

Expected Behaviour
The loss should stay above 0, as it does with CDAN.
There are some useful links: ADA code, Xlearn code.
I checked both of those implementations, and ours is very similar to them, so I am not sure whether this loss output is correct.
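For context, below is a minimal sketch of the standard unbiased MMD^2 estimator (Gretton et al., 2012) with an RBF kernel. I am assuming our MMD follows this unbiased form; the kernel choice and function names here are mine, not the repo's. Removing the diagonal terms makes the estimate unbiased but lets it fluctuate around 0, so small negative values would be expected when the source and target features are close:

```python
# Minimal sketch of the standard unbiased MMD^2 estimator (assumption:
# the repo uses this unbiased form). RBF kernel and names are illustrative.
import torch

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    dist2 = torch.cdist(x, y) ** 2
    return torch.exp(-dist2 / (2 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    m, n = x.size(0), y.size(0)
    k_xx = rbf_kernel(x, x, sigma)
    k_yy = rbf_kernel(y, y, sigma)
    k_xy = rbf_kernel(x, y, sigma)
    # Excluding the i == j diagonal terms removes the estimator's bias,
    # but it also allows the estimate to dip below zero.
    term_xx = (k_xx.sum() - k_xx.diag().sum()) / (m * (m - 1))
    term_yy = (k_yy.sum() - k_yy.diag().sum()) / (n * (n - 1))
    return term_xx + term_yy - 2 * k_xy.mean()

# With x and y drawn from the same distribution, the true MMD^2 is 0,
# so the unbiased estimate fluctuates around 0 and can be negative.
torch.manual_seed(0)
x = torch.randn(64, 10)
y = torch.randn(64, 10)
print(mmd2_unbiased(x, y))  # a value near 0, sometimes negative
```

If that is what is happening here, negative `T_mmd` values may be expected behaviour; CDAN's adversarial loss, by contrast, is a cross-entropy and therefore nonnegative, which might explain the difference.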
Environment