Hi @antoinedemathelin,

`task_loss` already holds the loss of the current batch during training, so why do we also add `sum(self.task_.losses)` to it here?

Could you point me to where `self.task_.losses` is updated or computed?

Thanks in advance.
https://github.com/adapt-python/adapt/blob/ce38413733751f3e108e6bc274084574eebf7a33/adapt/feature_based/_dann.py#L153C1-L155C50
"
task_loss += sum(self.task_.losses)
disc_loss += sum(self.discriminator_.losses)
enc_loss += sum(self.encoder_.losses)
"
Hi @sreenivasaupadhyaya,

`self.task_.losses` contains any losses registered by the layers of the `task_` network, if there are any: for example, the penalty added when you use a layer's `kernel_regularizer` argument (also known as weight decay).
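For background, here is a minimal standalone sketch (not taken from the adapt codebase) of how Keras populates `model.losses`: a `kernel_regularizer` registers a penalty that is recomputed from the current weights on every forward pass, and a custom training step has to add `sum(model.losses)` to the batch loss by hand, which is what the `_dann.py` lines above do. The toy model, input shapes, and MSE loss below are illustrative assumptions.

```python
import tensorflow as tf

# Toy network with weight decay on its first Dense layer. Keras registers
# the L2 penalty as a layer loss; it is NOT folded into the batch loss
# automatically when you write a custom training step.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        16, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dense(1),
])
model.build(input_shape=(None, 8))

x = tf.random.normal((32, 8))   # illustrative batch
y = tf.random.normal((32, 1))

with tf.GradientTape() as tape:
    y_pred = model(x, training=True)
    # Data-fitting loss on the current batch.
    task_loss = tf.reduce_mean(tf.keras.losses.mse(y, y_pred))
    # model.losses holds the regularization penalties, recomputed on each
    # forward pass; add them by hand, as done in _dann.py.
    task_loss += sum(model.losses)

grads = tape.gradient(task_loss, model.trainable_variables)
```

If none of the layers define a regularizer or call `add_loss`, then `model.losses` is an empty list and `sum(model.losses)` is simply 0, so the addition is a no-op.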