Understanding this operation #120

Closed
sreenivasaupadhyaya opened this issue Jan 11, 2024 · 2 comments

sreenivasaupadhyaya commented Jan 11, 2024

Hi @antoinedemathelin ,

The variable task_loss already holds the loss of the current batch during training, so why do we add sum(self.task_.losses) to it here?
Could you point me to where self.task_.losses is updated or calculated?

Thanks in advance.

https://github.com/adapt-python/adapt/blob/ce38413733751f3e108e6bc274084574eebf7a33/adapt/feature_based/_dann.py#L153C1-L155C50

"
task_loss += sum(self.task_.losses)
disc_loss += sum(self.discriminator_.losses)
enc_loss += sum(self.encoder_.losses)
"

antoinedemathelin (Collaborator) commented

Hi @sreenivasaupadhyaya,
self.task_.losses contains the losses computed in the layers of the task_ network, if any, for example when you use the kernel_regularizer argument of a layer (also known as weight decay).
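
For illustration, here is a minimal sketch (assuming TensorFlow 2 with the tf.keras API, which adapt builds on) of how a kernel_regularizer populates a model's .losses list:

```python
import tensorflow as tf

# A toy "task" network with an L2 penalty (weight decay) on its kernel.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        4,
        input_shape=(3,),
        kernel_regularizer=tf.keras.regularizers.L2(0.01),
    ),
])

# Once the model is built, `model.losses` holds one scalar tensor per
# regularized layer; summing them gives the total regularization penalty.
print(model.losses)       # [<tf.Tensor ...>] -> 0.01 * sum(kernel ** 2)
print(sum(model.losses))  # scalar tensor with the total penalty
```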

This mirrors what the default train_step of TensorFlow does. In newer versions, self.losses is passed as the regularization_losses argument to the compiled_loss function (https://github.com/keras-team/keras/blob/601488fd4c1468ae7872e132e0f1c9843df54182/keras/engine/training.py#L1209), but the behavior is the same.
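
As a hedged sketch of that pattern in a custom train_step (Keras 2 style; RegularizedModel is a hypothetical name used only for this example, not part of adapt):

```python
import tensorflow as tf

class RegularizedModel(tf.keras.Model):
    # A custom train_step following the same pattern as the DANN lines
    # quoted above: batch loss plus the sum of the layer losses.
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            # Data-fit loss for the current batch...
            loss = self.compiled_loss(y, y_pred)
            # ...plus the regularization losses collected by the layers.
            loss += sum(self.losses)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        return {"loss": loss}
```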

sreenivasaupadhyaya (Author) commented

Thanks @antoinedemathelin
