Basic PyTorch implementation of PCGrad

This is a basic implementation of the PCGrad loss proposed in the paper Gradient Surgery for Multi-Task Learning. Please see PC_Grad for the official implementation and citation info.

Note:

  1. This loss function has converged for me on a separate project.
  2. Don't forget to call optimizer.step() after computing gradients via the loss function.

Usage:


output = net(input)
loss1 = criterion_one(output[0], labels[0])
loss2 = criterion_two(output[1], labels[1])

# Project conflicting gradients and backpropagate
PCGrad_loss([loss1, loss2], [optimizer], [net], device)

optimizer.step()
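For reference, the core projection step from the paper can be sketched as follows. This is an illustrative, pure-Python sketch (plain lists in place of torch tensors), not the code in this repository; the function name `project_conflicting` is hypothetical.

```python
# Sketch of the PCGrad "gradient surgery" projection (Yu et al., 2020),
# using plain Python lists instead of torch tensors. Illustrative only;
# project_conflicting is a hypothetical name, not this repo's API.
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_conflicting(grads, seed=0):
    """For each task gradient g_i, remove the component that conflicts
    (negative dot product) with each other task gradient, then sum."""
    rng = random.Random(seed)
    projected = []
    for i, g in enumerate(grads):
        g = list(g)  # work on a copy of g_i
        others = [g_j for j, g_j in enumerate(grads) if j != i]
        rng.shuffle(others)  # the paper iterates the other tasks in random order
        for g_j in others:
            d = dot(g, g_j)
            if d < 0:  # conflicting: subtract the projection of g onto g_j
                scale = d / dot(g_j, g_j)
                g = [a - scale * b for a, b in zip(g, g_j)]
        projected.append(g)
    # the final update direction is the sum of the projected gradients
    return [sum(component) for component in zip(*projected)]
```

Non-conflicting gradients pass through unchanged, e.g. `project_conflicting([[1.0, 0.0], [0.0, 1.0]])` returns `[1.0, 1.0]`, while conflicting ones are deconflicted before summing.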

Reference

Please cite as:

@article{yu2020gradient,
  title={Gradient surgery for multi-task learning},
  author={Yu, Tianhe and Kumar, Saurabh and Gupta, Abhishek and Levine, Sergey and Hausman, Karol and Finn, Chelsea},
  journal={arXiv preprint arXiv:2001.06782},
  year={2020}
}
