for GNN #24
Comments
I know very little about graph neural networks, but maybe you can try defining the encoder in HPS as some graph layers, so that shared features are extracted across tasks.
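The suggestion above, a shared graph encoder under hard parameter sharing (HPS) feeding task-specific heads, can be sketched in plain PyTorch. This is only an illustrative sketch: the class names (`SharedGNNEncoder`, `MultiTaskGNN`), the single dense graph convolution, and the toy shapes are assumptions, not LibMTL code.

```python
# Illustrative HPS sketch: one shared graph-convolution layer (adj @ X @ W)
# feeding per-task linear heads. Names and shapes are hypothetical.
import torch
import torch.nn as nn

class SharedGNNEncoder(nn.Module):
    """A single graph-convolution layer shared by all tasks."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # Propagate node features over the (normalized) adjacency matrix.
        return torch.relu(adj @ self.lin(x))

class MultiTaskGNN(nn.Module):
    def __init__(self, in_dim, hid_dim, task_out_dims):
        super().__init__()
        self.encoder = SharedGNNEncoder(in_dim, hid_dim)       # shared
        self.heads = nn.ModuleList(
            [nn.Linear(hid_dim, d) for d in task_out_dims]      # task-specific
        )

    def forward(self, x, adj):
        h = self.encoder(x, adj)
        return [head(h) for head in self.heads]

# Toy usage: 5 nodes, 8 features, two tasks (3-class and 1-dim regression).
x = torch.randn(5, 8)
adj = torch.eye(5)          # self-loops only, just for illustration
model = MultiTaskGNN(8, 16, [3, 1])
outs = model(x, adj)        # shapes: (5, 3) and (5, 1)
```

Each head sees the same encoder output, which is the defining property of hard parameter sharing.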
I see. How about the attention layer on top of the graph? I should define the encoder after that, right? Do you have an MTL example (either code or reference) for text classification?
We have no such GNN example in LibMTL at the moment, but there are many works on multi-task text classification, and I think they are easy to find on Google or GitHub. You can refer to them to see how a multi-task GNN is built.
Yes, thank you for your suggestion.
Closed as no further updates.
@aozorahime Hi, I have been using PyTorch Geometric and added my own code to make it work for multi-task learning. Let me know if you're interested. @Baijiong-Lin Cool repo! I am interested in using loss weighting such as RLW for my loss.
@davodogster Maybe you can try it; I am not sure whether it works in such a GNN case.
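For context, the RLW idea mentioned above (random loss weighting) can be sketched in a few lines: at every training step, draw random weights via a softmax over standard-normal samples and take the weighted sum of the per-task losses. The function name and the loss values below are illustrative, not LibMTL's API.

```python
# Hypothetical RLW sketch: randomly re-weight the task losses at each step.
import torch

def rlw_combine(task_losses):
    """task_losses: list of scalar loss tensors, one per task."""
    losses = torch.stack(task_losses)
    # Softmax over standard-normal samples -> positive weights summing to 1.
    weights = torch.softmax(torch.randn(len(task_losses)), dim=-1)
    return (weights * losses).sum()

# Toy usage with three illustrative loss values.
loss = rlw_combine([torch.tensor(0.7), torch.tensor(1.2), torch.tensor(0.3)])
```

Because the weights are positive and sum to one, the combined loss is always a convex combination of the individual task losses.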
@davodogster Hi, cool you made it! Have you published it in your repo? |
@aozorahime No Sorry, I haven't published anything yet. How urgent is it?
This gets passed to the PyG `Dataset` class, and the dataset then gets passed to the dataloader:

```python
test_loader = DataLoader(DBH_DS_test, batch_size=TRAIN_BS, shuffle=False, num_workers=25)
```
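PyG specifics aside, the general pattern of a dataloader yielding several targets per example can be sketched in plain PyTorch. The dataset contents and shapes below are made up for illustration; a real PyG `Dataset` would return graph `Data` objects instead of flat tensors.

```python
# Illustrative multi-target dataloader: each example carries one target
# per task (class label, regression value, binary label).
import torch
from torch.utils.data import DataLoader, TensorDataset

features = torch.randn(100, 8)
y_cls = torch.randint(0, 3, (100,))         # task 1: 3-class labels
y_reg = torch.randn(100)                    # task 2: regression targets
y_bin = torch.randint(0, 2, (100,)).float() # task 3: binary labels

ds = TensorDataset(features, y_cls, y_reg, y_bin)
loader = DataLoader(ds, batch_size=16, shuffle=False, num_workers=0)

xb, cb, rb, bb = next(iter(loader))  # one batch: features + three targets
```

Each batch then provides everything the three task heads and their losses need in a single pass.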
@Baijiong-Lin Do you know what the variable `task_num` is, and what shape it has, in the example above? How do I modify it for three-task learning? I think it might work for my use case, but I will need some tips :)
@davodogster Cool! So the point is that you use three different losses for the tasks, right? MTL is for my thesis currently. Can we discuss this further later, or is it okay to discuss it here? :D
@aozorahime Yup, you use three losses. They can be a mix of classification and regression losses, depending on the tasks/params. Then you either sum the losses or use a technique like loss weighting / GradNorm, etc. Alternatively, if the tasks are regression, you can scale all targets to the same scale, e.g. mean 0 and SD 1, or min-max to 0-100; then I think you don't need to weight the losses. https://github.com/ywatanabe1989/custom_losses_pytorch/blob/master/multi_task_loss.py Yes, we can discuss it more some time if needed. March is a really busy month for me this year, but I can try to fit it in! :)
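The summed three-task loss described above, mixing classification and regression, might look like the sketch below. The fixed weights, function name, and toy shapes are all illustrative assumptions; GradNorm or RLW would set the weights adaptively instead.

```python
# Hypothetical three-task loss: cross-entropy for two classification heads,
# MSE for a regression head, combined with hand-picked fixed weights.
import torch
import torch.nn as nn

ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
weights = [1.0, 1.0, 0.5]  # illustrative; a weighting method would tune these

def multi_task_loss(preds, targets):
    """preds/targets: (cls1, cls2, reg) predictions and matching targets."""
    l1 = ce(preds[0], targets[0])   # task 1: classification
    l2 = ce(preds[1], targets[1])   # task 2: classification
    l3 = mse(preds[2], targets[2])  # task 3: regression
    return weights[0] * l1 + weights[1] * l2 + weights[2] * l3

# Toy usage: batch of 4 with 3-class, 5-class, and scalar-regression tasks.
preds = (torch.randn(4, 3), torch.randn(4, 5), torch.randn(4))
targets = (torch.randint(0, 3, (4,)), torch.randint(0, 5, (4,)), torch.randn(4))
loss = multi_task_loss(preds, targets)
```

Scaling all regression targets to a common range, as suggested above, reduces how much these weights matter.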
Hi, I know it's kind of off-topic, but I am curious whether I can apply multi-task learning to a graph neural network. From what I learned about HPS, we share the encoder/decoder across tasks. I am wondering: should I create an encoder on top of the graph layers? I'm kind of stuck in this experiment; any suggestion would be helpful. Thanks