[Multi-task Learning] Add GSgnnMultiTaskSharedEncoderModel #855

Merged: 10 commits merged from multi-task-model into awslabs:multi-task on May 28, 2024

Conversation

classicsong (Contributor) commented:

Issue #, if available:
#789

Description of changes:
GSgnnMultiTaskSharedEncoderModel allows multiple tasks to share the same GNN encoder but have separate decoders for each task.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@classicsong added the labels 0.3 and ready (able to trigger the CI) on May 23, 2024
Review comments (resolved):
- python/graphstorm/dataloading/dataloading.py
- python/graphstorm/model/multitask_gnn.py
- tests/unit-tests/test_gnn.py
@zhjwy9343 (Contributor) left a comment:

LGTM

@classicsong merged commit b5add94 into awslabs:multi-task on May 28, 2024
6 checks passed
@classicsong deleted the multi-task-model branch on May 28, 2024 at 20:31
classicsong added a commit that referenced this pull request May 31, 2024
*Issue #, if available:*
 #789

*Description of changes:*
Add support for multi-task learning. Users can define multiple tasks in
the same training loop. A task can be a node classification, node
regression, edge classification, edge regression, or link prediction
task. Each node classification or node regression task must be defined
on a single node type with one label field, but users can define
multiple node classification or regression tasks on the same node type.
Likewise, each edge classification or edge regression task must be
defined on a single edge type with one label field, but users can define
multiple edge classification or regression tasks on the same edge type.
For link prediction, users can define prediction targets on multiple
edge types.

### Graph construction
Update GraphStorm input config parsing to support multi-task learning.
Users can specify multiple training tasks for a training job through the
YAML file: by providing the `multi_task_learning` configurations, users
can define multiple training tasks. The following config defines two
training tasks, one for node classification and one for edge
classification.

```yaml
---
version: 1.0
gsf:
  basic:
    ...
  ...
  multi_task_learning:
    - node_classification:
      target_ntype: "movie"
      label_field: "label"
      mask_fields:
        - "train_mask_field_nc"
        - "val_mask_field_nc"
        - "test_mask_field_nc"
      task_weight: 1.0
    - edge_classification:
      target_etype:
        - "user,rating,movie"
      label_field: "rate"
      mask_fields:
        - "train_mask_field_ec"
        - "val_mask_field_ec"
        - "test_mask_field_ec"
      task_weight: 0.5 # weight of the task
```
Task-specific hyperparameters in multi-task learning are the same as
those in single-task learning, except that two new configs are
required: `mask_fields` and `task_weight`. The `mask_fields` config
provides the training, validation, and test masks for the task, and
`task_weight` gives its loss weight.
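
As a rough illustration of how this block could be consumed, the sketch
below reads the YAML into simple per-task records. The `TaskInfo`
dataclass and the parsing logic here are hypothetical stand-ins, not
GraphStorm's actual config parser.

```python
# Minimal sketch of reading the multi_task_learning block above into
# per-task records. TaskInfo and this parsing logic are illustrative,
# not GraphStorm's real implementation.
from dataclasses import dataclass

import yaml  # pip install pyyaml


@dataclass
class TaskInfo:
    task_type: str  # e.g. "node_classification"
    settings: dict  # label_field, mask_fields, task_weight, ...


def parse_multi_task_config(path):
    with open(path, "r", encoding="utf-8") as f:
        cfg = yaml.safe_load(f)
    tasks = []
    for entry in cfg["gsf"]["multi_task_learning"]:
        # In the YAML above, the task type appears as a key with no value;
        # the task's settings are its sibling keys.
        task_type = next(k for k, v in entry.items() if v is None)
        settings = {k: v for k, v in entry.items() if v is not None}
        tasks.append(TaskInfo(task_type=task_type, settings=settings))
    return tasks
```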


### DataLoader for multi-task learning
Add GSgnnMultiTaskDataLoader to support multi-task learning. 

When initializing a GSgnnMultiTaskDataLoader, users need to provide two
inputs: 1) a list of config.TaskInfo objects recording the information
of each task, and 2) a list of dataloaders, one for each training task.

At each training iteration, GSgnnMultiTaskDataLoader iteratively calls
each task's dataloader to generate a mini-batch and returns the
resulting list of mini-batches to the trainer.

The length of the dataloader (the number of batches per epoch) is
determined by the largest task, i.e., the task with the most
mini-batches (see the sketch below).
#834 
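
A minimal sketch of this batching scheme, assuming shorter task
dataloaders are simply restarted when exhausted (GSgnnMultiTaskDataLoader's
real implementation may differ):

```python
# Illustrative sketch of the multi-task batching described above: each
# iteration yields one mini-batch per task, and the epoch length follows
# the largest task. Restarting exhausted dataloaders is an assumption of
# this sketch, not necessarily GraphStorm's exact behavior.
class MultiTaskDataLoaderSketch:
    def __init__(self, task_infos, dataloaders):
        assert len(task_infos) == len(dataloaders)
        self.task_infos = task_infos
        self.dataloaders = dataloaders

    def __len__(self):
        # Number of batches per epoch = size of the largest task.
        return max(len(dl) for dl in self.dataloaders)

    def __iter__(self):
        iters = [iter(dl) for dl in self.dataloaders]
        for _ in range(len(self)):
            batches = []
            for i, task_info in enumerate(self.task_infos):
                try:
                    batch = next(iters[i])
                except StopIteration:
                    # A smaller task ran out of batches; restart it.
                    iters[i] = iter(self.dataloaders[i])
                    batch = next(iters[i])
                batches.append((task_info, batch))
            # One list of (task_info, mini-batch) pairs per iteration.
            yield batches
```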

### Evaluator for multi-task learning

GSgnnMultiTaskEvaluator accepts a set of evaluators, in the form of a
dict ({task_id: Evaluator, ...}), as input to initialize the multi-task
evaluator.

When doing evaluation, it accepts three arguments: val_results,
test_results, and total_iters. The val_results and test_results are
dicts in the format {task_id_0: result, task_id_1: result, ...}. The
GSgnnMultiTaskEvaluator calls the task-specific evaluator of each task
to compute its evaluation scores (see the sketch below).
#837 
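
A minimal sketch of this dispatch, assuming each per-task evaluator
exposes an evaluate(val_result, test_result, total_iters) method that
returns a (val_score, test_score) pair (that signature is an assumption
of this sketch, not the actual GraphStorm API):

```python
# Illustrative sketch of the {task_id: Evaluator} dispatch described
# above; the per-task evaluate() signature is an assumption, not the
# actual GSgnnMultiTaskEvaluator implementation.
class MultiTaskEvaluatorSketch:
    def __init__(self, evaluators):
        self.evaluators = evaluators  # dict: task_id -> evaluator

    def evaluate(self, val_results, test_results, total_iters):
        val_scores, test_scores = {}, {}
        for task_id, evaluator in self.evaluators.items():
            # Each task-specific evaluator scores its own results.
            val_scores[task_id], test_scores[task_id] = evaluator.evaluate(
                val_results.get(task_id),
                test_results.get(task_id),
                total_iters,
            )
        return val_scores, test_scores
```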

### Refactor graphstorm.model for multi-task learning

As the multi-task learning trainer invokes edge_mini_batch_predict,
lp_mini_batch_predict, and node_mini_batch_predict when conducting
evaluation or testing, refactor the code to allow these functions to
work with different decoders.
#843 
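
The shape of that refactor might look like the following sketch, where
the decoder becomes an explicit argument so each task can pass its own
head; the function name and signature here are illustrative, not
GraphStorm's actual API:

```python
# Hypothetical shape of a prediction helper after the refactor: the
# decoder is passed in explicitly so multi-task evaluation can reuse the
# same function with each task's own decoder. Illustrative only.
import torch


def mini_batch_predict_sketch(encoder, decoder, blocks, input_feats):
    with torch.no_grad():
        embs = encoder(blocks, input_feats)  # shared GNN embeddings
        return decoder(embs)                 # task-specific predictions
```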


### Add GSgnnMultiTaskSharedEncoderModel
GSgnnMultiTaskSharedEncoderModel allows multiple tasks to share the same
GNN encoder but have separate decoders for each task.
#855 
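
A minimal PyTorch sketch of the shared-encoder, per-task-decoder pattern
this class implements; the class and argument names below are
illustrative, not the actual GraphStorm interface:

```python
# Illustrative sketch of one shared GNN encoder feeding per-task decoder
# heads, with per-task loss weights as in the config above. Not the
# actual GSgnnMultiTaskSharedEncoderModel implementation.
import torch.nn as nn


class SharedEncoderMultiTaskSketch(nn.Module):
    def __init__(self, encoder, decoders, task_weights):
        super().__init__()
        self.encoder = encoder                   # shared by all tasks
        self.decoders = nn.ModuleDict(decoders)  # task_id -> decoder head
        self.task_weights = task_weights         # task_id -> loss weight

    def forward(self, mini_batches):
        # mini_batches: dict of task_id -> (inputs, labels, loss_fn)
        total_loss = 0.0
        for task_id, (inputs, labels, loss_fn) in mini_batches.items():
            embs = self.encoder(inputs)            # shared representation
            logits = self.decoders[task_id](embs)  # per-task decoder
            task_loss = loss_fn(logits, labels)
            total_loss = total_loss + self.task_weights[task_id] * task_loss
        return total_loss
```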

### Add Multi-task entrypoint
#849 


By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

---------

Co-authored-by: Xiang Song <xiangsx@amazon.com>