In the observe_class_IL_batch function of ergnn_model.py, when sampling the subgraph for a task ID t > 0, the code appears to sample from the entire dataset. However, the entire graph should not be available for any task ID during training, and the sampled subgraph may include inter-task edges between different task IDs already present in the dataset. Therefore, this code seems to take an unfair advantage in the class-incremental setting without inter-task edges (pipeline_class_IL_no_inter_edge_minibatch).
Could you please clarify our concern?
# to facilitate the methods like ER-GNN to only retrieve nodes
In the line above, remove_edge is used to remove the edges from the retrieved subgraphs, so the inter-task edges do not participate in methods like ER-GNN.
Besides, it is true that the entire graph should not be available during training. The code you quoted only retrieves the stored node IDs, which means only these buffered node IDs are available for memory replay. Therefore, although the model retrieves the nodes from the dataset every time for memory replay, since it only gets the previously stored IDs, this is equivalent to storing the nodes themselves, and the model does not access the entire dataset.
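To make the argument concrete, here is a minimal sketch of the replay logic described above. All names (replay_subgraph, buffer_ids, node_task_ids) are illustrative, not the repo's actual API: the buffer stores only node IDs, replay touches exactly those IDs, and edges crossing task boundaries are dropped as remove_edge does.

```python
def replay_subgraph(edges, node_task_ids, buffer_ids):
    """Restrict the graph to buffered node IDs and drop inter-task edges.

    edges: list of (u, v) pairs over the full dataset
    node_task_ids: dict mapping node ID -> task ID
    buffer_ids: node IDs previously stored by the sampler
    """
    buffered = set(buffer_ids)
    kept_edges = [
        (u, v) for (u, v) in edges
        if u in buffered and v in buffered            # only buffered nodes
        and node_task_ids[u] == node_task_ids[v]      # drop inter-task edges
    ]
    # Only the buffered IDs are ever looked up, so nothing outside the
    # buffer is accessed even though `edges` covers the whole dataset.
    return sorted(buffered), kept_edges

# Toy graph: nodes 0-3 belong to task 0, nodes 4-5 to task 1.
edges = [(0, 1), (1, 2), (2, 4), (4, 5)]   # (2, 4) is an inter-task edge
task_ids = {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1}
buffer_ids = [1, 2, 4, 5]                   # IDs stored for memory replay

nodes, kept = replay_subgraph(edges, task_ids, buffer_ids)
print(nodes)  # [1, 2, 4, 5]
print(kept)   # [(1, 2), (4, 5)] -- (2, 4) dropped as inter-task
```

The point of the sketch is the equivalence the answer states: because the lookup is keyed on the stored IDs alone, retrieving nodes from the dataset at replay time is observationally the same as having stored the node features in the buffer.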