bugs in minibatch training #131
Currently, many models do not support mini-batch training; we are now trying to fix this. You may refer to RGCN.py for how to support mini-batch. However, models like SimpleHGN, HGT, and HetSANN may have more trouble, as these models need `dgl.to_homogeneous`. As far as I know, this API has bugs when used with mini-batch, and we are reporting this to the DGL team.
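As general background, mini-batch training on a graph means iterating over batches of seed nodes and building each batch's computation graph from their sampled neighborhood. The sketch below shows only the batching half of that pattern in plain Python; `iter_minibatches` is a hypothetical helper, not part of OpenHGNN or DGL, and the sampler step is left as a comment.

```python
import random

def iter_minibatches(node_ids, batch_size, shuffle=True):
    """Yield successive mini-batches of seed node ids."""
    ids = list(node_ids)
    if shuffle:
        random.shuffle(ids)
    for start in range(0, len(ids), batch_size):
        yield ids[start:start + batch_size]

# In a real trainer, each batch of seed nodes would be handed to a
# neighborhood sampler (e.g. DGL's MultiLayerFullNeighborSampler)
# to build per-layer blocks before the forward pass.
batches = list(iter_minibatches(range(10), batch_size=4, shuffle=False))
# batches == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```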
Thank you for your reply. I finally solved the mini-batch training problem in my scenario. But I found a very strange little problem: the process does not shut down properly.

To reproduce:

```python
from openhgnn.config import Config

config = Config(file_path="./model/config.ini", model="SimpleHGN",
                dataset="imdb4MAGNN", task="node_classification", gpu=2)
print(config)
```

The console output appears, but the program does not shut down. If I instead run the following code, it ends normally:

```python
import configparser
import numpy as np
import torch as th
import sys

print(111)
```
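One common cause of a script that prints its output but never exits is a leftover non-daemon thread (for example from a data-loading worker): Python only terminates once every non-daemon thread has finished. The helper below is a hypothetical diagnostic, not an OpenHGNN API, for checking whether such threads remain.

```python
import threading

def lingering_nondaemon_threads():
    """Return live non-daemon threads other than the main thread.

    If this list is non-empty when your script reaches its last line,
    those threads are what keeps the process from shutting down.
    """
    main = threading.main_thread()
    return [t for t in threading.enumerate()
            if t is not main and not t.daemon and t.is_alive()]

print(lingering_nondaemon_threads())  # [] when nothing blocks shutdown
```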
🐛 Bug
To Reproduce

The error occurs in the `_mini_train_step` function in trainerflow/node_classification.py when using the `mini_batch_flag` in the `node_classification` task with the `SimpleHGN` model.

Expected behavior
Minibatch training on a large heterograph
Environment
Additional context
With `MultiLayerFullNeighborSampler`, `blocks` is a list (line 164), but the expected input in the forward function of the model (e.g. SimpleHGN) is a hg (line 159).