Dear Wengong Jin, I'd like to ask for your help with molopt while running pretrain.py.
I have successfully run all of the molopt examples with data/train.txt, data/vocab.txt, and data/train.logP-SA.
However, a RuntimeError occurs with my own training dataset, a vocabulary generated with python ../jtnn/mol_tree.py < my_dataset.txt, and my own logP property file.
The dimensions seem to go wrong during node aggregation.
What is your opinion on this issue?
Best regards, Minkyu Ha
(My environment is the same as yours: Python 2.7, CUDA 8.0, PyTorch 0.3.1.)
Model #Params: 4271K
Traceback (most recent call last):
File "pretrain.py", line 69, in <module>
loss, kl_div, wacc, tacc, sacc, dacc, pacc = model(batch, beta=0)
File "/home/minkyuha/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
result = self.forward(*input, **kwargs)
File "/home/minkyuha/new-jtnn/icml18-jtnn/jtnn/jtprop_vae.py", line 76, in forward
tree_mess, tree_vec, mol_vec = self.encode(mol_batch)
File "/home/minkyuha/new-jtnn/icml18-jtnn/jtnn/jtprop_vae.py", line 57, in encode
tree_mess,tree_vec = self.jtnn(root_batch)
File "/home/minkyuha/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
result = self.forward(*input, **kwargs)
File "/home/minkyuha/new-jtnn/icml18-jtnn/jtnn/jtnn_enc.py", line 62, in forward
cur_h_nei = torch.cat(cur_h_nei, dim=0).view(-1,MAX_NB,self.hidden_size)
RuntimeError: invalid argument 2: size '[-1 x 8 x 420]' is invalid for input with 144900 elements at /opt/conda/conda-bld/pytorch_1523240155148/work/torch/lib/TH/THStorage.c:37
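The numbers in the error message already point at the cause. A quick arithmetic check (all values taken from the traceback above; MAX_NB = 8 is the default in jtnn_enc.py, hidden_size = 420 from the reported shape):

```python
# Values taken from the RuntimeError above.
MAX_NB = 8           # max neighbors per junction-tree node (jtnn_enc.py default)
hidden_size = 420    # hidden vector size, from the reported shape [-1 x 8 x 420]
n_elements = 144900  # total elements in the tensor being reshaped

# view(-1, MAX_NB, hidden_size) only works when the element count is a
# multiple of MAX_NB * hidden_size.
n_vectors = n_elements // hidden_size  # number of neighbor vectors in the batch
remainder = n_vectors % MAX_NB         # non-zero -> the reshape must fail

# 144900 / 420 = 345 neighbor vectors, and 345 = 43 * 8 + 1, so the batch
# contains one vector too many: some tree node has more than MAX_NB
# neighbors, and padding with a negative pad length silently adds nothing.
print(n_vectors, remainder)  # 345 1
```

So the failure suggests a molecule in the custom dataset whose junction tree has a node with more than 8 neighbors.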
I got the same issue when I tried to start training with my own dataset.
However, increasing the global parameter MAX_NB doesn't work in my case.
Has anyone solved this issue another way?
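For reference, here is a minimal sketch of the padding step around the failing line in jtnn_enc.py. The function name `pad_neighbors` and the scalar stand-ins are hypothetical, chosen for illustration (the real code pads lists of hidden-state tensors); the point is that a node with more than MAX_NB neighbors should be caught explicitly rather than silently breaking the batch layout:

```python
MAX_NB = 8
PADDING = 0.0  # stand-in for the zero hidden vector used by the encoder

def pad_neighbors(neighbors, max_nb=MAX_NB):
    """Pad a node's neighbor-message list to exactly max_nb entries.

    In the original code, a node with more than max_nb neighbors yields a
    negative pad length, list.extend adds nothing, and the later
    view(-1, MAX_NB, hidden_size) fails. Raising here surfaces the bad
    molecule instead.
    """
    if len(neighbors) > max_nb:
        raise ValueError(
            "node has %d neighbors but MAX_NB is %d; "
            "raise MAX_NB or filter this molecule out"
            % (len(neighbors), max_nb)
        )
    return neighbors + [PADDING] * (max_nb - len(neighbors))

print(len(pad_neighbors([1.0, 2.0, 3.0])))  # 8
```

With a check like this in place (or a preprocessing pass that scans the dataset for junction-tree nodes of degree greater than MAX_NB), the offending SMILES strings can be identified and either filtered out or handled by raising MAX_NB.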