
RuntimeError in molopt/pretrain.py #24

Closed
MinkyuHa opened this issue Sep 11, 2018 · 2 comments

Comments

@MinkyuHa

Dear Wengong Jin,

I'd like to ask for your help with molopt while running pretrain.py.
I have successfully run all of the examples in molopt with data/train.txt, data/vocab.txt, and data/train.logP-SA.

However, a RuntimeError occurs with my own training dataset, a vocab generated with `python ../jtnn/mol_tree.py < my_dataset.txt`, and my own logP property file.

It seems to be a wrong dimension during node aggregation.
What's your opinion on this issue?

Best Regards, Minkyu Ha

(My environment is the same as yours: Python 2.7, CUDA 8.0, PyTorch 0.3.1.)

```
Model #Params: 4271K
Traceback (most recent call last):
  File "pretrain.py", line 69, in <module>
    loss, kl_div, wacc, tacc, sacc, dacc, pacc = model(batch, beta=0)
  File "/home/minkyuha/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/minkyuha/new-jtnn/icml18-jtnn/jtnn/jtprop_vae.py", line 76, in forward
    tree_mess, tree_vec, mol_vec = self.encode(mol_batch)
  File "/home/minkyuha/new-jtnn/icml18-jtnn/jtnn/jtprop_vae.py", line 57, in encode
    tree_mess,tree_vec = self.jtnn(root_batch)
  File "/home/minkyuha/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/minkyuha/new-jtnn/icml18-jtnn/jtnn/jtnn_enc.py", line 62, in forward
    cur_h_nei = torch.cat(cur_h_nei, dim=0).view(-1,MAX_NB,self.hidden_size)
RuntimeError: invalid argument 2: size '[-1 x 8 x 420]' is invalid for input with 144900 elements at /opt/conda/conda-bld/pytorch_1523240155148/work/torch/lib/TH/THStorage.c:37
```
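The numbers in the error message are consistent with a tree node having more neighbors than the `MAX_NB` cap: `view(-1, MAX_NB, hidden_size)` only succeeds when the flattened tensor's element count is a multiple of `MAX_NB * hidden_size`. A minimal arithmetic sketch using only the figures from the traceback (no torch required):

```python
# Figures taken verbatim from the traceback above.
MAX_NB = 8          # default neighbor cap in jtnn_enc.py
hidden_size = 420   # hidden dimension of the model
numel = 144900      # element count torch.cat(cur_h_nei, dim=0) produced

# view(-1, MAX_NB, hidden_size) needs numel to be divisible by MAX_NB * hidden_size.
remainder = numel % (MAX_NB * hidden_size)
print(remainder)                # 420 -> not divisible, hence the RuntimeError

# The tensor is, however, a whole number of hidden vectors (neighbor slots):
slots = numel // hidden_size
print(slots, slots % MAX_NB)    # 345 slots, 1 slot beyond a multiple of 8
```

One neighbor slot beyond a multiple of 8 suggests that at least one node in the batch has more than 8 neighbors, which breaks the padding logic that assumes exactly `MAX_NB` entries per node.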

@XiuHuan-Yap

Hi @MinkyuHa, try increasing the MAX_NB global parameter in jtnn_dec.py and jtnn_enc.py.

I increased it from 8 to 32. Note that this will increase GPU memory usage.
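Rather than guessing a value, MAX_NB can be sized from the data itself. A hedged sketch, where the per-node neighbor lists are hypothetical stand-ins (in the real code you would build MolTree objects from your SMILES and count `len(node.neighbors)` for each node):

```python
def required_max_nb(neighbor_lists):
    """Smallest MAX_NB that covers every node's neighbor count in one tree."""
    return max((len(nbrs) for nbrs in neighbor_lists), default=0)

# Toy stand-in for per-node neighbor lists collected from a dataset.
trees = [
    [["b"], ["a", "c", "d"], ["b"], ["b"]],           # max degree 3
    [["x", "y", "z", "w", "v", "u", "t", "s", "r"]],  # max degree 9 > default 8
]

worst = max(required_max_nb(tree) for tree in trees)
print(worst)  # 9 -> the default MAX_NB = 8 would break on this dataset
```

Scanning the dataset once this way gives the smallest cap that avoids the reshape error without raising MAX_NB (and GPU memory usage) more than necessary.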

@MinkyuHa MinkyuHa closed this as completed Oct 6, 2018

minstar commented Mar 15, 2020

I've got the same issue when I tried to start training with my own dataset.
However, increasing the global parameter MAX_NB doesn't fix it in my case.
Has anyone solved this issue in another way?
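If raising MAX_NB is not viable (for example because of GPU memory), one alternative is to filter the training set so that no junction-tree node exceeds the cap. A hedged pure-Python sketch; the per-molecule degree lists here are hypothetical and would in practice be derived from MolTree nodes:

```python
MAX_NB = 8  # the cap used by the encoder/decoder

def fits_cap(node_degrees, cap=MAX_NB):
    """True if every node in the tree stays within the neighbor cap."""
    return all(degree <= cap for degree in node_degrees)

# Toy per-molecule node-degree lists (derive these from MolTree in practice).
dataset = {
    "mol_a": [1, 3, 2],
    "mol_b": [1, 9, 2],  # one node has 9 neighbors -> exceeds the default cap
}

kept = [name for name, degrees in dataset.items() if fits_cap(degrees)]
print(kept)  # ['mol_a']
```

Dropping the few offending molecules trades a small amount of training data for keeping the original MAX_NB and memory footprint.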
