
Problem to run demo.py #7

Closed
vitorom-01 opened this issue Jan 9, 2023 · 10 comments

Comments

@vitorom-01

After following all the commands in the "Installation" section, I tried to run the demo with:

cd ./GLAM/src_1gp
python3 demo.py

but I get this error:

(GLAM) vito@vito-HP-ENVY-15-Notebook-PC:~/project/GLAM/src_1gp$ python3 demo.py
Traceback (most recent call last):
  File "demo.py", line 2, in <module>
    os.chdir(os.path.dirname(__file__))
FileNotFoundError: [Errno 2] No such file or directory: ''

I don't know what's wrong or how to resolve this problem.

@yvquanli
Owner

This is because the directory-change command failed. You can delete this line

os.chdir(os.path.dirname(__file__))

and try again.
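
For reference, a minimal alternative to deleting the line: os.path.dirname(__file__) returns an empty string when the script is launched by its bare filename, and os.chdir('') then raises the FileNotFoundError shown above. Resolving the path to an absolute one first avoids this (a small sketch, not the shipped demo.py):

import os

# Resolve the script's location to an absolute path before changing into it,
# so os.path.dirname() can never return the empty string ''.
os.chdir(os.path.dirname(os.path.abspath(__file__)))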

@vitorom-01
Author

vitorom-01 commented Jan 11, 2023 via email

@yvquanli
Owner

This is because your torch_sparse installation failed.
Please show me your versions of torch and cudatoolkit.
Please do not install the latest versions of them.
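
A quick way to check and report those versions (a minimal sketch; the torch_sparse import itself is expected to fail on a broken install):

import torch

print(torch.__version__)   # torch version
print(torch.version.cuda)  # CUDA version torch was built against (None for CPU-only builds)

import torch_sparse        # this is exactly the import that fails when the build mismatches
print(torch_sparse.__version__)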

@yvquanli
Owner

We found a problem with the environment installation; it may be caused by torch_sparse.

rusty1s/pytorch_sparse#207

Anyway, we will provide a new environment config later.
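
To check whether torch_sparse is the culprit, here is a minimal smoke test adapted from the pytorch_sparse README (a mismatched build usually errors at import or on the first call):

import torch
from torch_sparse import spmm

# Multiply a 3x3 sparse matrix (given as COO indices and values) by a dense 3x2 matrix.
index = torch.tensor([[0, 0, 1, 2, 2],
                      [0, 2, 1, 0, 1]])
value = torch.tensor([1., 2., 4., 1., 3.])
matrix = torch.tensor([[1., 4.], [2., 5.], [3., 6.]])

out = spmm(index, value, 3, 3, matrix)
print(out)  # reaching this line means the torch_sparse extension loaded and ran correctly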

@yvquanli
Owner

Thank you for raising this issue; all problems are solved!

@vitorom-01
Author

vitorom-01 commented Jan 13, 2023

demo.py runs correctly!
So I've tried the command:
python3 run.py --epochs 1

(GLAM) user@user-HP-ENVY-15-Notebook-PC:~/GLAM/src_1gp$ python3 run.py --epochs 1
Loading dataset...
Training init...
################################################################################
dataset_root:../../Dataset/GLAM-GP
dataset:esol
split:random
seed:1234
split_seed:1234
gpu:0
note:None2
hid_dim_alpha:4
mol_block:_NNConv
e_dim:1024
out_dim:1
message_steps:3
mol_readout:GlobalPool5
pre_norm:_None
graph_norm:_PairNorm
flat_norm:_None
end_norm:_None
pre_do:_None()
graph_do:_None()
flat_do:Dropout(0.2)
end_do:Dropout(0.2)
pre_act:RReLU
graph_act:RReLU
flat_act:RReLU
graph_res:1
batch_size:32
epochs:1
loss:mse
optim:Adam
k:6
lr:0.001
lr_reduce_rate:0.7
lr_reduce_patience:20
early_stop_patience:50
verbose_patience:500
################################################################################
save id: 2023-01-13_14:20:41.944_seed_1234
run device: cpu
train set num:904 valid set num:112 test set num: 112
total parameters:454789
################################################################################
Architecture(
(mol_lin0): LinearBlock(
(norm): _None()
(dropout): _None()
(linear): Linear(in_features=15, out_features=60, bias=True)
(act): RReLU(lower=0.125, upper=0.3333333333333333)
)
(mol_conv): MessageBlock(
(norm): _PairNorm(
(norm): PairNorm()
)
(dropout): _None()
(conv): _NNConv(
(conv): NNConv(60, 60, aggr="mean", nn=Sequential(
(0): Linear(in_features=4, out_features=32, bias=True)
(1): ReLU()
(2): Linear(in_features=32, out_features=3600, bias=True)
))
)
(gru): GRU(60, 60)
(act): RReLU(lower=0.125, upper=0.3333333333333333)
)
(mol_readout): GlobalPool5()
(mol_flat): LinearBlock(
(norm): _None()
(dropout): Dropout(p=0.2, inplace=False)
(linear): Linear(in_features=300, out_features=1024, bias=True)
(act): RReLU(lower=0.125, upper=0.3333333333333333)
)
(lin_out1): LinearBlock(
(norm): _None()
(dropout): Dropout(p=0.2, inplace=False)
(linear): Linear(in_features=1024, out_features=1, bias=True)
(act): _None()
)
)
################################################################################
Training start...
0%| | 0/1 [00:00<?, ?it/s]batch 0 training loss: 8.70186 time elapsed 0.00 hrs (0.0 mins)
Epoch:0 trn_loss:13.60003 val_loss:3.30766 val_result:{'ci': 0.7748146954560103, 'mse': 3.4477494, 'rmse': 1.8568116157265382, 'r2': 0.32129932361229274} lr_cur:0.0010000 time elapsed 0.00 hrs (0.0 mins)
Model saved at epoch 0
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.25s/it]
Model saved at epoch 0
Testing...
The best ckpt is /home/user/GLAM/src_1gp/log_esol/2023-01-13_14:20:41.944_seed_1234/best_save.ckpt
Ckpt loading: /home/user/GLAM/src_1gp/log_esol/2023-01-13_14:20:41.944_seed_1234/best_save.ckpt
{'dataset_root': '../../Dataset/GLAM-GP', 'dataset': 'esol', 'split': 'random', 'seed': 1234, 'split_seed': 1234, 'gpu': 0, 'note': 'None2', 'hid_dim_alpha': 4, 'mol_block': '_NNConv', 'e_dim': 1024, 'out_dim': 1, 'message_steps': 3, 'mol_readout': 'GlobalPool5', 'pre_norm': '_None', 'graph_norm': '_PairNorm', 'flat_norm': '_None', 'end_norm': '_None', 'pre_do': '_None()', 'graph_do': '_None()', 'flat_do': 'Dropout(0.2)', 'end_do': 'Dropout(0.2)', 'pre_act': 'RReLU', 'graph_act': 'RReLU', 'flat_act': 'RReLU', 'graph_res': 1, 'batch_size': 32, 'epochs': 1, 'loss': 'mse', 'optim': 'Adam', 'k': 6, 'lr': 0.001, 'lr_reduce_rate': 0.7, 'lr_reduce_patience': 20, 'early_stop_patience': 50, 'verbose_patience': 500}
{'testloss': 3.831183910369873, 'valloss': 3.3076584339141846}|{'ci': 0.7792981326464906, 'mse': 3.5419254, 'rmse': 1.8820003799940774, 'r2': 0.2003656506781354}|{'valci': 0.7748146954560103, 'valmse': 3.4477494, 'valrmse': 1.8568116157265382, 'valr2': 0.32129932361229274}

Is the file running correctly?
In case run.py isn't running correctly, could you please explain more clearly how to place all the "Dataset" files according to the "Full structure of workplace"? Can you tell me in more detail where to put them, please?
Thanks so much for your work.

@yvquanli
Owner

Yes, it is running correctly.

@yvquanli
Owner

OK, I will write the dataset guide more clearly, with more details.

@vitorom-01
Author

Thank you so much for your work!
I have another question: how can I provide the inputs?
Is there a command I can use, or a guide, for example, that explains how to supply the molecules I want to study as input?

@yvquanli
Owner

yvquanli commented Jan 14, 2023

This may be a bit tricky, but you can try this idea.
First, you should process your data into a PyG dataset, as GLAM does,
and then you can try this pseudo code:

from torch_geometric.data import DataLoader  # torch_geometric.loader.DataLoader in newer PyG releases

# YOUR_PYG_DATASET: your molecules, already processed into a PyG dataset as GLAM does
test_dataloader = DataLoader(YOUR_PYG_DATASET)

trainer.load_best_ckpt()                    # `trainer` is GLAM's trainer object; restore the best checkpoint
trainer.test_dataloader = test_dataloader   # point the trainer at your data
trainer.valid_iterations(mode='inference')  # run inference on it
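
For the first step, a rough sketch of what a single PyG input could look like (an illustration only, assuming GLAM's featurization for this model: 15-dimensional node features and 4-dimensional edge features, as in the architecture printout above; in practice, reuse GLAM's own dataset code so the features match exactly):

import torch
from torch_geometric.data import Data, DataLoader

# One toy molecule graph: 3 atoms, 2 bonds (each bond stored in both directions).
mol = Data(
    x=torch.randn(3, 15),                    # 15 node features per atom
    edge_index=torch.tensor([[0, 1, 1, 2],
                             [1, 0, 2, 1]]), # [2, num_edges] bond connectivity
    edge_attr=torch.randn(4, 4),             # 4 edge features per directed bond
)

test_dataloader = DataLoader([mol], batch_size=1)  # a plain list of Data objects is accepted as a dataset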
