Hi Yinan,

Thanks for the great work, and for providing your code and checkpoint so that others can reproduce 3DLinker.
First, I follow the instructions in the README to reproduce the experimental results in the paper.
Specifically, I use the `pretrained_model.pickle` in the `check_points` directory and run `python3 main.py --dataset zinc --config-file test_config.json --generation True --load_cpt ./check_points/pretrained_model.pickle` to generate the candidates, and `python3 evaluate_generated_mols.py ZINC ../generated_samples/generated_smiles_zinc.smi ../zinc/smi_train.txt 1 True None ./wehi_pains.csv` to evaluate them.
But I get the following results:
Pass all 2D filters: 0.00%
Valid and pass all 2D filters: 0.00%
Pass synthetic accessibility (SA) filter: 0.00%
Pass ring aromaticity filter: 0.00%
Pass SA and ring filters: 0.00%
Pass PAINS filters: 0.00%
Average RMSD is 0.068152
Is there anything wrong with my procedure?
Second, I notice there is a step that processes the raw graphs of the training data while initializing `Linker(args)`.
How long should processing the ZINC data take? In my environment it takes around 4 hours; is that normal? Could the preprocessing be done just once and saved before initializing the Linker?
Third, I find that re-training 3DLinker may take a very long time. Could you share more details about reproducing 3DLinker, such as the training environment, batch_size, number of epochs, and total training time?
I really appreciate any help you can provide.
Thank you in advance!
Best wishes
Yu