ImportError, libtorch_cuda.so: undefined symbol #5
Hi, there are some issues with the installation of PyTorch.
|
@Layne-Huang Thank you for your help! The error disappeared after reinstalling PyTorch. However, another error message appeared. When I ran it, the log output was:
[2024-04-02 13:28:17,151::test::INFO] Namespace(pdb_path='protein/mypro_nolig.pdb', num_atom=29, build_method='reconstruct', config=None, cuda=True, ckpt='500.pt', save_traj=False, num_samples=100, batch_size=100, resume=None, tag='', clip=1000.0, n_steps=1000, global_start_sigma=inf, w_global_pos=1.0, w_local_pos=1.0, w_global_node=1.0, w_local_node=1.0, sampling_type='generalized', eta=1.0)
[2024-04-02 13:28:17,151::test::INFO] {'model': {'type': 'diffusion', 'network': 'MDM_full_pocket_coor_shared', 'hidden_dim': 128, 'protein_hidden_dim': 128, 'num_convs': 3, 'num_convs_local': 3, 'protein_num_convs': 2, 'cutoff': 3.0, 'g_cutoff': 6.0, 'encoder_cutoff': 6.0, 'time_emb': True, 'atom_num_emb': False, 'mlp_act': 'relu', 'beta_schedule': 'sigmoid', 'beta_start': 1e-07, 'beta_end': 0.002, 'num_diffusion_timesteps': 1000, 'edge_order': 3, 'edge_encoder': 'mlp', 'smooth_conv': False, 'num_layer': 9, 'feats_dim': 5, 'soft_edge': True, 'norm_coors': True, 'm_dim': 128, 'context': 'None', 'vae_context': False, 'num_atom': 10, 'protein_feature_dim': 31}, 'train': {'seed': 2021, 'batch_size': 16, 'val_freq': 250, 'max_iters': 500, 'max_grad_norm': 10.0, 'num_workers': 4, 'anneal_power': 2.0, 'optimizer': {'type': 'adam', 'lr': 0.001, 'weight_decay': 0.0, 'beta1': 0.95, 'beta2': 0.999}, 'scheduler': {'type': 'plateau', 'factor': 0.6, 'patience': 10, 'min_lr': 1e-06}, 'transform': {'mask': {'type': 'mixed', 'min_ratio': 0.0, 'max_ratio': 1.2, 'min_num_masked': 1, 'min_num_unmasked': 0, 'p_random': 0.5, 'p_bfs': 0.25, 'p_invbfs': 0.25}, 'contrastive': {'num_real': 50, 'num_fake': 50, 'pos_real_std': 0.05, 'pos_fake_std': 2.0}}}, 'dataset': {'name': 'crossdock', 'type': 'pl', 'path': './data/crossdocked_pocket10', 'split': './data/split_by_name.pt'}}
[2024-04-02 13:28:17,151::test::INFO] Loading crossdock data... |
Yep, I got the same error when I just tried this procedure. |
I have revised the file. It should work now. |
Thanks! @Layne-Huang I did a quick test and got the error below:
|
Mine showed the same error as @wenchangzhou-qtx reported. The error message is shown below:
|
Hi, I have updated the file. If you meet this error in "self.propagate(edge_index, x=feats, edge_attr=edge_attr_feats,)", please downgrade your torch_geometric version to 2.4.0. Please let me know if there is any other issue. |
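(For reference, a minimal check that the downgrade actually took effect in the active environment; this only uses the standard version attributes of both packages:)
import torch
import torch_geometric

# Print the versions the current environment actually loads; the suggestion
# above targets torch_geometric 2.4.0.
print("torch:", torch.__version__)
print("torch_geometric:", torch_geometric.__version__)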
@Layne-Huang NICE! It's working for me now. |
@Layne-Huang Thank you for your help. That error was solved, but another error message appeared, as mentioned in #9. I followed the suggestion to keep only the residues within 10 angstroms of the ligand as the protein pocket, and then another error message appeared, as you mentioned.
I'm not an expert in this; after searching for a solution, I suspect it may be related to the CUDA version. I'm using CUDA 12.2 now. |
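(For anyone hitting the same thing, a small sanity check that helps narrow down this kind of CUDA mismatch; it only uses standard PyTorch introspection, nothing specific to PMDM:)
import torch

# The ImportError in this issue's title ("undefined symbol ... libcudart.so.11.0")
# typically points at a torch wheel built against a different CUDA runtime than
# the one installed; these lines show what the active wheel expects.
print("torch:", torch.__version__)
print("built against CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())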
I think the code in our file should be |
@Layne-Huang Thank you! I have updated the script and now it works. However, the sampled molecules didn't seem fine. The output of sampling for my pocket file is shown below:
Can you please help with it? Thank you! |
Could you please send me your pdb file? I will test it.
Best regards,
Lei
________________________________
@Layne-Huang<https://github.com/Layne-Huang> Thank you! I have updated the script and now it works. However, the sampled molecules didn't seem fine. The output of sampling for my pocket file is shown below:
python -u sample_for_pdb.py --ckpt 500.pt --pdb_path pro/pro_chainB_pocket.pdb --num_atom 70 --num_samples 10 --sampling_type generalized
sh: 1: module: not found
Entropy of n_nodes: H[N] -1.3862943649291992
[2024-04-09 13:51:59,066::test::INFO] Namespace(pdb_path='8etr/7PZC_chainB_hbondopt-pocket.pdb', sdf_path=None, num_atom=70, build_method='reconstruct', config=None, cuda=True, ckpt='500.pt', save_traj=False, num_samples=10, batch_size=10, resume=None, tag='', clip=1000.0, n_steps=1000, global_start_sigma=inf, w_global_pos=1.0, w_local_pos=1.0, w_global_node=1.0, w_local_node=1.0, sampling_type='generalized', eta=1.0)
[2024-04-09 13:51:59,066::test::INFO] {'model': {'type': 'diffusion', 'network': 'MDM_full_pocket_coor_shared', 'hidden_dim': 128, 'protein_hidden_dim': 128, 'num_convs': 3, 'num_convs_local': 3, 'protein_num_convs': 2, 'cutoff': 3.0, 'g_cutoff': 6.0, 'encoder_cutoff': 6.0, 'time_emb': True, 'atom_num_emb': False, 'mlp_act': 'relu', 'beta_schedule': 'sigmoid', 'beta_start': 1e-07, 'beta_end': 0.002, 'num_diffusion_timesteps': 1000, 'edge_order': 3, 'edge_encoder': 'mlp', 'smooth_conv': False, 'num_layer': 9, 'feats_dim': 5, 'soft_edge': True, 'norm_coors': True, 'm_dim': 128, 'context': 'None', 'vae_context': False, 'num_atom': 10, 'protein_feature_dim': 31}, 'train': {'seed': 2021, 'batch_size': 16, 'val_freq': 250, 'max_iters': 500, 'max_grad_norm': 10.0, 'num_workers': 4, 'anneal_power': 2.0, 'optimizer': {'type': 'adam', 'lr': 0.001, 'weight_decay': 0.0, 'beta1': 0.95, 'beta2': 0.999}, 'scheduler': {'type': 'plateau', 'factor': 0.6, 'patience': 10, 'min_lr': 1e-06}, 'transform': {'mask': {'type': 'mixed', 'min_ratio': 0.0, 'max_ratio': 1.2, 'min_num_masked': 1, 'min_num_unmasked': 0, 'p_random': 0.5, 'p_bfs': 0.25, 'p_invbfs': 0.25}, 'contrastive': {'num_real': 50, 'num_fake': 50, 'pos_real_std': 0.05, 'pos_fake_std': 2.0}}}, 'dataset': {'name': 'crossdock', 'type': 'pl', 'path': './data/crossdocked_pocket10', 'split': './data/split_by_name.pt'}}
[2024-04-09 13:51:59,066::test::INFO] Loading crossdock data...
Entropy of n_nodes: H[N] -3.543935775756836
[2024-04-09 13:51:59,066::test::INFO] Loading data...
[2024-04-09 13:51:59,105::test::INFO] Building model...
[2024-04-09 13:51:59,105::test::INFO] MDM_full_pocket_coor_shared
{'type': 'diffusion', 'network': 'MDM_full_pocket_coor_shared', 'hidden_dim': 128, 'protein_hidden_dim': 128, 'num_convs': 3, 'num_convs_local': 3, 'protein_num_convs': 2, 'cutoff': 3.0, 'g_cutoff': 6.0, 'encoder_cutoff': 6.0, 'time_emb': True, 'atom_num_emb': False, 'mlp_act': 'relu', 'beta_schedule': 'sigmoid', 'beta_start': 1e-07, 'beta_end': 0.002, 'num_diffusion_timesteps': 1000, 'edge_order': 3, 'edge_encoder': 'mlp', 'smooth_conv': False, 'num_layer': 9, 'feats_dim': 5, 'soft_edge': True, 'norm_coors': True, 'm_dim': 128, 'context': 'None', 'vae_context': False, 'num_atom': 10, 'protein_feature_dim': 31}
sdf idr: 8etr/generate_ref
Entropy of n_nodes: H[N] -3.543935775756836
100%|███████████████████████████████████████████████████████| 2/2 [00:00<00:00, 202.81it/s]
0%| | 0/2 [00:00<?, ?it/s]1
/media/data/software/conda/PMDM-main/models/common.py:485: UserWarning: torch.sparse.SparseTensor(indices, values, shape, *, device=) is deprecated. Please use torch.sparse_coo_tensor(indices, values, shape, dtype=, device=). (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:618.)
bgraph_adj = torch.sparse.LongTensor(
sample: 1000it [00:49, 20.28it/s]
/media/data/software/conda/PMDM-main/sample_for_pdb.py:391: DeprecationWarning: `np.long` is a deprecated alias for `np.compat.long`. To silence this warning, use `np.compat.long` by itself. In the likely event your code does not need to work on Python 2 you can use the builtin `int` for which `np.compat.long` is itself an alias. Doing this will not modify any behaviour and is safe. When replacing `np.long`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
indicators = torch.zeros([pos.size(0), len(ATOM_FAMILIES)], dtype=np.long)
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C#C.CC1CCC(O)C1.N.N.O.O.OO
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.N.O=CCO
==============================
Open Babel Warning in PerceiveBondOrders
Failed to kekulize aromatic bonds in OBMol::PerceiveBondOrders
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.CC1C=CC=C1.N.N.N.N
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C=CC.N.O
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C#C.N.N.N.N.N.N.O
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.CCC.N.N.N
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.N.N
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C#C.N.N.N
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C#C.CO.N.N.O
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.CC.O
50%|████████████████████████████ | 1/2 [00:49<00:49, 49.65s/it]1
sample: 1000it [00:49, 20.30it/s]
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.CO.N
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.N.N.O.O.OO
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.CC#N
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.N.N.N.OO
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.N.N.N.N
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C=O.CC.N.N.O
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.Cc1ccccc1.N.N.N.N.N.O
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.CO.N.N.N.N
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C1CCCC1.N.N.N
generated smile: C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.CO.N.N.N.N
100%|████████████████████████████████████████████████████████| 2/2 [01:38<00:00, 49.48s/it]
[2024-04-09 13:53:38,133::test::INFO] valid:20
[2024-04-09 13:53:38,133::test::INFO] stable:0
Can you please help with it? Thank you!
|
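A side note on the two deprecation warnings in the log above: they do not affect the sampling run itself, but a minimal sketch of the modern replacements (the names bgraph_adj and indicators come from the warnings; the index values and sizes below are placeholders, not what PMDM actually builds) would be:
import torch

# torch.sparse.LongTensor(indices, values, shape) is deprecated; the documented
# replacement is torch.sparse_coo_tensor with an explicit dtype.
indices = torch.tensor([[0, 1], [1, 0]])  # placeholder edge indices
values = torch.ones(2, dtype=torch.long)
bgraph_adj = torch.sparse_coo_tensor(indices, values, (2, 2))

# np.long no longer exists in recent NumPy; torch.zeros wants a torch dtype
# anyway, so torch.long is the natural substitute for the indicators tensor.
num_pos, num_families = 4, 8  # placeholder sizes
indicators = torch.zeros([num_pos, num_families], dtype=torch.long)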
Our model is pretrained on the CrossDocked dataset, in which the average molecule length is 26 atoms. There are only 6 molecules longer than 70 atoms in the training set. Please decrease the |
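If it helps to choose a value inside that training range, one rough approach is to count the heavy atoms of a known binder for the pocket (a sketch assuming RDKit is available; reference_ligand.sdf is a hypothetical file name, not something shipped with this repo):
from rdkit import Chem

# Hypothetical reference ligand for the pocket; replace with your own SDF file.
mol = Chem.MolFromMolFile("reference_ligand.sdf")
if mol is not None:
    # The heavy-atom count gives a rough value for --num_atom; values near the
    # CrossDocked average mentioned above (~26) stay within the training range.
    print("heavy atoms:", mol.GetNumHeavyAtoms())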
@Layne-Huang, on this topic, when generating molecules, should I pay attention to the total number of atoms or just the number in the part I'm trying to generate (e.g. a fragment)? |
|
@Layne-Huang Thank you! It works now. |
Hi,
After reading your article published in NC, I have set up and downloaded the relevant files to generate molecules for my protein. However, when I used
python -u sample_for_pdb.py --ckpt 500.pt --pdb_path protein/mypro_nolig.pdb --num_atoms 70 --num_samples 100 --sampling_type generalized
, an error message appeared on the screen:
Traceback (most recent call last):
  File "/media/data/software/conda/PMDM-main/sample_for_pdb.py", line 10, in <module>
    from evaluation import *
  File "/media/data/software/conda/PMDM-main/evaluation/__init__.py", line 1, in <module>
    from .evaluation_metrics import *
  File "/media/data/software/conda/PMDM-main/evaluation/evaluation_metrics.py", line 7, in <module>
    import torch
  File "/home/anaconda3/envs/mol/lib/python3.9/site-packages/torch/__init__.py", line 235, in <module>
    from torch._C import *  # noqa: F403
ImportError: /home/anaconda3/envs/mol/lib/python3.9/site-packages/torch/lib/libtorch_cuda.so: undefined symbol: cudaLaunchKernelExC, version libcudart.so.11.0
Can you please help me to solve this problem? Thanks!