
where is the docked pose? #4

Open
mycode-bit opened this issue Mar 3, 2022 · 8 comments

Comments

@mycode-bit

It's me again; I have another question.

I am trying to reproduce your results from test_sets_pdb, but in the folder test_sets_pdb/db5_equidock_results I do not see the docked complex structures. Where are they?

In the folder db5_test_random_transformed there is a subfolder called complexes. If we are docking, why do we need the complexes at all?

Your help is greatly appreciated.

@octavian-ganea
Owner

The ligand is docked to the bound receptor, so only the final ligand file is the output. See https://github.com/octavian-ganea/equidock_public/blob/main/src/inference_rigid.py#L136 .

The initial bound ligand is not needed for inference alone; I kept it in the complexes folder for evaluation purposes only.
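
If you want to reproduce the evaluation yourself, a rough sketch like the one below works (using biopandas; the file names are placeholders and it assumes both PDBs list the same residues in the same order):

# Rough evaluation sketch, not part of the repo: compare a predicted ligand
# against the bound ligand from the complexes folder via C-alpha RMSD.
# File names are placeholders; substitute your actual files.
import numpy as np
from biopandas.pdb import PandasPdb

def ca_coords(pdb_path):
    # C-alpha coordinates as an (N, 3) array
    atoms = PandasPdb().read_pdb(pdb_path).df['ATOM']
    ca = atoms[atoms['atom_name'] == 'CA']
    return ca[['x_coord', 'y_coord', 'z_coord']].to_numpy()

pred = ca_coords('test_sets_pdb/db5_equidock_results/PREDICTED_LIGAND.pdb')               # placeholder path
true = ca_coords('test_sets_pdb/db5_test_random_transformed/complexes/BOUND_LIGAND.pdb')  # placeholder path

# only meaningful if both files contain the same atoms in the same order
rmsd = np.sqrt(((pred - true) ** 2).sum(axis=1).mean())
print('ligand RMSD:', rmsd)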

@mycode-bit
Author

Thanks for your explanation. Now I get another error when I run inference_rigid.py.
Could it be coming from a dependency?
I cannot figure out what is wrong.
Thanks in advance for your help.


Parsing args
Available GPUS:0
EQUIDOCK__drp_0.0#Wdec_0.0001#ITS_lw_10.0#Hdim_64#Nlay_5#shrdLay_F#SURFfs_F#ln_LN#lnX_0#Hnrm_0#NattH_50#skH_0.5#xConnI_0.0#LkySl_0.01#pokOTw_1.0#divXdist_F#
[2022-03-07 07:24:44.417504] Model name ===> EQUIDOCK__drp_0.0#Wdec_0.0001#ITS_lw_10.0#Hdim_64#Nlay_5#shrdLay_F#SURFfs_F#ln_LN#lnX_0#Hnrm_0#NattH_50#skH_0.5#xConnI_0.0#LkySl_0.01#pokOTw_1.0#divXdist_F#
checkpoint_filename = checkpts/oct20_Wdec_0.001#ITS_lw_10.0#Hdim_64#Nlay_5#shrdLay_T#ln_LN#lnX_0#Hnrm_0#NattH_50#skH_0.5#xConnI_0.0#LkySl_0.01#pokOTw_1.0#fine_F#/db5_model_best.pth
[2022-03-07 07:24:44.484702] Number of parameters = 525,671
LN 0 0 10.0
divide_coors_dist = False
inference on file = ./test_sets_pdb/db5_test_random_transformed/random_transformed/3SZK_l_b.pdb
Traceback (most recent call last):
File "src/inference_rigid.py", line 147, in
main(args)
File "src/inference_rigid.py", line 123, in main
all_rotation_list, all_translation_list = model(batch_hetero_graph, epoch=0)
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user/equidock_public-main/src/model/rigid_docking_model.py", line 647, in forward
outputs = iegmn(batch_hetero_graph, epoch)
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user/equidock_public-main/src/model/rigid_docking_model.py", line 490, in forward
h_feats_receptor = layer(hetero_graph=batch_hetero_graph,
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user/equidock_public-main/src/model/rigid_docking_model.py", line 274, in forward
hetero_graph.update_all(fn.copy_edge('x_moment', 'm'), fn.mean('m', 'x_update'),
File "/home/user/anaconda3/lib/python3.8/site-packages/dgl/heterograph.py", line 4876, in update_all
ndata = core.message_passing(g, message_func, reduce_func, apply_node_func)
File "/home/user/anaconda3/lib/python3.8/site-packages/dgl/core.py", line 357, in message_passing
ndata = invoke_gspmm(g, mfunc, rfunc)
File "/home/user/anaconda3/lib/python3.8/site-packages/dgl/core.py", line 332, in invoke_gspmm
z = op(graph, x)
File "/home/user/anaconda3/lib/python3.8/site-packages/dgl/ops/spmm.py", line 191, in func
return gspmm(g, 'copy_rhs', reduce_op, None, x)
File "/home/user/anaconda3/lib/python3.8/site-packages/dgl/ops/spmm.py", line 75, in gspmm
ret = gspmm_internal(g._graph, op,
File "/home/user/anaconda3/lib/python3.8/site-packages/dgl/backend/pytorch/sparse.py", line 757, in gspmm
return GSpMM.apply(gidx, op, reduce_op, lhs_data, rhs_data)
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 94, in decorate_fwd
return fwd(*args, **kwargs)
File "/home/user/anaconda3/lib/python3.8/site-packages/dgl/backend/pytorch/sparse.py", line 126, in forward
out, (argX, argY) = _gspmm(gidx, op, reduce_op, X, Y)
File "/home/user/anaconda3/lib/python3.8/site-packages/dgl/sparse.py", line 228, in _gspmm
_CAPI_DGLKernelSpMM(gidx, op, reduce_op,
File "dgl/_ffi/_cython/./function.pxi", line 287, in dgl._ffi._cy3.core.FunctionBase.call
File "dgl/_ffi/_cython/./function.pxi", line 232, in dgl._ffi._cy3.core.FuncCall
File "dgl/_ffi/_cython/./base.pxi", line 155, in dgl._ffi._cy3.core.CALL
dgl._ffi.base.DGLError: [07:24:49] /opt/dgl/src/array/cpu/./spmm_blocking_libxsmm.h:267: Failed to generate libxsmm kernel for the SpMM operation!
Stack trace:
[bt] (0) /home/user/anaconda3/lib/python3.8/site-packages/dgl/libdgl.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x4f) [0x7fdc8e6b743f]
[bt] (1) /home/user/anaconda3/lib/python3.8/site-packages/dgl/libdgl.so(void dgl::aten::cpu::SpMMRedopCsrOpt<int, float, dgl::aten::cpu::op::CopyRhs<float>, dgl::aten::cpu::op::Add<float> >(dgl::BcastOff const&, dgl::aten::CSRMatrix const&, dgl::runtime::NDArray, dgl::runtime::NDArray, dgl::runtime::NDArray, dgl::runtime::NDArray, dgl::runtime::NDArray)+0x3bc) [0x7fdc8e8e9d5c]
[bt] (2) /home/user/anaconda3/lib/python3.8/site-packages/dgl/libdgl.so(void dgl::aten::cpu::SpMMSumCsrLibxsmm<int, float, dgl::aten::cpu::op::CopyRhs<float> >(dgl::BcastOff const&, dgl::aten::CSRMatrix const&, dgl::runtime::NDArray, dgl::runtime::NDArray, dgl::runtime::NDArray)+0x73) [0x7fdc8e8e9e03]
[bt] (3) /home/user/anaconda3/lib/python3.8/site-packages/dgl/libdgl.so(void dgl::aten::cpu::SpMMSumCsr<int, float, dgl::aten::cpu::op::CopyRhs<float> >(dgl::BcastOff const&, dgl::aten::CSRMatrix const&, dgl::runtime::NDArray, dgl::runtime::NDArray, dgl::runtime::NDArray)+0x146) [0x7fdc8e915106]
[bt] (4) /home/user/anaconda3/lib/python3.8/site-packages/dgl/libdgl.so(void dgl::aten::SpMMCsr<1, int, 32>(std::string const&, std::string const&, dgl::BcastOff const&, dgl::aten::CSRMatrix const&, dgl::runtime::NDArray, dgl::runtime::NDArray, dgl::runtime::NDArray, std::vector<dgl::runtime::NDArray, std::allocator<dgl::runtime::NDArray> >)+0xfeb) [0x7fdc8e921d2b]
[bt] (5) /home/user/anaconda3/lib/python3.8/site-packages/dgl/libdgl.so(dgl::aten::SpMM(std::string const&, std::string const&, std::shared_ptr<dgl::BaseHeteroGraph>, dgl::runtime::NDArray, dgl::runtime::NDArray, dgl::runtime::NDArray, std::vector<dgl::runtime::NDArray, std::allocator<dgl::runtime::NDArray> >)+0x1004) [0x7fdc8e95a8e4]
[bt] (6) /home/user/anaconda3/lib/python3.8/site-packages/dgl/libdgl.so(+0x46a098) [0x7fdc8e96f098]
[bt] (7) /home/user/anaconda3/lib/python3.8/site-packages/dgl/libdgl.so(+0x46a631) [0x7fdc8e96f631]
[bt] (8) /home/user/anaconda3/lib/python3.8/site-packages/dgl/libdgl.so(DGLFuncCall+0x48) [0x7fdc8e9c28b8]

@octavian-ganea
Owner

This looks like a DGL error. Do you have the same package versions as stated in our README?

@mycode-bit
Author

Now it is dgl 0.7. Initially I installed 0.8, but I have downgraded to 0.7 again and the error still shows up.

@octavian-ganea
Owner

All packages need to be the same versions, not just DGL. Can you show your package versions for those listed in the README?
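
For example, something like this prints them (a quick sketch; it just assumes the packages are importable under these names):

# Dump versions of the packages listed in the README (import names assumed).
import importlib

for name in ['numpy', 'torch', 'dgl', 'biopandas', 'ot', 'rdkit', 'dgllife', 'joblib']:
    try:
        mod = importlib.import_module(name)
        print(name, getattr(mod, '__version__', 'no __version__ attribute'))
    except ImportError:
        print(name, 'not installed')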

@octavian-ganea
Owner

Also, FYI, this code was not tested on Windows.

@mycode-bit
Author

Here is the list; I guess rdkit does not match.

python==3.8.8
numpy==1.20.1
cuda==10.1
torch==1.10.2
dgl==0.7.0
biopandas==0.2.8
ot==0.7.0
rdkit==2019.09.3
dgllife==0.2.8
joblib==1.1.0

@octavian-ganea
Owner

It really looks to me like a DGL error. Can you try testing the hetero_graph.update_all() function in a separate toy script?
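
Something minimal like this would exercise the same code path (a sketch assuming dgl 0.7.x with the PyTorch backend; the small graph and the 'x_moment' feature name are just made up to mirror the call that fails in rigid_docking_model.py):

# Toy test of update_all() with the same message/reduce functions as the failing call.
import torch
import dgl
import dgl.function as fn

# small graph: 4 nodes, 6 directed edges
src = torch.tensor([0, 1, 2, 3, 0, 2])
dst = torch.tensor([1, 2, 3, 0, 2, 0])
g = dgl.graph((src, dst))

# per-edge 3D features, standing in for the 'x_moment' coordinates in the model
g.edata['x_moment'] = torch.randn(g.num_edges(), 3)

# copy edge features into messages, then average them onto destination nodes;
# on CPU this goes through the same SpMM kernel that raises the libxsmm error above
g.update_all(fn.copy_edge('x_moment', 'm'), fn.mean('m', 'x_update'))
print(g.ndata['x_update'])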
