Get a ValueError when I try to run helixfold-single. #219
Comments
FROM registry.baidubce.com/paddlepaddle/paddle:2.3.2-gpu-cuda11.2-cudnn8
COPY requirements.txt /tmp/requirements.txt
WORKDIR /tmp
RUN pip install --no-cache-dir -r /tmp/requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
RUN wget https://baidu-nlp.bj.bcebos.com/PaddleHelix/HelixFold/paddlepaddle_gpu-0.0.0-cp37-cp37m-linux_x86_64.whl
RUN python -m pip install paddlepaddle_gpu-0.0.0-cp37-cp37m-linux_x86_64.whl
COPY ./ /tmp/helix/
WORKDIR /tmp/helix

Forgot to mention: this is my Dockerfile.
I tried changing the Paddle version: I used Paddle 2.3 instead of the whl given in the README, and got this traceback:
Traceback (most recent call last):
File "helixfold_single_inference.py", line 121, in <module>
main(args)
File "helixfold_single_inference.py", line 106, in main
results = model(batch, compute_loss=False)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
return self.forward(*inputs, **kwargs)
File "/tmp/helix/utils/model_tape.py", line 124, in forward
compute_loss=compute_loss)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
return self.forward(*inputs, **kwargs)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 214, in forward
ret = _run_single_recycling(prev, recycle_idx, compute_loss=False)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 187, in _run_single_recycling
ensemble_representations=ensemble_representations)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
return self.forward(*inputs, **kwargs)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 288, in forward
representations = self.evoformer(batch0)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
return self.forward(*inputs, **kwargs)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 1830, in forward
is_recompute=self.training and idx >= self.config.evoformer_recompute_start_block_index)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 46, in recompute_wrapper
return func(*args)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
return self.forward(*inputs, **kwargs)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 1440, in forward
msa_act, msa_mask, pair_act)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
return self.forward(*inputs, **kwargs)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 648, in forward
msa_act = sb_attn(msa_act, msa_act, bias, nonbatched_bias)
File "/tmp/helix/alphafold_paddle/model/utils.py", line 240, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
return self.forward(*inputs, **kwargs)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 472, in forward
_, _, _, _, _, _, _, output = paddle._C_ops.fused_gate_attention(
AttributeError: module 'paddle.fluid.libpaddle.eager.ops' has no attribute 'fused_gate_attention'

What's wrong?
Hi there, thanks for your feedback. It seems you have the wrong version of PaddlePaddle for your environment. You can refer to this site for a suitable version.
OK, I followed the README file and chose paddlepaddle_gpu-0.0.0.post112-cp37-cp37m-linux_x86_64.whl, but got the same traceback:
Traceback (most recent call last):
File "helixfold_single_inference.py", line 121, in <module>
main(args)
File "helixfold_single_inference.py", line 106, in main
results = model(batch, compute_loss=False)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
return self.forward(*inputs, **kwargs)
File "/tmp/helix/utils/model_tape.py", line 124, in forward
compute_loss=compute_loss)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
return self.forward(*inputs, **kwargs)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 214, in forward
ret = _run_single_recycling(prev, recycle_idx, compute_loss=False)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 187, in _run_single_recycling
ensemble_representations=ensemble_representations)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
return self.forward(*inputs, **kwargs)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 288, in forward
representations = self.evoformer(batch0)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
return self.forward(*inputs, **kwargs)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 1830, in forward
is_recompute=self.training and idx >= self.config.evoformer_recompute_start_block_index)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 46, in recompute_wrapper
return func(*args)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
return self.forward(*inputs, **kwargs)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 1440, in forward
msa_act, msa_mask, pair_act)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
return self.forward(*inputs, **kwargs)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 648, in forward
msa_act = sb_attn(msa_act, msa_act, bias, nonbatched_bias)
File "/tmp/helix/alphafold_paddle/model/utils.py", line 240, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
return self.forward(*inputs, **kwargs)
File "/tmp/helix/alphafold_paddle/model/modules.py", line 472, in forward
_, _, _, _, _, _, _, output = paddle._C_ops.fused_gate_attention(
AttributeError: module 'paddle.fluid.libpaddle.eager.ops' has no attribute 'fused_gate_attention'

Do you have any other advice?
There has been an update to the PaddlePaddle framework, and some ops have changed. I have updated the scripts related to this issue. You can also try adding the following lines at the head of the script:

    try:
        from paddle import _legacy_C_ops as _C_ops
    except ImportError:
        from paddle import _C_ops

and then change the ops accordingly.
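The try/except fallback above can also be wrapped in a small helper, so calling code picks the right ops namespace regardless of the installed Paddle version. This is a sketch only: `load_c_ops` and its use of `importlib` are my own illustration, not part of the HelixFold scripts.

```python
import importlib


def load_c_ops(module_name="paddle"):
    """Return Paddle's legacy C-ops namespace when the installed build
    provides one (newer Paddle renamed it to `_legacy_C_ops`), otherwise
    fall back to `_C_ops`. Returns None if Paddle is not installed."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return None
    # Prefer the legacy namespace, mirroring the try/except suggested above.
    return getattr(mod, "_legacy_C_ops", None) or getattr(mod, "_C_ops", None)
```

With this helper, `_C_ops = load_c_ops()` replaces the inline try/except and degrades to `None` instead of crashing when Paddle is absent.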
Thank you! I will try it and give you feedback.
Thank you very much, it works!
Sounds good. I will close this issue for now. Please feel free to reopen if you have other questions later. |
Sorry guys, I'm back again. At the end, I get this error:

File "/tmp/helix/alphafold_paddle/model/modules.py", line 478, in forward
self.output_w, self.output_b, 'has_gating', self.config.gating, 'merge_qkv', self.merge_qkv)
RuntimeError: (NotFound) Operator fused_gate_attention does not have kernel for {data_type[float]; data_layout[Undefined(AnyLayout)]; place[Place(cpu)]; library_type[PLAIN]}.
[Hint: Expected kernel_iter != kernels.end(), but received kernel_iter == kernels.end().] (at /paddle/paddle/fluid/imperative/prepared_operator.cc:435)
[operator < fused_gate_attention > error]

What's the problem?
OK, I figured it out: when I use nvidia-docker, it works well. Just as the exception shows, the CPU build doesn't have a kernel for this operator.
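The kernel-not-found error above could be caught earlier with a preflight check. A minimal sketch, assuming Paddle's documented `is_compiled_with_cuda()` API; the helper and its name are my own illustration, and it takes the module as a parameter only to make the idea explicit:

```python
def assert_gpu_build(paddle_mod):
    """Raise early if the installed PaddlePaddle wheel is CPU-only.
    fused_gate_attention only ships a GPU kernel, so a CPU build fails
    later with the (NotFound) kernel error shown above."""
    if not paddle_mod.is_compiled_with_cuda():
        raise RuntimeError(
            "This PaddlePaddle build has no CUDA support; "
            "install a GPU wheel and run inside nvidia-docker."
        )


# Intended usage (assumes paddle is installed in the environment):
#   import paddle
#   assert_gpu_build(paddle)
```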
I used the Paddle 2.3 CUDA 11.2 Linux docker image.
I installed the dependencies according to the README, and I downloaded the official init model. But when I run the code, I get the ValueError.
The code is PaddleHelix/apps/protein_folding/helixfold-single/helixfold_single_inference.py.
Like robin said, what's the problem?