Get a ValueError when I try to run helixfold-single. #219

Closed
yangjinhaoo opened this issue Sep 27, 2022 · 10 comments

@yangjinhaoo

I'm using the Paddle 2.3, CUDA 11.2 Linux Docker image.
I installed the dependencies according to the README and downloaded the official init model. But when I run the code, I get a ValueError.
The script is PaddleHelix/apps/protein_folding/helixfold-single/helixfold_single_inference.py.

Like robin said, what's the problem?

2022-09-26 09:24:25.062647: I tensorflow/core/platform/profile_utils/cpu_utils.cc:114] CPU Frequency: 2500000000 Hz
/usr/local/lib/python3.7/dist-packages/paddle/fluid/framework.py:3623: DeprecationWarning: Op `slice` is executed through `append_op` under the dynamic mode, the corresponding API implementation needs to be upgraded to using `_C_ops` method.
  "using `_C_ops` method." % type, DeprecationWarning)
Traceback (most recent call last):
  File "helixfold_single_inference.py", line 121, in <module>
    main(args)
  File "helixfold_single_inference.py", line 106, in main
    results = model(batch, compute_loss=False)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 929, in __call__
    return self._dygraph_call_func(*inputs, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 914, in _dygraph_call_func
    outputs = self.forward(*inputs, **kwargs)
  File "/tmp/helix/utils/model_tape.py", line 115, in forward
    batch = self._forward_tape(batch)
  File "/tmp/helix/utils/model_tape.py", line 98, in _forward_tape
    return_representations=True, return_last_n_weight=self.model_config.last_n_weight)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 929, in __call__
    return self._dygraph_call_func(*inputs, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 914, in _dygraph_call_func
    outputs = self.forward(*inputs, **kwargs)
  File "/tmp/helix/tape/others/protein_sequence_model_dynamic.py", line 218, in forward
    return_last_n_weight=return_last_n_weight)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 929, in __call__
    return self._dygraph_call_func(*inputs, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 914, in _dygraph_call_func
    outputs = self.forward(*inputs, **kwargs)
  File "/tmp/helix/tape/others/transformer_block.py", line 530, in forward
    is_recompute=self.training)
  File "/tmp/helix/tape/others/transformer_block.py", line 26, in recompute_wrapper
    return func(*args)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 929, in __call__
    return self._dygraph_call_func(*inputs, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 914, in _dygraph_call_func
    outputs = self.forward(*inputs, **kwargs)
  File "/tmp/helix/tape/others/transformer_block.py", line 480, in forward
    attn_results = self.self_attn(src, src, src, src_mask, relative_pos, rel_embeddings)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 929, in __call__
    return self._dygraph_call_func(*inputs, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 914, in _dygraph_call_func
    outputs = self.forward(*inputs, **kwargs)
  File "/tmp/helix/tape/others/transformer_block.py", line 398, in forward
    rel_att = self.disentangled_attention_bias(query_layer, key_layer, relative_pos, rel_embeddings, scale_factor)
  File "/tmp/helix/tape/others/transformer_block.py", line 367, in disentangled_attention_bias
    c2p_att = self.gather_4d(c2p_att, index=c2p_gather_idx)
  File "/tmp/helix/tape/others/transformer_block.py", line 343, in gather_4d
    stack_0 = paddle.tile(paddle.arange(start=0, end=a, step=1, dtype="float32").reshape([a, 1]), [b * c * d]).reshape([a, b, c, d]).cast(index.dtype)
  File "/usr/local/lib/python3.7/dist-packages/paddle/tensor/manipulation.py", line 3243, in reshape
    out, _ = _C_ops.reshape2(x, None, 'shape', shape)
ValueError: (InvalidArgument) The 'shape' in ReshapeOp is invalid. The input tensor X'size must be equal to the capacity of 'shape'. But received X's shape = [1, 1067237297], X's size = 1067237297, 'shape' is [1, 16, 2, 2], the capacity of 'shape' is 64.
  [Hint: Expected capacity == in_size, but received capacity:64 != in_size:1067237297.] (at /root/paddlejob/workspace/env_run/Paddle/paddle/fluid/operators/reshape_op.cc:204)
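
For reference, this is Paddle's generic reshape error: the target shape's capacity must equal the tensor's element count. A minimal, standalone sketch (not from the repo's code) that raises the same kind of ValueError, reusing the [16, 2, 2] shape from the message above:

import paddle

x = paddle.arange(0, 100, dtype="float32")  # 100 elements
try:
    # [16, 2, 2] only holds 64 elements, so this fails with the same
    # "(InvalidArgument) ... capacity of 'shape'" ValueError as above.
    paddle.reshape(x, [16, 2, 2])
except ValueError as err:
    print(err)

The oddly huge reported size (1067237297) suggests the tensor produced by paddle.tile in gather_4d is not what the code expects on this Paddle build, which is consistent with the PaddlePaddle version issue suggested later in the thread.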
@yangjinhaoo
Author

FROM registry.baidubce.com/paddlepaddle/paddle:2.3.2-gpu-cuda11.2-cudnn8
COPY requirements.txt /tmp/requirements.txt

WORKDIR /tmp
RUN pip install --no-cache-dir -r /tmp/requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
RUN wget https://baidu-nlp.bj.bcebos.com/PaddleHelix/HelixFold/paddlepaddle_gpu-0.0.0-cp37-cp37m-linux_x86_64.whl
RUN python -m pip install paddlepaddle_gpu-0.0.0-cp37-cp37m-linux_x86_64.whl

COPY ./ /tmp/helix/
WORKDIR /tmp/helix

Forgot to mention it earlier: the above is my Dockerfile.

@yangjinhaoo
Author

I tried changing the Paddle version: I used Paddle 2.3 instead of the whl given in the README.
Then I got another error.

Traceback (most recent call last):
  File "helixfold_single_inference.py", line 121, in <module>
    main(args)
  File "helixfold_single_inference.py", line 106, in main
    results = model(batch, compute_loss=False)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
    return self.forward(*inputs, **kwargs)
  File "/tmp/helix/utils/model_tape.py", line 124, in forward
    compute_loss=compute_loss)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
    return self.forward(*inputs, **kwargs)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 214, in forward
    ret = _run_single_recycling(prev, recycle_idx, compute_loss=False)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 187, in _run_single_recycling
    ensemble_representations=ensemble_representations)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
    return self.forward(*inputs, **kwargs)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 288, in forward
    representations = self.evoformer(batch0)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
    return self.forward(*inputs, **kwargs)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 1830, in forward
    is_recompute=self.training and idx >= self.config.evoformer_recompute_start_block_index)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 46, in recompute_wrapper
    return func(*args)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
    return self.forward(*inputs, **kwargs)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 1440, in forward
    msa_act, msa_mask, pair_act)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
    return self.forward(*inputs, **kwargs)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 648, in forward
    msa_act = sb_attn(msa_act, msa_act, bias, nonbatched_bias)
  File "/tmp/helix/alphafold_paddle/model/utils.py", line 240, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
    return self.forward(*inputs, **kwargs)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 472, in forward
    _, _, _, _, _, _, _, output = paddle._C_ops.fused_gate_attention(
AttributeError: module 'paddle.fluid.libpaddle.eager.ops' has no attribute 'fused_gate_attention'

What's wrong?

@SuperXiang
Contributor

Hi there, thanks for your feedback. It looks like you have the wrong version of PaddlePaddle for your environment. You can refer to this site for a suitable version.

@yangjinhaoo
Author

OK, following the README, I chose paddlepaddle_gpu-0.0.0.post112-cp37-cp37m-linux_x86_64.whl
(CUDA 11.2, Python 3.7, Linux).
It still errors.

Traceback (most recent call last):
  File "helixfold_single_inference.py", line 121, in <module>
    main(args)
  File "helixfold_single_inference.py", line 106, in main
    results = model(batch, compute_loss=False)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
    return self.forward(*inputs, **kwargs)
  File "/tmp/helix/utils/model_tape.py", line 124, in forward
    compute_loss=compute_loss)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
    return self.forward(*inputs, **kwargs)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 214, in forward
    ret = _run_single_recycling(prev, recycle_idx, compute_loss=False)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 187, in _run_single_recycling
    ensemble_representations=ensemble_representations)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
    return self.forward(*inputs, **kwargs)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 288, in forward
    representations = self.evoformer(batch0)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
    return self.forward(*inputs, **kwargs)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 1830, in forward
    is_recompute=self.training and idx >= self.config.evoformer_recompute_start_block_index)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 46, in recompute_wrapper
    return func(*args)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
    return self.forward(*inputs, **kwargs)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 1440, in forward
    msa_act, msa_mask, pair_act)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
    return self.forward(*inputs, **kwargs)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 648, in forward
    msa_act = sb_attn(msa_act, msa_act, bias, nonbatched_bias)
  File "/tmp/helix/alphafold_paddle/model/utils.py", line 240, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/layers.py", line 948, in __call__
    return self.forward(*inputs, **kwargs)
  File "/tmp/helix/alphafold_paddle/model/modules.py", line 472, in forward
    _, _, _, _, _, _, _, output = paddle._C_ops.fused_gate_attention(
AttributeError: module 'paddle.fluid.libpaddle.eager.ops' has no attribute 'fused_gate_attention'

Do you have any other advice?
Or maybe you could try running helixfold-single in the official Docker image; I think you will be able to reproduce it.

@SuperXiang
Contributor

There has been an update to the PaddlePaddle framework and some ops have changed. I have updated the scripts related to this issue. You can also try adding the following lines at the top of the script modules.py.

try:
    # Newer Paddle builds keep the old-style ops under _legacy_C_ops
    from paddle import _legacy_C_ops as _C_ops
except ImportError:
    # Older builds still expose them as paddle._C_ops
    from paddle import _C_ops

Then change the calls from paddle._C_ops to _C_ops.
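
As a quick sanity check after making that change, here is a minimal sketch (the hasattr probe is only a diagnostic, not part of the repo's code) to confirm which module the fallback resolves to and whether the installed wheel actually exposes the fused op:

import paddle

try:
    # Newer Paddle builds keep the old-style ops under _legacy_C_ops
    from paddle import _legacy_C_ops as _C_ops
except ImportError:
    # Older builds still expose them as paddle._C_ops
    from paddle import _C_ops

# fused_gate_attention is only present in wheels compiled with it,
# so probe for it before running inference.
print("using:", _C_ops.__name__)
print("fused_gate_attention available:", hasattr(_C_ops, "fused_gate_attention"))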

@yangjinhaoo
Author

Thank you! I will try it and give you feedback.

@yangjinhaoo
Author

Thank you very much, it works!
Best wishes to you.

@SuperXiang
Contributor

Sounds good. I will close this issue for now. Please feel free to reopen if you have other questions later.

@yangjinhaoo
Author

> Sounds good. I will close this issue for now. Please feel free to reopen if you have other questions later.

Sorry guys, I'm back again.
I deleted my Docker container for some reason. Then I built a new image, installed the dependencies, chose the right base image, and reinstalled Paddle like I did before:
paddlepaddle_gpu-0.0.0.post112-cp37-cp37m-linux_x86_64.whl

In the end, I get this error:

  File "/tmp/helix/alphafold_paddle/model/modules.py", line 478, in forward
    self.output_w, self.output_b, 'has_gating', self.config.gating, 'merge_qkv', self.merge_qkv)
RuntimeError: (NotFound) Operator fused_gate_attention does not have kernel for {data_type[float]; data_layout[Undefined(AnyLayout)]; place[Place(cpu)]; library_type[PLAIN]}.
  [Hint: Expected kernel_iter != kernels.end(), but received kernel_iter == kernels.end().] (at /paddle/paddle/fluid/imperative/prepared_operator.cc:435)
  [operator < fused_gate_attention > error]

What's the problem this time?

@yangjinhaoo
Author

> Sounds good. I will close this issue for now. Please feel free to reopen if you have other questions later.

OK, I figured it out: running it with nvidia-docker works fine. Just like the exception shows, there is no CPU kernel for the op.
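
For anyone who hits the same NotFound kernel error: fused_gate_attention apparently has no CPU kernel, so Paddle must actually see a GPU inside the container (e.g. run with nvidia-docker or docker's --gpus all flag). A minimal sketch using standard Paddle APIs (not repo code) to verify this before running inference:

import paddle

# Both a CUDA-enabled wheel and a visible GPU are needed; otherwise Paddle
# falls back to Place(cpu) and reports the "does not have kernel" error above.
print("compiled with CUDA:", paddle.is_compiled_with_cuda())
print("visible GPUs:", paddle.device.cuda.device_count())
print("current device:", paddle.device.get_device())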
