
Transformers version? TypeError: _build_causal_attention_mask() missing 1 required positional argument: 'dtype' #3

Open · ImneCurline opened this issue Apr 8, 2023 · 0 comments


ImneCurline commented Apr 8, 2023

python == 3.9
torch == 2.0.0

When I run the sample code, I get the error in the title.
I found the corresponding position (a method in `transformers`) and added `dtype="double"`.

Then I get the following error:

    File "/root/anaconda3/envs/point_e_env/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 758, in _build_causal_attention_mask
      mask = torch.empty(bsz, seq_len, seq_len, dtype=dtype)
    TypeError: empty() received an invalid combination of arguments - got (int, int, int, dtype=str), but expected one of:
     * (tuple of ints size, *, tuple of names names, torch.memory_format memory_format, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
     * (tuple of ints size, *, torch.memory_format memory_format, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
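
For what it's worth, this second failure reproduces outside `transformers` entirely: `torch.empty` takes a `torch.dtype` object, not a string. A minimal demonstration:

```python
import torch

# Passing a string reproduces the TypeError quoted above:
#   torch.empty(2, 4, 4, dtype="double")      # TypeError: ... dtype=str ...
# torch.empty needs an actual torch.dtype ("double" means torch.float64):
mask = torch.empty(2, 4, 4, dtype=torch.float64)
print(mask.dtype)  # torch.float64
```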

So is it caused by the `transformers` version? And how can I fix it?
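For reference, in the `transformers` releases that added the `dtype` parameter, the caller passes a real `torch.dtype` (taken from `hidden_states.dtype`) rather than a string. Here is a runnable sketch of what the helper does, assuming the 4.2x-era implementation (not verbatim library code):

```python
import torch

def build_causal_attention_mask(bsz, seq_len, dtype):
    # Sketch of transformers' CLIP helper: a (bsz, seq_len, seq_len) mask
    # whose strictly upper triangle is a very large negative value.
    mask = torch.empty(bsz, seq_len, seq_len, dtype=dtype)
    mask.fill_(torch.finfo(dtype).min)  # most negative value for this dtype
    mask.triu_(1)                       # keep only the strictly upper triangle
    return mask.unsqueeze(1)            # add a broadcast dimension for heads

# The fix at the call site is to pass a torch.dtype, not the string "double":
mask = build_causal_attention_mask(1, 4, torch.float32)
print(mask.shape)  # torch.Size([1, 1, 4, 4])
```

That said, patching in a dtype by hand only masks the underlying problem: the installed `transformers` has a different `_build_causal_attention_mask` signature than the one the sample code was written against, so pinning `transformers` to the version the project's requirements specify is probably the cleaner fix.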
