
Error when running on an M1 chip 🥲 #6

Closed · PCKxin opened this issue Apr 22, 2024 · 4 comments

Comments
PCKxin commented Apr 22, 2024

Python version tested 1: 3.9.6
Python version tested 2: 3.12.0
Both fail with an error right after the user enters input.

Running from the terminal:
Traceback (most recent call last):
  File "/Users/pckxin/Desktop/LLama3CH/main.py", line 175, in <module>
    main()
  File "/Users/pckxin/Desktop/LLama3CH/main.py", line 153, in main
    outputs = model.generate(
              ^^^^^^^^^^^^^^^
  File "/Users/pckxin/Desktop/LLama3CH/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/pckxin/Desktop/LLama3CH/.venv/lib/python3.12/site-packages/transformers/generation/utils.py", line 1622, in generate
    result = self._sample(
             ^^^^^^^^^^^^^
  File "/Users/pckxin/Desktop/LLama3CH/.venv/lib/python3.12/site-packages/transformers/generation/utils.py", line 2847, in _sample
    unfinished_sequences = unfinished_sequences & ~stopping_criteria(input_ids, scores)
                                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/pckxin/Desktop/LLama3CH/.venv/lib/python3.12/site-packages/transformers/generation/stopping_criteria.py", line 158, in __call__
    is_done = is_done | criteria(input_ids, scores, **kwargs)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/pckxin/Desktop/LLama3CH/.venv/lib/python3.12/site-packages/transformers/generation/stopping_criteria.py", line 149, in __call__
    is_done = torch.isin(input_ids[:, -1], self.eos_token_id.to(input_ids.device))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: The operator 'aten::isin.Tensor_Tensor_out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
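The traceback itself names the temporary workaround: set PYTORCH_ENABLE_MPS_FALLBACK=1 so that ops missing from the MPS backend fall back to the CPU. A minimal sketch of applying it from inside the script (the variable must be set before torch is first imported, otherwise it has no effect):

import os

# Must be set before the first `import torch`, or the flag is ignored.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # noqa: E402

Equivalently, from the shell: PYTORCH_ENABLE_MPS_FALLBACK=1 python main.py. As the warning says, the fallback path runs on the CPU and will be slower than native MPS.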

Running the web UI:
File "/Users/pckxin/Desktop/LLama3CH/.venv/lib/python3.12/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 584, in _run_script exec(code, module.__dict__) File "/Users/pckxin/Desktop/LLama3CH/deploy/web_streamlit_for_v1.py", line 307, in <module> main(model_name_or_path, adapter_name_or_path) File "/Users/pckxin/Desktop/LLama3CH/deploy/web_streamlit_for_v1.py", line 281, in main for cur_response in generate_interactive( File "/Users/pckxin/Desktop/LLama3CH/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 35, in generator_context response = gen.send(None) ^^^^^^^^^^^^^^ File "/Users/pckxin/Desktop/LLama3CH/deploy/web_streamlit_for_v1.py", line 51, in generate_interactive inputs[k] = v.cuda() ^^^^^^^^ File "/Users/pckxin/Desktop/LLama3CH/.venv/lib/python3.12/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init raise AssertionError("Torch not compiled with CUDA enabled")

PCKxin closed this as completed Apr 22, 2024
CrazyBoyM (Owner) commented:
Did you solve it? I haven't tried running it locally on a Mac yet. If you get it working, a PR sharing a write-up of your experience would be welcome ~

PCKxin (Author) commented Apr 22, 2024

Not solved. After rewriting the code with GPT it no longer throws errors, but it ran for 15 minutes without producing a single character, so I shut it down. I can post the modified code if anyone needs it.
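For anyone debugging a silent run like this, a hedged sketch of two checks (names such as `model`, `tokenizer`, and `inputs` are stand-ins for whatever main.py actually defines): confirm the weights really landed on MPS, and attach a TextStreamer so generation progress is visible instead of silent.

import torch
from transformers import TextStreamer

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
model = model.to(device)
print(next(model.parameters()).device)  # should print "mps:0" on an M1

# Stream tokens as they are produced, so a slow run is at least visible.
streamer = TextStreamer(tokenizer, skip_prompt=True)
outputs = model.generate(**inputs, max_new_tokens=64, streamer=streamer)

With PYTORCH_ENABLE_MPS_FALLBACK=1 set, the isin stopping-criteria op runs on the CPU, so some slowdown is expected; a completely silent 15-minute run may instead mean the whole model ended up on the CPU.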

CrazyBoyM (Owner) commented Apr 22, 2024 via email

PCKxin (Author) commented Apr 22, 2024

Copied, I'll go try it.
