(.venv) lizhong@lizhongdeMac-mini ChatPDF % PYTORCH_ENABLE_MPS_FALLBACK=1 CUDA_VISIBLE_DEVICES=0 python chatpdf.py --gen_model_type auto --gen_model_name 01-ai/Yi-6B-Chat --corpus_files sample.pdf
Namespace(sim_model_name='shibing624/text2vec-base-multilingual', gen_model_type='auto', gen_model_name='01-ai/Yi-6B-Chat', lora_model=None, rerank_model_name='', corpus_files='sample.pdf', device=None, int4=False, int8=False, chunk_size=220, chunk_overlap=0, num_expand_context_chunk=1)
2024-03-14 11:11:22.449 | DEBUG | text2vec.sentence_model:__init__:80 - Use device: cpu
Loading checkpoint shards: 0%| | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/Volumes/LZ_Storage/workspace/ai/ChatPDF/chatpdf.py", line 528, in <module>
m = ChatPDF(
File "/Volumes/LZ_Storage/workspace/ai/ChatPDF/chatpdf.py", line 179, in __init__
self.gen_model, self.tokenizer = self._init_gen_model(
File "/Volumes/LZ_Storage/workspace/ai/ChatPDF/chatpdf.py", line 221, in _init_gen_model
model = model_class.from_pretrained(
File "/Volumes/LZ_Storage/workspace/ai/ChatPDF/.venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
return model_class.from_pretrained(
File "/Volumes/LZ_Storage/workspace/ai/ChatPDF/.venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3502, in from_pretrained
) = cls._load_pretrained_model(
File "/Volumes/LZ_Storage/workspace/ai/ChatPDF/.venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3926, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
File "/Volumes/LZ_Storage/workspace/ai/ChatPDF/.venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 805, in _load_state_dict_into_meta_model
set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
File "/Volumes/LZ_Storage/workspace/ai/ChatPDF/.venv/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 387, in set_module_tensor_to_device
new_value = value.to(device)
TypeError: BFloat16 is not supported on MPS
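The failure happens when Accelerate tries to move the checkpoint's bfloat16 weights onto the MPS backend, which rejects that dtype on this PyTorch build. A common workaround is to override the checkpoint's native dtype and load in float16 instead. The sketch below is only an illustration of that idea, not the project's documented fix; the helper name `load_on_mps` is hypothetical, and whether float16 preserves enough precision for this model is an assumption you should verify.

```python
import torch
from transformers import AutoModelForCausalLM


def load_on_mps(model_name: str = "01-ai/Yi-6B-Chat"):
    """Load a causal LM in float16 so its weights can be placed on MPS.

    MPS does not support bfloat16 here, so we pass torch_dtype=torch.float16
    explicitly instead of letting from_pretrained keep the checkpoint's
    native bfloat16 dtype (a hypothetical helper, sketched as a workaround).
    """
    return AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,   # cast weights: bfloat16 -> float16
        device_map={"": "mps"},      # place the whole model on the MPS device
    )
```

Alternatively, setting `--device cpu` (or whatever device flag chatpdf.py exposes) sidesteps MPS entirely at the cost of speed.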
Device: Mac mini (M2)
Python version: 3.10