
Error when loading the model: ValueError: .float() is not supported for quantized model. #296

Closed
3 tasks done
luoluodongdong opened this issue Sep 20, 2023 · 3 comments

@luoluodongdong

Checklist required before submitting

  • Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
  • I have read the FAQ section of the project documentation and searched the existing issues for this problem; I found no similar issue or solution.
  • Third-party plugin issues: e.g. llama.cpp, LangChain, text-generation-webui, etc. It is recommended to also look for a solution in the corresponding project.

Issue type

Model quantization and deployment

Base model

Chinese-Alpaca-2 (7B/13B)

Operating system

macOS

Detailed description of the problem

# Please paste the code you ran here (inside this code block)

Dependencies (required for code-related issues)

# Please paste your dependency information here (inside this code block)

Runtime logs or screenshots

(py3.9_env) weidong@weidongdeMacBook-Pro-2 ~ % python /Volumes/Data/LLM/Chinese-LLaMA-Alpaca-2-main/scripts/openai_server_demo/openai_api_server.py --base_model /Volumes/Data/LLM/chinese-alpaca-2-7b-hf --only_cpu                     

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

 and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /Users/weidong/anaconda3/envs/py3.9_env/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/Users/weidong/anaconda3/envs/py3.9_env/lib/python3.9/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
'NoneType' object has no attribute 'cadam32bit_grad_fp32'
CUDA SETUP: Loading binary /Users/weidong/anaconda3/envs/py3.9_env/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
dlopen(/Users/weidong/anaconda3/envs/py3.9_env/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cpu.so, 0x0006): tried: '/Users/weidong/anaconda3/envs/py3.9_env/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cpu.so' (not a mach-o file), '/System/Volumes/Preboot/Cryptexes/OS/Users/weidong/anaconda3/envs/py3.9_env/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cpu.so' (no such file), '/Users/weidong/anaconda3/envs/py3.9_env/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cpu.so' (not a mach-o file)
Xformers is not installed correctly. If you want to use memory_efficient_attention use the following command to install Xformers
pip install xformers.
USE_MEM_EFF_ATTENTION:  False
STORE_KV_BEFORE_ROPE: False
Apply NTK scaling with ALPHA=1.0
The value of scaling factor will be read from model config file, or set to 1.
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████| 2/2 [00:06<00:00,  3.34s/it]
/Users/weidong/anaconda3/envs/py3.9_env/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:362: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
  warnings.warn(
/Users/weidong/anaconda3/envs/py3.9_env/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:367: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
  warnings.warn(
Vocab of the base model: 55296
Vocab of the tokenizer: 55296
Traceback (most recent call last):
  File "/Volumes/Data/LLM/Chinese-LLaMA-Alpaca-2-main/scripts/openai_server_demo/openai_api_server.py", line 105, in <module>
    model.float()
  File "/Users/weidong/anaconda3/envs/py3.9_env/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2068, in float
    raise ValueError(
ValueError: `.float()` is not supported for quantized model. Please use the model as it is, since the model has already been casted to the correct `dtype`.
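A possible workaround (my own sketch, not part of the original thread) is to guard the `model.float()` call at line 105 of `openai_api_server.py` so that it is skipped for quantized models. The `is_loaded_in_8bit`/`is_loaded_in_4bit` attribute names are what transformers sets on bitsandbytes-quantized models around version 4.31; check them against your installed version before relying on this:

```python
def safe_float(model):
    """Cast the model to float32 only when it is not quantized.

    transformers raises ValueError from .float() on quantized models
    (see the traceback above), so those are left in their current dtype.
    """
    if getattr(model, "is_loaded_in_8bit", False) or getattr(
        model, "is_loaded_in_4bit", False
    ):
        return model  # loader already cast it to the correct dtype
    return model.float()
```

In the server script this would replace the bare `model.float()` with `model = safe_float(model)`.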
@airaria
Contributor

airaria commented Sep 20, 2023

It works fine in my tests; could this be an environment issue? My test environment:
transformers 4.31.0
bitsandbytes 0.41.0
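To rule out an environment mismatch, the versions the maintainer reports as working could be pinned explicitly (my suggestion, assuming a fresh virtualenv; adjust to your setup):

```shell
# Pin the maintainer's tested versions of the two libraries involved
pip install "transformers==4.31.0" "bitsandbytes==0.41.0"
```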

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

@github-actions github-actions bot added the stale label Sep 30, 2023
@github-actions

github-actions bot commented Oct 5, 2023

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.

@github-actions github-actions bot closed this as not planned Oct 5, 2023
@ymcui ymcui closed this as completed Oct 7, 2023