
[Bug]: Running in WSL, loading LLaMA + No LoRA, chatting fails with "Tokenizer class LLaMATokenizer does not exist or is not currently imported." #655

Closed
2 tasks done
yinguohang opened this issue Apr 14, 2023 · 2 comments
Labels
bug Something isn't working

Comments

yinguohang commented Apr 14, 2023

Is this bug already covered by an existing issue?

  • I confirm there is no existing issue, and I have read the FAQ

Observed error

Error message:
Tokenizer class LLaMATokenizer does not exist or is not currently imported.

Steps to reproduce

  1. Complete the deployment in WSL
  2. In the web UI, switch to llama-7b-hf + No LoRA
  3. Type 你好 in the chat box

Error log

2023-04-14 10:55:12,728 [INFO] [models.py:524] Please select a LoRA model for llama-7b-hf
2023-04-14 10:55:12,731 [INFO] [models.py:543] Please select a LoRA model for llama-7b-hf
2023-04-14 10:55:14,844 [INFO] [models.py:531] Loading LLaMA model: llama-7b-hf + No LoRA
2023-04-14 10:55:16,772 [WARNING] [hf_decoder_model.py:192] llama does not support RAM optimized load. Automatically use original load instead.
Traceback (most recent call last):
  File "/home/ygh/.local/lib/python3.10/site-packages/gradio/routes.py", line 401, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/ygh/.local/lib/python3.10/site-packages/gradio/blocks.py", line 1302, in process_api
    result = await self.call_function(
  File "/home/ygh/.local/lib/python3.10/site-packages/gradio/blocks.py", line 1025, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/ygh/.local/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/home/ygh/.local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/home/ygh/.local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/mnt/d/ChuanhuChatGPT/modules/utils.py", line 43, in billing_info
    return current_model.billing_info()
AttributeError: 'NoneType' object has no attribute 'billing_info'
Traceback (most recent call last):
  File "/home/ygh/.local/lib/python3.10/site-packages/gradio/routes.py", line 401, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/ygh/.local/lib/python3.10/site-packages/gradio/blocks.py", line 1302, in process_api
    result = await self.call_function(
  File "/home/ygh/.local/lib/python3.10/site-packages/gradio/blocks.py", line 1039, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/ygh/.local/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/home/ygh/.local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/home/ygh/.local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/home/ygh/.local/lib/python3.10/site-packages/gradio/utils.py", line 491, in async_iteration
    return next(iterator)
  File "/mnt/d/ChuanhuChatGPT/modules/utils.py", line 38, in predict
    iter = current_model.predict(*args)
AttributeError: 'NoneType' object has no attribute 'predict'
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████| 33/33 [00:09<00:00,  3.51it/s]
2023-04-14 10:55:54,471 [ERROR] [models.py:545] Tokenizer class LLaMATokenizer does not exist or is not currently imported.

Environment

  • OS: Windows WSL Ubuntu-22.04
  • Browser: Chrome
  • Gradio version: 3.26.0
  • Python version: 3.10.6

Willing to help

  • I am willing to help resolve this!

Additional notes

This may be related to this issue. I tried pip install git+https://github.com/huggingface/transformers, but the problem persists.

@yinguohang yinguohang added the bug Something isn't working label Apr 14, 2023
@yinguohang yinguohang changed the title [Bug]: Running in WSL, loading LLaMA + No LoRA, chatting [Bug]: Running in WSL, loading LLaMA + No LoRA, chatting fails with "Tokenizer class LLaMATokenizer does not exist or is not currently imported." Apr 14, 2023
Owner

GaiZhenbiao commented Apr 14, 2023

Please see this transformers issue: huggingface/transformers#22222 (comment)
The config file in the repository hosting the llama model has a problem…

Author

yinguohang commented Apr 14, 2023

Solved!
I manually changed LLaMATokenizer to LlamaTokenizer in ~/.cache/huggingface/hub/models--decapoda-research--llama-7b-hf/snapshots/5f98eefcc80e437ef68d457ad7bf167c2c6/tokenizer_config.json
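The manual edit above can also be scripted. A minimal sketch, assuming the config stores the class name under the standard `tokenizer_class` key (the path and function name here are illustrative, not from the project):

```python
import json
from pathlib import Path


def fix_tokenizer_class(config_path: str) -> bool:
    """Rename the legacy LLaMATokenizer class to LlamaTokenizer in place.

    Returns True if the file was modified, False if it was already correct.
    """
    path = Path(config_path)
    config = json.loads(path.read_text())
    # Old decapoda-research configs say "LLaMATokenizer";
    # current transformers releases expect "LlamaTokenizer".
    if config.get("tokenizer_class") == "LLaMATokenizer":
        config["tokenizer_class"] = "LlamaTokenizer"
        path.write_text(json.dumps(config, indent=2))
        return True
    return False


if __name__ == "__main__":
    # Hypothetical snapshot path -- substitute your actual cache directory.
    fix_tokenizer_class("tokenizer_config.json")
```

Point it at the `tokenizer_config.json` inside your model's snapshot directory under `~/.cache/huggingface/hub/`, then reload the model.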
