
Can the baichuan config in fastchat still be used? #62

Open
2533245542 opened this issue Sep 8, 2023 · 3 comments
2533245542 commented Sep 8, 2023

Right now it is this (https://github.com/lm-sys/FastChat/blob/56744d1d947ad7cc94763e911529756b17139505/fastchat/conversation.py#L782):

```python
register_conv_template(
    Conversation(
        name="baichuan-chat",
        roles=("<reserved_102>", "<reserved_103>"),
        sep_style=SeparatorStyle.NO_COLON_SINGLE,
        sep="",
        stop_token_ids=[],
    )
)
```

But from what I see in baichuan2, shouldn't the roles be changed to the following?

```python
        roles=("<reserved_106>", "<reserved_107>")
```

```python
>>> model.generation_config.user_token_id
195
>>> model.generation_config.assistant_token_id
196
>>> tokenizer.decode([195])
'<reserved_106>'
>>> tokenizer.decode([196])
'<reserved_107>'
```
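For reference, here is a minimal sketch (not FastChat's actual code; `build_prompt` is a hypothetical helper) of how a `NO_COLON_SINGLE`-style prompt would be assembled with the updated Baichuan2 role tokens: each turn is the role token immediately followed by the text, with no colon and an empty separator, and the prompt ends with an open assistant token for the model to continue from.

```python
# Role tokens observed from model.generation_config above
# (user_token_id 195, assistant_token_id 196).
USER_TOKEN = "<reserved_106>"
ASSISTANT_TOKEN = "<reserved_107>"


def build_prompt(messages):
    """messages: list of (role_token, text) pairs.

    Concatenates role token + text with no colon and an empty
    separator (the NO_COLON_SINGLE style), then leaves the final
    assistant turn open so the model generates its reply there.
    """
    prompt = ""
    for role, text in messages:
        prompt += role + text
    return prompt + ASSISTANT_TOKEN


print(build_prompt([(USER_TOKEN, "hello")]))
# -> <reserved_106>hello<reserved_107>
```

If the template still used `<reserved_102>`/`<reserved_103>`, the prompt would not match the token ids the model's generation config expects.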

blankxyz commented Oct 9, 2023

It doesn't work.

@yuege613

```
root@58c8455c9d58:/home/model_hub# CUDA_VISIBLE_DEVICES=1,2 python3.9 -m fastchat.serve.cli --model-path Baichuan2-13B-Chat-V1 --num-gpus 2
Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers
pip install xformers.
You are using an old version of the checkpointing format that is deprecated (We will also silently ignore gradient_checkpointing_kwargs in case you passed it). Please update to the new format on your modeling file. To use the new format, you need to completely remove the definition of the method _set_gradient_checkpointing in your model.
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]/usr/local/lib/python3.9/dist-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.get(instance, owner)()
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████| 3/3 [00:12<00:00,  4.14s/it]
<reserved_106>: hallo
<reserved_107>: Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/cli.py", line 304, in <module>
    main(args)
  File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/cli.py", line 227, in main
    chat_loop(
  File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/inference.py", line 532, in chat_loop
    outputs = chatio.stream_output(output_stream)
  File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/cli.py", line 63, in stream_output
    for outputs in output_stream:
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/_contextlib.py", line 56, in generator_context
    response = gen.send(request)
  File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/inference.py", line 190, in generate_stream
    indices = torch.multinomial(probs, num_samples=2)
RuntimeError: probability tensor contains either inf, nan or element < 0
```

Why is this error being raised?


This is the baichuan2-v1.0 version.
