
InternLM inference and training are problematic #78

Closed
June01 opened this issue Sep 26, 2023 · 2 comments

Comments


June01 commented Sep 26, 2023

The following is my command to run the demo, with the pretrained model, llama_config, and tokenizer. However, the network's output is garbled even for a simple question like "hello". Could you please look into it?

torchrun --nproc-per-node=1  demos/single_turn.py \
--llama_config /path/to/params.json --tokenizer_path /path/to/tokenizer.model \
--pretrained_path /path/to/alpaca_finetuned

(screenshot: garbled model output)

ChrisLiu6 (Collaborator) commented:

You are currently loading an InternLM checkpoint into a LLaMA model. Please try adding --llama_type internlm.
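Based on the suggestion above, the corrected invocation would look like the following sketch. It is the reporter's original command with the --llama_type flag added so the InternLM checkpoint is loaded into the matching model class; the /path/to/... placeholders are from the original report, not real paths.

```shell
# Same demo command as above, with --llama_type set to match the checkpoint.
# Replace the /path/to/... placeholders with your actual file locations.
torchrun --nproc-per-node=1 demos/single_turn.py \
    --llama_type internlm \
    --llama_config /path/to/params.json \
    --tokenizer_path /path/to/tokenizer.model \
    --pretrained_path /path/to/alpaca_finetuned
```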

June01 (Author) commented Sep 27, 2023 via email

June01 closed this as completed Sep 27, 2023