
Bug: Trying to use cuda when running on CPU #188

Closed
PromtEngineer opened this issue Jun 27, 2023 · 0 comments

Comments

@PromtEngineer
Owner

The default model was set to TheBloke/WizardLM-7B-uncensored-GPTQ, which causes issues when running on CPU.

  • Change the default to TheBloke/vicuna-7B-1.1-HF
  • When --device_type is cpu or mps, model_basename will be set to None and the model will be loaded with LlamaForCausalLM (a sketch of this check follows below). This is a temporary fix; a permanent fix for M1/M2 is still needed.
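For illustration, here is a minimal sketch of what that fallback could look like, assuming the fix lives in a model-loading helper. The function name load_model and its exact signature are assumptions for this example, not the actual localGPT code:

```python
# Hedged sketch of the temporary fix described above. The function name and
# signature are illustrative assumptions, not the repository's actual API.
from typing import Optional
from transformers import AutoTokenizer, LlamaForCausalLM

DEFAULT_MODEL_ID = "TheBloke/vicuna-7B-1.1-HF"  # new CPU-friendly default

def load_model(device_type: str,
               model_id: str = DEFAULT_MODEL_ID,
               model_basename: Optional[str] = None):
    # GPTQ checkpoints (indicated by a non-None model_basename) need CUDA,
    # so on cpu/mps drop the basename and fall back to a plain HF checkpoint.
    if device_type in ("cpu", "mps"):
        model_basename = None

    if model_basename is not None:
        # CUDA path: quantized GPTQ loading would go here (omitted in this sketch).
        raise NotImplementedError("GPTQ loading requires a CUDA device")

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = LlamaForCausalLM.from_pretrained(model_id)
    return model.to(device_type), tokenizer
```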
PromtEngineer added a commit that referenced this issue Jun 27, 2023