Trying to load the gpt4chan model #1620
Comments
Same error on macOS (M1 Pro).
Solution: download GPT-2's tokenizer.json and place it in your GPT4Chan directory, or simply open a cmd in your webui directory and run:
(Make sure you change gpt4chan_model_float16 to the name the folder has in your models directory.)
P.S. If you encounter issues with missing tokens or similar, this one may work better, as it's the tokenizer for GPT-J-6B.
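Once the tokenizer file is in place, a quick way to sanity-check the folder is to list which required files are still missing. This is an illustrative sketch, not part of the webui: the `REQUIRED` file list is an assumption based on the loading steps discussed in this thread, and `missing_files` is a hypothetical helper name.

```python
from pathlib import Path

# Files the loader expects alongside the weights for a GPT-J-style model.
# (This list is an assumption based on the steps in this thread.)
REQUIRED = ["config.json", "tokenizer.json"]

def missing_files(model_dir):
    """Return the required files not yet present in model_dir."""
    d = Path(model_dir)
    return [name for name in REQUIRED if not (d / name).is_file()]
```

If this returns a non-empty list for your model folder, those are the files you still need to download.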
Thanks for the reply, but still having issues (slightly different now). I had put the
I got this to work by doing this:
We can close this.
Describe the bug
Followed the instructions in the README.md to set up the gpt4chan files. When I load the model, here's the error message I see:
Is there an existing issue for this?
Reproduction
Followed the GPT-4chan-related instructions in the README. Downloaded the 16-bit files directly.
After downloading the model, follow these steps:
1. Place the files under models/gpt4chan_model_float16 or models/gpt4chan_model.
2. Place GPT-J 6B's config.json file in that same folder.
3. Download GPT-J 6B's tokenizer files into that same folder (they will be automatically detected when you attempt to load GPT-4chan).
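The README accepts either of two folder names under models/. As a small sketch of that step, the helper below (a hypothetical function, not part of the webui) checks which of the two accepted folders exists:

```python
from pathlib import Path

def find_gpt4chan_dir(models_root="models"):
    """Return the first of the two folder names the README accepts,
    or None if neither exists. The folder names come from the README;
    the helper itself is only an illustrative sketch."""
    for name in ("gpt4chan_model_float16", "gpt4chan_model"):
        candidate = Path(models_root) / name
        if candidate.is_dir():
            return candidate
    return None
```

The float16 folder name corresponds to the direct-downloaded 16-bit files mentioned above; the other name is for the full-precision weights.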
Screenshot
No response
Logs
System Info