Initializing pipeline error #63
I got the same error here: "Loading a checkpoint for MP=0 but world size is 1." The checkpoints variable is also blank when I checked, like []. Dunno what's happening. By the way, is MP the number of GPUs in a single node?
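For context, the error comes from a sanity check in llama's loading code: the model-parallel size (MP) is inferred from the number of `*.pth` shard files found in the checkpoint directory, so "MP=0" means no checkpoint files were found at the path you passed (which is also why the checkpoints list is empty). A minimal sketch of that check, with names simplified from the actual repo:

```python
from pathlib import Path

def check_checkpoints(ckpt_dir: str, world_size: int):
    # MP (model parallelism) is the number of checkpoint shards, not
    # the number of GPUs per node as such: the 7B model ships as a
    # single shard, so it needs world_size == 1.
    checkpoints = sorted(Path(ckpt_dir).glob("*.pth"))
    assert world_size == len(checkpoints), (
        f"Loading a checkpoint for MP={len(checkpoints)} "
        f"but world size is {world_size}"
    )
    return checkpoints
```

An empty or wrong `ckpt_dir` yields `checkpoints == []` and therefore MP=0, reproducing the error in this thread.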
I found the error. To fix it, you have to point to the model and the tokenizer, e.g.
Is there a file named tokenizer.model? I just got a params.json.
In the folder where you downloaded the model, you have the model folder, e.g. 7B, and also tokenizer.model.
Well... this is odd. I got checklist.chk, consolidated.pth, and params.json there; no tokenizer.model ;/
Found it! But the problem still persists.
Ok, problem solved! It was a path problem lol
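For reference, the layout produced by the original LLaMA download script looks roughly like the tree below (exact file names may differ by release): the shard file is `consolidated.00.pth` (with a trailing "d"), and `tokenizer.model` sits one level up, next to the model folder, not inside it.

```
download_dir/
├── tokenizer.model
├── tokenizer_checklist.chk
└── 7B/
    ├── checklist.chk
    ├── consolidated.00.pth
    └── params.json
```

So `--ckpt_dir` should point at the `7B` folder while `--tokenizer_path` points at the `tokenizer.model` file beside it.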
Does anyone know what the problem is with this?
Here is how I got things working:
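Since most of the failures in this thread turned out to be path problems, a small pre-flight check before launching torchrun can save time. This is a sketch I'm adding for illustration, not part of the llama repo, and the example paths are placeholders:

```python
import sys
from pathlib import Path

def preflight(ckpt_dir: str, tokenizer_path: str) -> list:
    """Return a list of path problems (an empty list means OK to launch)."""
    problems = []
    ckpt = Path(ckpt_dir)
    tok = Path(tokenizer_path)
    if not ckpt.is_dir():
        problems.append(f"checkpoint dir not found: {ckpt}")
    elif not list(ckpt.glob("*.pth")):
        # This is exactly the case that produces "MP=0" at load time.
        problems.append(f"no *.pth shards in {ckpt}")
    if not tok.is_file():
        problems.append(f"tokenizer.model not found: {tok}")
    elif tok.name != "tokenizer.model":
        problems.append(f"expected a file named tokenizer.model, got {tok.name}")
    return problems

if __name__ == "__main__":
    # Substitute your own download location for these example paths.
    for p in preflight("llama/7B", "llama/tokenizer.model"):
        print("problem:", p, file=sys.stderr)
```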
OK, I also cannot get it to run with "torchrun"; I get "failed to create process". Edit:
@felipehime, what was the path issue? I'm getting the same error even when pointing the command explicitly to the directories.
I downloaded the model but there is no 7B folder. Why?
Specifically, the path of tokenizer.model.
I also have the same problem. Any solutions? Thank you!
Closing, as the original author solved the issue. Feel free to open new issues with specific details on what you are facing for additional guidance. For future reference, check both the llama and llama-recipes repos for getting-started guides.
Do you have any solution yet?
Once I completed the installation and tried a test with test.py using the 8B model, I got the following error: