RecursionError: maximum recursion depth exceeded #442
Comments
Thanks for raising this! I have a suspicion this might be fixable by setting the environment variable TOKENIZERS_PARALLELISM=false. Do you have a model + task combination / command that can replicate this consistently?
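For anyone trying the TOKENIZERS_PARALLELISM suggestion, it is safest to set the variable before transformers/tokenizers is imported. A minimal sketch (the model id is just an illustration, not taken from this thread):

```python
import os

# Set this before importing transformers/tokenizers so the Rust tokenizers
# backend never enables its own parallelism alongside Python multiprocessing.
os.environ["TOKENIZERS_PARALLELISM"] = "false"

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model id
ids = tokenizer("hello world")["input_ids"]
print(ids)
```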
Yup, I was using the gpt4all-lora adapter on llama7b, running arc_easy, arc_challenge (acc), piqa (acc), sciq, mnli and truthful_qa_mc.
If relevant, it ran perfectly fine, but when it came time for the results to show up, it crashed with the error message in the issue.
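For reference, loading a LoRA adapter on top of a base causal LM generally looks like the sketch below; the checkpoint identifiers are placeholders standing in for the llama7b base model and gpt4all-lora adapter mentioned above, not paths taken from the report:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder identifiers; substitute the actual base model and adapter paths.
BASE_MODEL = "huggyllama/llama-7b"
ADAPTER = "nomic-ai/gpt4all-lora"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, ADAPTER)  # attaches the LoRA weights
model.eval()
```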
Thanks, I’ll give this a try in a minute! This does sound like an interaction between the bootstrap stderr multiprocessing and tokenizers in this case.
Since it seems you've been able to get this running, what's the recommended fix for this LLaMA upload? @philwee
You can change the transformers package you're on via one of these two options: pip install git+https://github.com/mbehm/transformers (the old one where it worked) … Please let me know if this helps! :)
Closing this issue as it seems to be a bug in the HF library that has now been fixed. Anyone encountering this issue should make sure they’ve updated to the latest version of transformers.
Do you have a link for the bug in transformers that raises + fixes this? The bug being raised in this issue is not the …
Oh you’re right, I misread the end of the convo. The issue you’re having is that it’s …
Had the same issue with Llama models. The problem stems from tokenizer initialization.
@upunaprosk if correcting the tokenizer solves the problem, it seems like this issue should be opened on the HF transformers repo instead of this one. We are loading the model the way we are told to; it’s just that the transformers library doesn’t know how to load the model. @philwee @haileyschoelkopf if one of you can verify that this patch solves the problem, I’m happy to mark this as closed and open a corresponding issue on the transformers repo.
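The patch referenced above is not captured in this thread. As a hedged illustration of the kind of tokenizer-initialization fix being discussed, one common workaround for broken LLaMA conversions is to pass the special tokens explicitly so unk_token resolves to a real vocabulary entry; the token strings and checkpoint id below are assumed SentencePiece defaults and a placeholder, not values copied from the thread:

```python
from transformers import LlamaTokenizer

# Assumed workaround, not the exact patch from this thread: spell out the
# special tokens in case the checkpoint's tokenizer_config.json sets them
# to empty strings or omits them entirely.
tokenizer = LlamaTokenizer.from_pretrained(
    "huggyllama/llama-7b",  # placeholder checkpoint id
    unk_token="<unk>",
    bos_token="<s>",
    eos_token="</s>",
)
print(tokenizer.unk_token_id)  # should now be a real id instead of recursing
```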
.local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1142 in unk_token_id

    1139 │         """
    1140 │         if self._unk_token is None:
    1141 │             return None
  ❱ 1142 │         return self.convert_tokens_to_ids(self.unk_token)
    1143 │
    1144 │     @property
    1145 │     def sep_token_id(self) -> Optional[int]:
RecursionError: maximum recursion depth exceeded
Weird bug that happens when using hf-causal-experimental with a model and a PEFT adapter.
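For context on how a property like unk_token_id can recurse at all: if the configured unk_token string is not actually in the vocabulary, resolving its id falls back to looking up the unknown-token id, which resolves the same string again. A heavily simplified toy sketch of that cycle (not the actual transformers source):

```python
from typing import Dict, Optional


class ToyTokenizer:
    """Toy model of the cycle seen in the traceback above."""

    def __init__(self, vocab: Dict[str, int], unk_token: str):
        self.vocab = vocab
        self.unk_token = unk_token

    @property
    def unk_token_id(self) -> Optional[int]:
        # Mirrors unk_token_id above: resolve the unk token string to an id.
        return self.convert_tokens_to_ids(self.unk_token)

    def convert_tokens_to_ids(self, token: str) -> int:
        if token in self.vocab:
            return self.vocab[token]
        # Unknown token -> fall back to the unk id. If unk_token itself is
        # missing from the vocab (e.g. a mis-converted checkpoint sets it to
        # ""), this re-enters unk_token_id and recurses until Python gives up.
        return self.unk_token_id


broken = ToyTokenizer(vocab={"hello": 0, "<unk>": 1}, unk_token="")
# broken.unk_token_id  # raises RecursionError: maximum recursion depth exceeded
```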