CUDA error, possibly related to max_length #1
Hi @alexvaca0. If you want to use RoBERTuito, you don't really need to install this package -- just install

Regarding your stack trace, could you provide an example (if possible, in a Colab notebook) of what you are running? I think there might be an issue with the max length of the tokenizer.
Actually, what I meant is that I installed pysentimiento:

Providing an example of what I'm running is quite hard, because I'm using my own benchmark, which is a large class with a lot of functionality. Let me first check whether it is the max length; if it isn't, I can put together a full example to show you. What is the intended max length of RoBERTuito? @finiteautomata
Thanks a lot for the suggestion @finiteautomata. I just checked the config files in this repo and hardcoded a max length of 128 in my code, and that solved it. Thank you very much! :)
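For readers hitting the same error: the fix above can be sketched as follows. This is a plain-Python illustration (no real library calls, and the table size of 130 is an assumption based on a 128-token budget plus special tokens) of why a sequence longer than the model's position-embedding table triggers an out-of-range lookup on the GPU, and why truncating to `max_length=128` before the forward pass avoids it:

```python
# Hypothetical position-embedding budget: 128 tokens plus special tokens.
MAX_POSITION_EMBEDDINGS = 130

def position_ids(num_tokens):
    """Position indices the model looks up for a sequence of this length."""
    return list(range(num_tokens))

def lookup(table_size, ids):
    """Embedding-table lookup; raises where CUDA fires a device-side assert."""
    for i in ids:
        if i >= table_size:
            raise IndexError(f"position {i} exceeds table of size {table_size}")
    return ids

def truncate(tokens, max_length=128):
    """The fix: cap the sequence at the model's max length before the lookup."""
    return tokens[:max_length]

long_seq = ["tok"] * 200
safe = truncate(long_seq, max_length=128)
lookup(MAX_POSITION_EMBEDDINGS, position_ids(len(safe)))   # succeeds

try:
    lookup(MAX_POSITION_EMBEDDINGS, position_ids(len(long_seq)))
except IndexError as e:
    print("without truncation:", e)
```

In practice this corresponds to passing `truncation=True, max_length=128` when tokenizing, rather than relying on a default that may exceed the model's position table.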
Great!
I think the library should be installed in isolation from transformers: if one already has a different transformers version with custom models or similar, installing this package breaks the environment unnecessarily.
But the important point here is that it's not possible to train RoBERTuito:

I have tried many other Spanish models and this doesn't happen, so it's directly related to your model, not to the model architecture (which comes from transformers).
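The observation that only this model fails is consistent with the max-length issue discussed above: most Spanish BERT-style models ship with a larger position table, so the same training code never indexes out of range. A small sketch with hypothetical numbers (the 512 and 130 values are illustrative assumptions, not read from any real config):

```python
# Hypothetical configs: a typical BERT-style model vs a tweet-length model.
configs = {
    "typical-spanish-bert": {"max_position_embeddings": 512},
    "robertuito-like": {"max_position_embeddings": 130},
}

def fits(cfg, seq_len, num_special=2):
    """True if a sequence (plus special tokens) stays inside the position table."""
    return seq_len + num_special <= cfg["max_position_embeddings"]

for name, cfg in configs.items():
    print(f"{name}: handles 256 tokens -> {fits(cfg, 256)}")
```

With a default tokenizer length above 128, the first model trains fine while the second raises a CUDA index error, which matches the behavior reported here.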