Missing [existing] default config for accelerator in trainer module #29993
Comments
Thanks, I'll look into this. It's odd it's …

@b5y your issue is the …

Well, I know that there must be an instance, not a type. But the main reason I left it as it is, is that I wanted to see the result of the last cell in the notebook (the one with …). I knew how to fix it inside …
I'm not sure what's happening in your code, as I'm unable to reproduce this issue. Please provide a full reproducer with your exact current code so we can help. One thing I notice: why are you not instantiating the `TrainingArguments`?

```python
training_args = TrainingArguments
training_args.per_device_train_batch_size = 256
training_args.train_group_size = 15
training_args.negatives_cross_device = negatives_cross_device
# select an appropriate learning rate for your model; 1e-5/2e-5/3e-5 recommended for large/base/small-scale
training_args.learning_rate = 3e-5
training_args.temperature = temperature
# instruction for query, which will be added to each query; you can also set it to "" to add nothing
training_args.query_instruction_for_retrieval = ""
# use passages in the same batch as negatives; default value is True
training_args.use_inbatch_neg = use_inbatch_neg
```

This does not seem right whatsoever, and one should not be modifying values like this after the fact.
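The objection above can be made concrete: `TrainingArguments` is a dataclass whose `__post_init__` derives and validates values at construction time, so attributes assigned afterwards bypass those checks entirely. A minimal stdlib sketch of that effect (`ArgsSketch` is a hypothetical stand-in, not the real `TrainingArguments`):

```python
from dataclasses import dataclass

@dataclass
class ArgsSketch:
    learning_rate: float = 5e-5
    per_device_train_batch_size: int = 8

    def __post_init__(self):
        # Validation like this runs only at construction time.
        if self.learning_rate <= 0:
            raise ValueError("learning_rate must be positive")

# Passing values through the constructor triggers the validation:
try:
    ArgsSketch(learning_rate=-1.0)
except ValueError as e:
    print("caught:", e)

# Mutating after the fact silently skips it:
args = ArgsSketch()
args.learning_rate = -1.0  # no error raised; the invalid value goes unnoticed
```

The same reasoning applies to derived state: anything `__post_init__` computes from the constructor arguments will not be recomputed when an attribute is reassigned later.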
Sorry, that was a typo in the instantiation. And I do agree about not modifying values after the fact. Setting up the instantiation like this:

```python
training_args = TrainingArguments(
    output_dir="output_dir",
    per_device_train_batch_size=256,
    # train_group_size=15,
    learning_rate=3e-5,
    temperature=temperature,
    # query_instruction_for_retrieval="",
    use_inbatch_neg=use_inbatch_neg,
)
```

fixed my problem. Anyway, the error with …
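For completeness, the "modified `TrainingArguments`" pattern the thread refers to is typically a dataclass subclass that adds the extra fields, so everything still flows through the constructor. A minimal stdlib sketch (class and field names are assumed from the snippets in this thread, not FlagEmbedding's actual code):

```python
from dataclasses import dataclass

@dataclass
class TrainingArgumentsSketch:  # stand-in for transformers.TrainingArguments
    output_dir: str
    per_device_train_batch_size: int = 8
    learning_rate: float = 5e-5

@dataclass
class RetrieverTrainingArgumentsSketch(TrainingArgumentsSketch):
    # Extra retrieval-specific fields, named after the snippets above.
    train_group_size: int = 8
    temperature: float = 0.02
    query_instruction_for_retrieval: str = ""
    use_inbatch_neg: bool = True

# All values, including the custom ones, go through the constructor:
args = RetrieverTrainingArgumentsSketch(
    output_dir="output_dir",
    per_device_train_batch_size=256,
    learning_rate=3e-5,
    use_inbatch_neg=False,
)
```

Because the subclass inherits the base `__init__` (and any `__post_init__`), the custom fields get defaults and validation the same way the built-in ones do.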
System Info

`transformers` version: 4.39.3

Who can help?

@muellerzr and @pacman100

Information

Tasks

`examples` folder (such as GLUE/SQuAD, ...)

Reproduction
Expected behavior
The expected behavior is to fetch the default accelerate config when none is provided.

I've been trying to reproduce the codebase from the FlagEmbedding project in a Jupyter notebook. There seems to be a problem with `accelerator_config`. The `RetrieverTrainingArguments` class is modified and looks like this:

And I am getting the following error in the Jupyter notebook:
Tested with `transformers` versions v4.39.2 and v4.39.3.

UPDATED: Even if I put the full path of `default_config.yaml`, the error stays the same and 'NoneType' changes to 'str'.
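Since the reported failure alternates between 'NoneType' and 'str' depending on how `accelerator_config` is supplied, the expected behavior amounts to normalizing that field: fall back to a default when it is `None`, and build a config object when a dict is given. A minimal sketch of that idea (the `AcceleratorConfigSketch` class and `normalize` helper are illustrative assumptions, not the transformers API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AcceleratorConfigSketch:
    # A couple of representative fields; not the real transformers AcceleratorConfig.
    split_batches: bool = False
    dispatch_batches: Optional[bool] = None

def normalize(cfg):
    """Accept None, a dict, or an instance; always return an instance."""
    if cfg is None:
        return AcceleratorConfigSketch()       # fetch defaults when nothing is provided
    if isinstance(cfg, dict):
        return AcceleratorConfigSketch(**cfg)  # build from a plain dict
    if isinstance(cfg, AcceleratorConfigSketch):
        return cfg
    # A bare string path would land here, mirroring the 'str' variant of the error.
    raise TypeError(f"unsupported accelerator_config type: {type(cfg).__name__}")
```

Under this scheme, passing a YAML path as a plain string would need an explicit loading step first; handing the raw path to the trainer would still fail, which matches the UPDATED observation above.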