Can't see logger output #4624
Hi, have you tried setting the logging level?

```python
import logging

logging.basicConfig(level=logging.INFO)
```
It worked! Thanks
Hey, this doesn't log the training progress from trainer.train() into a log file. I want to keep appending the training progress to my log file, but all I get are the prints and the parameter info at the end of trainer.train(). What would be a way to achieve this? @parmarsuraj99 @LysandreJik
+1, same request. @parmarsuraj99 @LysandreJik
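One way to get the progress into a file is to route everything that goes through Python's logging module to a FileHandler as well as the console. A minimal sketch, assuming only the standard library logging module (`train.log` is a placeholder file name); note that anything the Trainer only prints, rather than logs, will still not end up in the file, which is what the callback below addresses:

```python
import logging

# Send everything at INFO level to both the console and a file.
# FileHandler opens in append mode by default, so successive runs
# keep adding to the same log file.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s - %(message)s",
    handlers=[
        logging.StreamHandler(),           # keep the console output
        logging.FileHandler("train.log"),  # also append to a file
    ],
)
```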
Sharing a solution; not so elegant, but it works. I define a new callback:

```python
import logging

import transformers

logger = logging.getLogger(__name__)  # your custom logger


class LoggerLogCallback(transformers.TrainerCallback):
    def on_log(self, args, state, control, logs=None, **kwargs):
        control.should_log = False
        _ = logs.pop("total_flos", None)  # drop the noisy total_flos entry
        if state.is_local_process_zero:
            logger.info(logs)  # route the Trainer's logs through your custom logger
```
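A usage sketch, assuming `model` and `training_args` are already defined elsewhere; Trainer accepts a `callbacks` argument at construction, and `add_callback` can attach one afterwards:

```python
trainer = transformers.Trainer(
    model=model,                      # assumed defined elsewhere
    args=training_args,               # assumed defined elsewhere
    callbacks=[LoggerLogCallback()],  # log via your logger instead of printing
)
# Equivalently, after construction:
# trainer.add_callback(LoggerLogCallback())
```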
🐛 Bug
Information
Model I am using (Bert, XLNet ...): RoBERTa
Language I am using the model on (English, Chinese ...): Sanskrit
The problem arises when using:
The tasks I am working on are:
To reproduce
Steps to reproduce the behavior:
I can't see the logger output showing the model config and other parameters in Trainer that used to be printed by the training scripts.
Is it overridden by tqdm?
But I can still see:

```
Using deprecated --per_gpu_train_batch_size argument which will be removed in a future version. Using --per_device_train_batch_size is preferred.
```
Environment info
transformers version: 2.10.0