Can't see logger output #4624

Closed
2 of 4 tasks
parmarsuraj99 opened this issue May 27, 2020 · 5 comments

@parmarsuraj99
Contributor

parmarsuraj99 commented May 27, 2020

🐛 Bug

Information

Model I am using (Bert, XLNet ...): RoBERTa

Language I am using the model on (English, Chinese ...): Sanskrit

The problem arises when using:

  • the official example scripts: (give details below)
  • my own modified scripts: (give details below)

The tasks I am working on are:

  • an official GLUE/SQuAD task: (give the name)
  • my own task or dataset: (give details below)

To reproduce

Steps to reproduce the behavior:

I can't see the logger output from Trainer (the model config and other parameters) that is printed when running the example training scripts.

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./model_path",
    overwrite_output_dir=True,
    num_train_epochs=1,
    per_gpu_train_batch_size=128,
    per_gpu_eval_batch_size=256,
    save_steps=1_000,
    save_total_limit=2,
    logging_first_step=True,
    do_train=True,
    do_eval=True,
    evaluate_during_training=True,
    logging_steps=1000,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=valid_dataset,
    prediction_loss_only=True,
)

%%time
trainer.train(model_path="./model_path")

Is it overridden by tqdm?
But I can still see the deprecation warning: "Using deprecated --per_gpu_train_batch_size argument which will be removed in a future version. Using --per_device_train_batch_size is preferred."

Environment info

  • transformers version: 2.10.0
  • Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
  • Python version: 3.6.9
  • PyTorch version (GPU?): 1.6.0a0+916084d (False)
  • Tensorflow version (GPU?): not installed (NA)
  • Using GPU in script?: TPU
  • Using distributed or parallel set-up in script?: No
@LysandreJik
Member

Hi, have you tried setting the logging level to INFO? You can do so with the following lines:

import logging

logging.basicConfig(level=logging.INFO)
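
More recent versions of transformers (this thread is on v2.10.0) also expose a library-level verbosity helper; if your installed version has it, the following should have a similar effect for the library's own loggers:

import transformers

transformers.logging.set_verbosity_info()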

@parmarsuraj99
Contributor Author

It worked! Thanks

@jawadSajid

Hey, this doesn't log the training progress from trainer.train() into a log file. I want to keep appending the training progress to my log file, but all I get are the console prints and the parameter info at the end of trainer.train(). What would be a way to achieve this? @parmarsuraj99 @LysandreJik
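
A minimal sketch of one possible approach (not from this thread; the file name and format string are placeholders): point the root logger at a file before creating the Trainer, so that INFO-level messages are appended to that file instead of the console.

import logging

# Placeholder file name and format; filemode="a" keeps appending across runs.
logging.basicConfig(
    filename="training.log",
    filemode="a",
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    level=logging.INFO,
)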

@iamlockelightning
Contributor

iamlockelightning commented Aug 26, 2021

+1

same request. @parmarsuraj99 @LysandreJik

@iamlockelightning
Contributor

iamlockelightning commented Oct 19, 2021

Sharing a solution; it's not very elegant, but it works.

I define a new callback that logs the Trainer's log entries using an outside logger, and then pass it to the Trainer.

import transformers

class LoggerLogCallback(transformers.TrainerCallback):
    def on_log(self, args, state, control, logs=None, **kwargs):
        control.should_log = False
        _ = logs.pop("total_flos", None)  # drop total_flos from the log dict
        if state.is_local_process_zero:
            logger.info(logs)  # using your custom logger
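
For completeness, a sketch of how this callback could be wired into the Trainer (the logger setup, file name, and Trainer arguments are placeholders, not from the original comment):

import logging
from transformers import Trainer

logging.basicConfig(filename="training.log", filemode="a", level=logging.INFO)
logger = logging.getLogger(__name__)  # the "custom logger" referenced inside LoggerLogCallback

trainer = Trainer(
    model=model,                      # placeholder: your model
    args=training_args,               # placeholder: your TrainingArguments
    data_collator=data_collator,      # placeholder: your collator
    train_dataset=train_dataset,      # placeholder: your dataset
    callbacks=[LoggerLogCallback()],  # or call trainer.add_callback(LoggerLogCallback())
)
trainer.train()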
