[FEATURE] Logging frequency for TensorBoard #156

@S-aiueo32

Description

Is your feature request related to a problem? Please describe.
TensorBoard is a great tool for visualization, but the size of its event files grows rapidly when we log images.
To prevent this problem, we need to set logging frequencies.
To the best of my knowledge, in pytorch-lightning we can set them manually like:

import pytorch_lightning as pl


class CoolModule(pl.LightningModule):
    def __init__(self, args):
        super().__init__()
        self.log_freq = args.log_freq  # log images every `log_freq` steps

        self.model = ...

    def training_step(self, data_batch, batch_nb):
        input = data_batch['input']
        output = self.forward(input)

        # only write images every `log_freq` steps to keep the event file small
        if self.global_step % self.log_freq == 0:
            self.experiment.add_image('output_image', output, self.global_step)

This works, but I think it would be cleaner to control the frequency from the Trainer.

Describe the solution you'd like
Add an option to Trainer for controlling the logging frequency:

trainer = Trainer(model, tb_log_freq=foo)

To enable this functionality, training_step should return image tensors like:

def training_step(self, data_batch, batch_nb):
    input = data_batch['input']
    output = self.forward(input)
    loss = ...
    return {'loss': loss, 'prog': {'loss': loss}, 'image': {'output': output}}
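To make the proposal concrete, here is a minimal sketch of how a Trainer could consume the `image` key at the requested frequency. The `tb_log_freq` argument, the `_log_training_output` helper, and the `image` key are all assumptions from this feature request, not part of the current pytorch-lightning API:

    class Trainer:
        """Hypothetical sketch of the proposed `tb_log_freq` option."""

        def __init__(self, model, tb_log_freq=100):
            self.model = model
            self.tb_log_freq = tb_log_freq  # proposed option (assumption)
            self.global_step = 0

        def _log_training_output(self, output):
            # Images are only written every `tb_log_freq` steps to keep the
            # event file small; scalars could still be logged every step.
            if self.global_step % self.tb_log_freq == 0:
                for name, tensor in output.get('image', {}).items():
                    self.model.experiment.add_image(name, tensor, self.global_step)

With this in place, user code never has to check `global_step` itself; `training_step` just returns the tensors and the Trainer decides when to write them.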

Labels: feature (Is an improvement or enhancement), help wanted (Open to be worked on)