add_text not working unless closing writer #345

Open
JuanFMontesinos opened this Issue Jan 24, 2019 · 6 comments

JuanFMontesinos commented Jan 24, 2019

Hi
I've noticed (also after reading other issues) that add_text does not display anything until you close the writer.
tensorboardX 1.6
tensorboard 1.8
Is there any reason why?
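
For reference, a minimal sketch of what I am seeing (log_dir and values are just placeholders):

from tensorboardX import SummaryWriter

writer = SummaryWriter(log_dir='runs/text_demo')
writer.add_text('lr', 'learning rate = 1e-3', 0)  # nothing appears in the TEXT tab yet
# ... training keeps running, the writer stays open ...
writer.close()  # only after this (and a refresh) does the text show up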

dumitrescustefan commented Feb 5, 2019

I have the same problem. Also, writing several times (with incremental time steps) only produces the first step on screen (after closing the SummaryWriter and refreshing tensorboard).
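
Roughly what I am doing (simplified sketch, values are placeholders):

from tensorboardX import SummaryWriter

writer = SummaryWriter(log_dir='runs/text_steps')
for step, lr in enumerate([1e-2, 1e-3, 1e-4]):
    writer.add_text('lr', str(lr), step)  # distinct global_step per call
writer.close()
# after closing and refreshing, only the step-0 entry is visible in the TEXT tab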

BPiepmatz commented Feb 15, 2019

For me it is also not showing, even with writer.close(). This code produces the screenshot below:

writer.add_text('lr', 'learning rate', lr)
writer.add_text('wd', 'weight decay', wd)

[screenshot: screen shot 2019-02-15 at 17 11 03]

tensorboardX 1.6
pytorch 0.4
tensorboard 1.12

lanpa (Owner) commented Feb 16, 2019

Hi all, I need some code to reproduce the issue. @BPiepmatz The third argument accepts an integer (the iteration number). For your case, add_text('learning rate', str(lr), 1) makes more sense.
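
Something like this (a minimal sketch; the tag names are just examples):

from tensorboardX import SummaryWriter

writer = SummaryWriter(log_dir='runs/demo')
lr, wd = 1e-3, 2e-4
writer.add_text('learning rate', str(lr), 1)  # add_text(tag, text_string, global_step)
writer.add_text('weight decay', str(wd), 1)
writer.close()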

BPiepmatz commented Feb 16, 2019

@lanpa Oh I see, but the same issue is still appearing. This is the whole code:

learning_rates = [1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8]
wd = 0.0002
opt_names = ['RMSprop']
nAveGrad = 5 # Average the gradient every nAveGrad iterations
nEpochs = 200 * nAveGrad # Number of epochs for train
snapshot = nEpochs # Store a model every snapshot epochs
gpu_id = 0
device = torch.device("cuda:" + str(gpu_id) if torch.cuda.is_available() else "cpu")

for lr in learning_rates:
    for opt_name in opt_names:
        model_file_name = datetime.now(gettz('Europe/Berlin')).strftime(
            '%b%d_%H-%M-%S') + '_' + socket.gethostname() + '-' + str(
            lr) + '-' + str(nEpochs) + '-' + str(wd) + '-' + str(lr) + '-' + str(opt_name)
        log_dir = os.path.join('tensorboard_log/runs', model_file_name)

        # Logging into Tensorboard
        writer = SummaryWriter(log_dir=log_dir)

        # Load Network
        print('Loading model...')
        net = CustomNet(pretrained=False)
        optimizer = choose_optimizer(opt_name, net)
        writer.add_text('lr', str(lr), 1)
        writer.add_text('wd', str(wd), 2)
        # writer.add_text('nAveGrad', nAveGrad, 2)
        # writer.add_text('Optimizer', opt_name, 3)

        x = torch.randn(1, 3, 480, 854)
        y = torch.randn(1, 3, 480, 854)
        writer.add_graph(net, (x, y))
        net.to(device)
        writer.close()
        print('Model loaded')

Does this help?

Best

lanpa (Owner) commented Mar 3, 2019

@BPiepmatz It should be writer.add_text('lr', str(lr), n_iter). I would write:

for n_iter, lr in enumerate(learning_rates):
    writer.add_text('lr', str(lr), n_iter)
ZizhouJia commented Mar 20, 2019

I also have the same problem:
tensorboardX 1.6
pytorch 1.0
tensorboard 1.12
