
SummaryWriter.add_graph borks with simple example #38614

Open
sallamander opened this issue May 16, 2020 · 0 comments
Labels
oncall: visualization Related to visualization in PyTorch, e.g., tensorboard

Comments


sallamander commented May 16, 2020

🐛 Bug

Something funky is happening with SummaryWriter.add_graph. I was trying to pare down some code from a private repo into a toy example that reproduces our issue, and ran into a separate issue: a RuntimeError was thrown whose message told me to report it as a bug 🤷. Here's a picture of the stack trace:

[screenshot of the stack trace]

To Reproduce

Code snippet:

import numpy as np
import torch
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from torchvision.models.segmentation import deeplabv3_resnet50


class DeepLabResNet(torch.nn.Module):
    """DeepLab V3 network with ResNet backbone"""

    def __init__(self):
        """Init"""

        super(DeepLabResNet, self).__init__()

        self.deep_lab = deeplabv3_resnet50(
            pretrained=False, progress=True, num_classes=2
        )
        self.softmax = torch.nn.Sigmoid()

    # pylint: disable = arguments-differ
    def forward(self, x):
        """Network forward pass

        :param torch.Tensor x: tensor of shape (N, C1, H1, W1)
        :return torch.Tensor x: tensor of shape (N, C2, H2, W2)
        """

        x = self.deep_lab(x)
        output = x['out']
        return self.softmax(output)


class MyToyDataset(torch.utils.data.Dataset):
    """Toy dataset to train with"""
    
    def __getitem__(self, idx):
        """Return a mock input / target pair"""
        
        mock_input = torch.Tensor(np.ones((3, 256, 256)))
        mock_target = torch.Tensor(np.ones((3, 256, 256)))
        return mock_input, mock_target
    
    def __len__(self):
        """Return the length of the dataset"""
        
        return 100

network = DeepLabResNet()
gpus = [0, 1]
network = network.to(torch.device(gpus[0]))
network = torch.nn.DataParallel(network, device_ids=gpus)

# optimizer = torch.optim.Adam(network.parameters())

dataset = MyToyDataset()
data_loader = DataLoader(dataset=dataset)
input_target_pair = next(iter(data_loader))

writer = SummaryWriter('~/tensorboard_stuffs')
writer.add_graph(network, input_target_pair[0])

The error occurs at the writer.add_graph call; if that line is commented out, everything runs fine.

Expected behavior

No error should be thrown.

Environment

[screenshot of the environment / collect_env output]

Additional context

I was actually trying to recreate an issue I was having where, if I passed multiple GPUs into DataParallel, writer.add_graph would throw the error Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient, but if I only passed in one GPU, it would not. My original multi-GPU error seems related to #30459, #28206, and #24904, although I'm not sure whether the error from this code snippet is related.
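For contrast, here is a minimal CPU-only sketch (my own assumption, not taken from the report above) where add_graph succeeds on a plain module without the DataParallel wrapper. The tiny Sequential model is a hypothetical stand-in so the snippet runs without GPUs or torchvision; when a model is wrapped in DataParallel, tracing the unwrapped module (network.module) is one way to keep the trace on a single device:

```python
import tempfile

import torch
from torch.utils.tensorboard import SummaryWriter

# Tiny stand-in model (hypothetical) so the sketch runs anywhere on CPU.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 2, kernel_size=3, padding=1),
    torch.nn.Sigmoid(),
)

log_dir = tempfile.mkdtemp()
writer = SummaryWriter(log_dir)

dummy_input = torch.ones(1, 3, 8, 8)
# Trace the plain (unwrapped) module on a single device.
writer.add_graph(model, dummy_input)
writer.close()
```

If the model were wrapped, passing network.module instead of network to add_graph would follow the same single-device pattern, though I haven't verified that this sidesteps the multi-GPU error above.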

cc @suo

@jamesr66a jamesr66a added the oncall: jit Add this issue/PR to JIT oncall triage queue label May 17, 2020
@suo suo added the oncall: visualization Related to visualization in PyTorch, e.g., tensorboard label May 19, 2020
@suo suo removed the oncall: jit Add this issue/PR to JIT oncall triage queue label Jun 4, 2020