This repository was archived by the owner on Oct 11, 2023. It is now read-only.

RuntimeError when training #47

@matheushent

Description


I just ran the code for training and got the following traceback:

Traceback (most recent call last):
  File "train.py", line 64, in <module>
    model.optimize_parameters()
  File "C:\Users\mathe\projects\SwapNet\models\warp_model.py", line 177, in optimize_parameters
    super().optimize_parameters()
  File "C:\Users\mathe\projects\SwapNet\models\base_gan.py", line 198, in optimize_parameters
    self.backward_D()
  File "C:\Users\mathe\projects\SwapNet\models\warp_model.py", line 117, in backward_D
    self.loss_D_fake = self.criterion_GAN(pred_fake, False)
  File "C:\Users\mathe\projects\SwapNet\modules\loss.py", line 119, in __call__
    target_tensor = self.get_target_tensor(prediction, target_is_real)
  File "C:\Users\mathe\projects\SwapNet\modules\loss.py", line 101, in get_target_tensor
    target_tensor = GANLoss.rand_between(low, high).to(
  File "C:\Users\mathe\projects\SwapNet\modules\loss.py", line 75, in rand_between
    return rand_func(1) * (high - low) + low
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

It seems some tensors are not being allocated on the GPU. Any idea how to solve this?
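
From the traceback, my guess is that rand_func(1) inside GANLoss.rand_between creates a tensor on the CPU while low and high are already on cuda:0, so the arithmetic mixes devices. I also notice that get_target_tensor already calls .to(...) on the result, but the error is raised inside rand_between itself, before that .to() ever runs. Below is a minimal sketch of the kind of fix I have in mind; the signature and the rand_func name are just my guesses from the traceback, not the repository's actual code:

import torch

# Sketch only: the real GANLoss.rand_between in modules/loss.py may differ.
def rand_between(low, high, rand_func=torch.rand):
    low = torch.as_tensor(low)
    high = torch.as_tensor(high)
    # rand_func(1) is created on the CPU by default; move it to the device
    # that low/high already occupy before mixing them in arithmetic.
    r = rand_func(1).to(low.device)
    return r * (high - low) + low

# Example: with low/high on the GPU, the result stays on the GPU and would
# match pred_fake's device (guarded so this also runs on CPU-only machines).
device = "cuda:0" if torch.cuda.is_available() else "cpu"
low = torch.tensor(0.0, device=device)
high = torch.tensor(0.1, device=device)
print(rand_between(low, high).device)

I have not verified which of the tensors involved is actually on the CPU, so this is only a hypothesis.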
