Hi,
Beautiful work!
Installing the dependencies in a fresh conda environment and running as instructed gives the following error:
Loading base models...
Models loaded! Starting training...
torch.Size([1, 3, 1024, 1024])
Traceback (most recent call last):
  File "train_colab.py", line 144, in <module>
    [sampled_src, sampled_dst], clip_loss = net(sample_z)
  File "/home/user/miniconda3/envs/nada/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/user/dev/StyleGAN-nada/ZSSGAN/model/ZSSGAN.py", line 278, in forward
    clip_loss = torch.sum(torch.stack([self.clip_model_weights[model_name] * self.clip_loss_models[model_name](frozen_img, self.source_class, trainable_img, self.target_class) for model_name in self.clip_model_weights.keys()]))
  File "/home/user/dev/StyleGAN-nada/ZSSGAN/model/ZSSGAN.py", line 278, in <listcomp>
    clip_loss = torch.sum(torch.stack([self.clip_model_weights[model_name] * self.clip_loss_models[model_name](frozen_img, self.source_class, trainable_img, self.target_class) for model_name in self.clip_model_weights.keys()]))
  File "/home/user/miniconda3/envs/nada/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/user/dev/StyleGAN-nada/ZSSGAN/criteria/clip_loss.py", line 299, in forward
    clip_loss += self.lambda_direction * self.clip_directional_loss(src_img, source_class, target_img, target_class)
  File "/home/user/dev/StyleGAN-nada/ZSSGAN/criteria/clip_loss.py", line 181, in clip_directional_loss
    src_encoding = self.get_image_features(src_img)
  File "/home/user/dev/StyleGAN-nada/ZSSGAN/criteria/clip_loss.py", line 109, in get_image_features
    image_features = self.encode_images(img)
  File "/home/user/dev/StyleGAN-nada/ZSSGAN/criteria/clip_loss.py", line 80, in encode_images
    images = self.preprocess(images).to(self.device)
  File "/home/user/miniconda3/envs/nada/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 60, in __call__
    img = t(img)
  File "/home/user/miniconda3/envs/nada/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 163, in __call__
    return F.normalize(tensor, self.mean, self.std, self.inplace)
  File "/home/user/miniconda3/envs/nada/lib/python3.7/site-packages/torchvision/transforms/functional.py", line 201, in normalize
    raise TypeError('tensor is not a torch image.')
TypeError: tensor is not a torch image.
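For context (my own diagnosis, not something confirmed by the repo authors): the shape printed above, torch.Size([1, 3, 1024, 1024]), is a batched 4-D tensor, and torchvision 0.7 and earlier only accepts a single 3-D (C, H, W) tensor inside F.normalize. A minimal sketch of the dimensionality check that fires here:

```python
import torch

# The generator output seen above: a batch of one 1024x1024 RGB image.
batch = torch.rand(1, 3, 1024, 1024)

# Old torchvision (<= 0.7) effectively performed this test inside
# F.normalize and raised "tensor is not a torch image." on failure.
def is_torch_image(t: torch.Tensor) -> bool:
    return t.ndimension() == 3  # accepts (C, H, W) only, no batch dim

print(is_torch_image(batch))     # False -> triggers the TypeError
print(is_torch_image(batch[0]))  # True  -> a single un-batched image passes
```

So the batched input, not the image data itself, is what the old transform rejects.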
Thank you! Upgrading PyTorch and torchvision did the trick. I suggest editing the README; I believe this line is the one that needs updating:
conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=10.2