AttributeError: module 'torch' has no attribute 'bucketize' #7
Comments
@loretoparisi What is your PyTorch version? `torch.bucketize` was newly introduced in PyTorch 1.6, which is not yet stable.
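On PyTorch versions before 1.6, one workaround is to reproduce the bucketing with Python's `bisect` module. This is a hedged pure-Python stand-in operating on flat lists (the function name and sample values are illustrative, not the repository's code); `torch.bucketize` with `right=False` matches `bisect_left`, and `right=True` matches `bisect_right`:

```python
from bisect import bisect_left, bisect_right

def bucketize(values, boundaries, right=False):
    """Pure-Python stand-in for torch.bucketize on flat lists.

    With right=False a value equal to a boundary falls into the lower
    bucket (bisect_left); with right=True into the upper (bisect_right).
    boundaries must be sorted in ascending order.
    """
    locate = bisect_right if right else bisect_left
    return [locate(boundaries, v) for v in values]

print(bucketize([0, 1, 3, 6], [1, 3, 5]))              # [0, 0, 1, 3]
print(bucketize([0, 1, 3, 6], [1, 3, 5], right=True))  # [0, 1, 2, 3]
```

Converting the boundary tensor to a list and bucketing this way is slower than the native op, but it avoids the version requirement.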
@ming024 got it, I was using
Thank you, I will try this way.
@ming024 I have fixed the error using
I'm using CPU.
[UPDATE]

```python
if torch.cuda.is_available():
    forward_transform = F.conv1d(
        input_data.cuda(),
        Variable(self.forward_basis, requires_grad=False).cuda(),
        stride=self.hop_length,
        padding=0).cpu()
else:
    forward_transform = F.conv1d(
        input_data,
        Variable(self.forward_basis, requires_grad=False),
        stride=self.hop_length,
        padding=0).cpu()
```

The problem is that it seems the WaveGlow models do not work on CPU.
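As an aside, the two branches above can be collapsed into one device-agnostic path by moving both tensors to a single device. This is a minimal sketch with made-up shapes, outside the original class (so `self.forward_basis` is replaced by a plain tensor); on recent PyTorch the `Variable` wrapper is unnecessary:

```python
import torch
import torch.nn.functional as F

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative shapes: one 1-channel signal of 1024 samples, and a bank
# of 10 analysis filters of length 256 (hypothetical values).
input_data = torch.randn(1, 1, 1024)
forward_basis = torch.randn(10, 1, 256)
hop_length = 64

# One code path for both CPU and GPU: move inputs to `device`,
# run the convolution, then bring the result back to the CPU.
forward_transform = F.conv1d(
    input_data.to(device),
    forward_basis.to(device),
    stride=hop_length,
    padding=0,
).cpu()

print(forward_transform.shape)  # torch.Size([1, 10, 13])
```

With `padding=0`, the output length is `(1024 - 256) // 64 + 1 = 13` frames.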
And if I do this in `get_melgan()`:

```python
def get_melgan():
    if torch.cuda.is_available():
        melgan = torch.hub.load('seungwonpark/melgan', 'melgan')
    else:
        # nvidia's waveglow models explicitly don't work on cpu.
        melgan = torch.hub.load('seungwonpark/melgan', 'melgan', map_location=torch.device('cpu'))
    melgan.eval()
    return melgan
```

I now get
So an alternative approach is to pass `pretrained=False` and load the checkpoint manually:

```python
def get_melgan():
    if torch.cuda.is_available():
        melgan = torch.hub.load('seungwonpark/melgan', 'melgan')
        melgan.eval()
    else:
        # nvidia's waveglow models explicitly don't work on cpu.
        melgan = torch.hub.load('seungwonpark/melgan', 'melgan', pretrained=False)
        checkpoint = torch.hub.load_state_dict_from_url('file:///root/.cache/torch/hub/checkpoints/nvidia_tacotron2_LJ11_epoch6400.pt', map_location="cpu")
        #state_dict = {key.replace("module.", ""): value for key, value in checkpoint["state_dict"].items()}
        melgan.load_state_dict(checkpoint)
        melgan.eval()
    return melgan
```

but it does not correctly unwrap the checkpoint's state dict.
@loretoparisi it is because in the MelGAN implementation the weight normalization parameters are removed when the model is set to inference mode. You can see this colab for a simple solution.
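The key mismatch behind this can be seen directly: applying `torch.nn.utils.weight_norm` to a layer replaces `weight` in the state dict with `weight_g`/`weight_v`, and `remove_weight_norm` puts `weight` back. A minimal sketch with an arbitrary layer (not MelGAN's actual sizes):

```python
import torch
from torch import nn
from torch.nn.utils import weight_norm, remove_weight_norm

# A weight-normalized layer stores weight_g / weight_v instead of weight.
conv = weight_norm(nn.Conv1d(2, 4, kernel_size=3))
keys_with_wn = sorted(conv.state_dict().keys())
print(keys_with_wn)     # ['bias', 'weight_g', 'weight_v']

# After remove_weight_norm (as in MelGAN's inference mode), the plain
# weight parameter is restored and weight_g / weight_v disappear.
remove_weight_norm(conv)
keys_without_wn = sorted(conv.state_dict().keys())
print(keys_without_wn)  # ['bias', 'weight']
```

So a checkpoint saved with weight-norm keys cannot be loaded into a model after `remove_weight_norm` has already run; the state dict has to be loaded before switching to inference mode.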
closed #7
I get the following error:

I'm running on CPU, and I had to modify `get_FastSpeech2` like this, to set `map_location` to the CPU device.
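The `map_location` fix can be sketched in isolation. This is a minimal, self-contained example (the temporary file, tensor values, and keys are made up; it is not the repository's actual `get_FastSpeech2`):

```python
import os
import tempfile

import torch

# Stand-in for a trained checkpoint (illustrative contents).
state = {"step": 1000, "weights": torch.ones(3)}
path = os.path.join(tempfile.mkdtemp(), "checkpoint.pth.tar")
torch.save(state, path)

# map_location="cpu" remaps any CUDA storages onto the CPU at load
# time, so checkpoints trained on GPU can be deserialized on a
# CPU-only machine.
checkpoint = torch.load(path, map_location="cpu")
print(checkpoint["step"])  # 1000
```

Passing `map_location=torch.device('cpu')` is equivalent to the string form used here.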