AttributeError: module 'torch' has no attribute 'bucketize' #7

Closed
loretoparisi opened this issue Jul 23, 2020 · 6 comments

@loretoparisi

I get the following error:

root@75adae8f35d1:/app# python3 synthesize.py --step 300000
|{DH AH0 N EY1 SH AH0 N Z T UH1 R IH2 Z AH0 M M IH1 N AH0 S T ER0 HH AE1 Z AO1 L S OW0 EH0 N K ER1 IH0 JH D AO2 S T R EY1 L Y AH0 N Z T UW1 T EY1 K DH EH1 R HH AA1 L AH0 D EY2 Z W IH0 DH IH1 N DH AH0 K AH1 N T R IY0 DH IH1 S Y IH1 R} |
Traceback (most recent call last):
  File "synthesize.py", line 94, in <module>
    synthesize(model, text, sentence, prefix='step_{}'.format(args.step))
  File "synthesize.py", line 48, in synthesize
    mel, mel_postnet, duration_output, f0_output, energy_output = model(text, src_pos)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 143, in forward
    return self.module(*inputs, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/app/fastspeech2.py", line 33, in forward
    encoder_output, d_target, p_target, e_target, max_length)
  File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/app/modules.py", line 47, in forward
    pitch_embedding = self.pitch_embedding(torch.bucketize(pitch_prediction, self.pitch_bins))
AttributeError: module 'torch' has no attribute 'bucketize'

I'm running on CPU, and I had to modify get_FastSpeech2 like this:

def get_FastSpeech2(num):
    checkpoint_path = os.path.join(hp.checkpoint_path, "checkpoint_{}.pth.tar".format(num))
    model = nn.DataParallel(FastSpeech2())
    if torch.cuda.is_available():
        model.load_state_dict(torch.load(checkpoint_path)['model'])
    else:
        model.load_state_dict(torch.load(checkpoint_path, map_location=torch.device('cpu'))['model'])
    model.requires_grad = False
    model.eval()
    return model

to set map_location to the CPU device.


ming024 commented Jul 24, 2020

@loretoparisi What is your PyTorch version? torch.bucketize was newly introduced in PyTorch 1.6, which is not stable yet.
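For anyone stuck on an older PyTorch: torch.bucketize(values, boundaries) returns, for each value, the index of the bucket it falls into among the sorted boundaries. Its semantics can be sketched with the standard library's bisect module (for tensors, numpy.searchsorted is the usual fallback); the function below is a hypothetical stand-in, not part of the repo:

```python
from bisect import bisect_left, bisect_right

def bucketize(values, boundaries, right=False):
    """Pure-Python sketch of torch.bucketize semantics.

    right=False matches torch's default: the returned index i
    satisfies boundaries[i-1] < v <= boundaries[i] (bisect_left);
    right=True gives boundaries[i-1] <= v < boundaries[i].
    """
    fn = bisect_right if right else bisect_left
    return [fn(boundaries, v) for v in values]
```

For example, bucketize([0, 2, 3, 8, 10], [1, 3, 5, 7, 9]) gives [0, 1, 1, 4, 5].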


loretoparisi commented Jul 24, 2020

@ming024 got it, I was using 1.5.0, but someone told me I have to install:

RUN pip install --pre torch==1.6.0.dev20200428 torchvision -f https://download.pytorch.org/whl/nightly/cu101/torch_nightly.html

Thank you, I will try this way.

@loretoparisi

@ming024 I have fixed the error using 1.6.0dev, but now I'm getting

Traceback (most recent call last):
  File "synthesize.py", line 94, in <module>
    synthesize(model, text, sentence, prefix='step_{}'.format(args.step))
  File "synthesize.py", line 61, in synthesize
    Audio.tools.inv_mel_spec(mel_postnet, os.path.join(hp.test_path, '{}_griffin_lim_{}.wav'.format(prefix, sentence)))
  File "/app/audio/tools.py", line 63, in inv_mel_spec
    spec_from_mel[:, :, :-1]), _stft.stft_fn, griffin_iters)
  File "/app/audio/audio_processing.py", line 74, in griffin_lim
    _, angles = stft_fn.transform(signal)
  File "/app/audio/stft.py", line 66, in transform
    input_data.cuda(),
  File "/usr/local/lib/python3.7/site-packages/torch/cuda/__init__.py", line 150, in _lazy_init
    _check_driver()
  File "/usr/local/lib/python3.7/site-packages/torch/cuda/__init__.py", line 54, in _check_driver
    http://www.nvidia.com/Download/index.aspx""")
AssertionError: 
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx

I'm using CPU.


loretoparisi commented Jul 24, 2020

[UPDATE]
OK, I have fixed the previous error in stft.py, inside def transform(self, input_data), like this:

if torch.cuda.is_available():
    forward_transform = F.conv1d(
        input_data.cuda(),
        Variable(self.forward_basis, requires_grad=False).cuda(),
        stride=self.hop_length,
        padding=0).cpu()
else:
    forward_transform = F.conv1d(
        input_data,
        Variable(self.forward_basis, requires_grad=False),
        stride=self.hop_length,
        padding=0).cpu()
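As a side note, the two duplicated branches can be collapsed by choosing the device once up front; a sketch (the function name and signature are hypothetical, and Variable is unnecessary on modern PyTorch, where tensors work directly):

```python
import torch
import torch.nn.functional as F

def transform_conv(input_data, forward_basis, hop_length):
    # Device-agnostic version of the branch above: move both tensors
    # to one device instead of duplicating the F.conv1d call for the
    # CUDA and CPU cases.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    return F.conv1d(
        input_data.to(device),
        forward_basis.to(device),
        stride=hop_length,
        padding=0,
    ).cpu()
```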

The problem now is that loading the vocoder (MelGAN) through torch.hub seems to fail:

root@b832c651a4bc:/app# python synthesize.py --step 300000
|{DH AH0 N EY1 SH AH0 N Z T UH1 R IH2 Z AH0 M M IH1 N AH0 S T ER0 HH AE1 Z AO1 L S OW0 EH0 N K ER1 IH0 JH D AO2 S T R EY1 L Y AH0 N Z T UW1 T EY1 K DH EH1 R HH AA1 L AH0 D EY2 Z W IH0 DH IH1 N DH AH0 K AH1 N T R IY0 DH IH1 S Y IH1 R} |
Using cache found in /root/.cache/torch/hub/seungwonpark_melgan_master
Traceback (most recent call last):
  File "synthesize.py", line 94, in <module>
    synthesize(model, text, sentence, prefix='step_{}'.format(args.step))
  File "synthesize.py", line 64, in synthesize
    melgan = utils.get_melgan()
  File "/app/utils.py", line 132, in get_melgan
    melgan = torch.hub.load('seungwonpark/melgan', 'melgan')
  File "/usr/local/lib/python3.7/site-packages/torch/hub.py", line 362, in load
    model = entry(*args, **kwargs)
  File "/root/.cache/torch/hub/seungwonpark_melgan_master/hubconf.py", line 19, in melgan
    progress=progress)
  File "/usr/local/lib/python3.7/site-packages/torch/hub.py", line 502, in load_state_dict_from_url
    return torch.load(cached_file, map_location=map_location)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 580, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 760, in _legacy_load
    result = unpickler.load()
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 716, in persistent_load
    deserialized_objects[root_key] = restore_location(obj, location)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 174, in default_restore_location
    result = fn(storage, location)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 150, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 134, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

and if I do this in utils.py:

def get_melgan():
    if torch.cuda.is_available():
        melgan = torch.hub.load('seungwonpark/melgan', 'melgan')
    else:
        # nvidia's waveglow models explicitly don't work on cpu.
        melgan = torch.hub.load('seungwonpark/melgan', 'melgan', map_location=torch.device('cpu'))
    melgan.eval()
    return melgan

I now get

root@b832c651a4bc:/app# python synthesize.py --step 300000
|{DH AH0 N EY1 SH AH0 N Z T UH1 R IH2 Z AH0 M M IH1 N AH0 S T ER0 HH AE1 Z AO1 L S OW0 EH0 N K ER1 IH0 JH D AO2 S T R EY1 L Y AH0 N Z T UW1 T EY1 K DH EH1 R HH AA1 L AH0 D EY2 Z W IH0 DH IH1 N DH AH0 K AH1 N T R IY0 DH IH1 S Y IH1 R} |
Using cache found in /root/.cache/torch/hub/seungwonpark_melgan_master
Traceback (most recent call last):
  File "synthesize.py", line 94, in <module>
    synthesize(model, text, sentence, prefix='step_{}'.format(args.step))
  File "synthesize.py", line 64, in synthesize
    melgan = utils.get_melgan()
  File "/app/utils.py", line 133, in get_melgan
    melgan = torch.hub.load('seungwonpark/melgan', 'melgan', map_location='cpu')
  File "/usr/local/lib/python3.7/site-packages/torch/hub.py", line 362, in load
    model = entry(*args, **kwargs)
TypeError: melgan() got an unexpected keyword argument 'map_location'

So an alternative approach to passing map_location was this one:

def get_melgan():
    if torch.cuda.is_available():
        melgan = torch.hub.load('seungwonpark/melgan', 'melgan')
        melgan.eval()
    else:
        # nvidia's waveglow models explicitly don't work on cpu.
        melgan = torch.hub.load('seungwonpark/melgan', 'melgan', pretrained=False)
        checkpoint = torch.hub.load_state_dict_from_url('file:///root/.cache/torch/hub/checkpoints/nvidia_tacotron2_LJ11_epoch6400.pt', map_location="cpu")
        #state_dict = {key.replace("module.", ""): value for key, value in checkpoint["state_dict"].items()}
        melgan.load_state_dict(checkpoint)
        melgan.eval()
    return melgan

but it does not correctly unwrap the checkpoint's state dict.
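The commented-out line in the snippet above is close to what is needed. A minimal sketch of the unwrapping, with the caveat that the "model_g" key is an assumption about this checkpoint's layout (inspect checkpoint.keys() to confirm the real key):

```python
def unwrap_state_dict(checkpoint, key="model_g"):
    # Pull the generator weights out of a nested checkpoint dict,
    # falling back to the checkpoint itself if it is already flat,
    # and strip any leading 'module.' prefixes left by DataParallel.
    state_dict = checkpoint.get(key, checkpoint)
    prefix = "module."
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }
```

melgan.load_state_dict(unwrap_state_dict(checkpoint)) would then replace the failing load_state_dict call, assuming the key guess is right.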


ming024 commented Jul 25, 2020

@loretoparisi it is because, in the MelGAN implementation, the weight normalization parameters are removed when the model is set to inference mode:
https://github.com/seungwonpark/melgan/blob/aca59909f6dd028ec808f987b154535a7ca3400c/hubconf.py#L22

You can see this colab for a simple solution.
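The mismatch can be reproduced in miniature: weight normalization stores a layer's weight as two parameters (weight_g, weight_v), and remove_weight_norm folds them back into a plain weight, so a state dict saved in one form cannot be loaded into a module in the other. A small demonstration on a single conv layer:

```python
import torch.nn as nn
from torch.nn.utils import weight_norm, remove_weight_norm

conv = weight_norm(nn.Conv1d(1, 2, 3))
# With weight norm applied, 'weight' is split into magnitude/direction.
names_before = set(dict(conv.named_parameters()))

remove_weight_norm(conv)  # what MelGAN's inference mode triggers
names_after = set(dict(conv.named_parameters()))

print(sorted(names_before))  # includes 'weight_g' and 'weight_v'
print(sorted(names_after))   # back to a plain 'weight'
```

This is why the state dict has to be loaded before the model is switched to inference mode.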


ming024 commented Jul 26, 2020

ming024 closed this as completed Jul 26, 2020